Silting objects, simple-minded collections, t-structures and co-t-structures for finite-dimensional algebras

Bijective correspondences are established between (1) silting objects, (2) simple-minded collections, (3) bounded t-structures with length heart and (4) bounded co-t-structures. These correspondences are shown to commute with mutations. The results are valid for finite-dimensional algebras. A concrete example is given to illustrate how these correspondences help to compute the space of Bridgeland's stability conditions.

1. Introduction

Let Λ be a finite-dimensional associative algebra. Fundamental objects of study in the representation theory of Λ are the projective modules, the simple modules and the category of all (finite-dimensional) Λ-modules. Various structural concepts have been introduced that include one of these classes of objects as particular instances. In this article, four such concepts are related by explicit bijections. Moreover, these bijections are shown to commute with the basic operation of mutation and to preserve partial orders.

These four concepts may be based on two different general points of view, either considering particular generators of categories ((1) and (2)) or considering structures on categories that identify particular subcategories ((3) and (4)):

(1) Focussing on objects that generate categories, the theory of Morita equivalences has been extended to tilting or derived equivalences. In this way, projective generators are examples of tilting modules, which have been generalised further to silting objects (which are allowed to have negative self-extensions).

(2) Another, and different, natural choice of 'generators' of a module category is the set of simple modules (up to isomorphism). In the context of derived or stable equivalences, this set is included in the concept of simple-minded system or simple-minded collection.

(3) Starting with a triangulated category and looking for particular subcategories, t-structures have been defined so as to provide abelian categories as their hearts. The finite-dimensional Λ-modules form the heart of some t-structure in the bounded derived category D^b(mod Λ).

(4) Choosing as triangulated category the homotopy category K^b(proj Λ), one considers co-t-structures. The additive category proj Λ occurs as the co-heart of some co-t-structure in K^b(proj Λ).

The first main result of this article is:

Theorem (6.1). Let Λ be a finite-dimensional algebra over a field K. There are one-to-one correspondences between
(1) equivalence classes of silting objects in K^b(proj Λ),
(2) equivalence classes of simple-minded collections in D^b(mod Λ),
(3) bounded t-structures of D^b(mod Λ) with length heart,
(4) bounded co-t-structures of K^b(proj Λ).

Here two sets of objects in a category are equivalent if they additively generate the same subcategory. A common feature of all four concepts is that they allow for comparisons, often by equivalences. In particular, each of the four structures to be related comes with a basic operation, called mutation, which produces a new such structure from a given one. Moreover, on each of the four structures there is a partial order. All the bijections in Theorem 6.1 enjoy the following naturality properties:

Theorem (7.12). Each of the bijections between the four structures (1), (2), (3) and (4) commutes with the respective operation of mutation.

Theorem (7.13). Each of the bijections between the four structures (1), (2), (3) and (4) preserves the respective partial orders.
The four concepts are crucial in representation theory, geometry and topology. They are also closely related to fundamental concepts in cluster theory such as clusters ([20]), c-matrices and g-matrices ([21,40]) and cluster-tilting objects ([7]). We refer to the survey paper [16] for more details. A concrete example to be given at the end of the article demonstrates one practical use of these bijections and their properties.

Finally we give some remarks on the literature. For path algebras of Dynkin quivers, Keller and Vossieck [33] have already given a bijection between bounded t-structures and silting objects. The bijection between silting objects and t-structures with length heart has been established by Keller and Nicolás [32] for homologically smooth non-positive dg algebras, and by Assem, Souto and Trepode [5] for hereditary algebras.

2. Notations and preliminaries

2.1. Notations. Throughout, K will be a field. All algebras, modules, vector spaces and categories are over the base field K, and D = Hom_K(?, K) denotes the K-dual. By abuse of notation, we will denote by Σ the suspension functors of all the triangulated categories. For a category C, we denote by Hom_C(X, Y) the morphism space from X to Y, where X and Y are two objects of C. We will omit the subscript and write Hom(X, Y) when it does not cause confusion. For S a set of objects or a subcategory of C, call ⊥S = {X ∈ C | Hom(X, S) = 0 for all S ∈ S} and S⊥ = {X ∈ C | Hom(S, X) = 0 for all S ∈ S} the left and right perpendicular category of S, respectively.

Let C be an additive category and S a set of objects or a subcategory of C. Let Add(S) (respectively, add(S)) denote the smallest full subcategory of C containing all objects of S and stable under taking direct summands and arbitrary coproducts (respectively, finite coproducts). The category add(S) will be called the additive closure of S. If further C is abelian or triangulated, the extension closure of S is the smallest subcategory of C containing S and stable under taking extensions. Assume that C is triangulated and let thick(S) denote the smallest triangulated subcategory of C containing objects in S and stable under taking direct summands. We say that S is a set of generators of C, or that C is generated by S, when C = thick(S).

The categories mod Λ, D^b(mod Λ) and K^b(proj Λ) are Krull–Schmidt categories. An object M of mod Λ (respectively, D^b(mod Λ), K^b(proj Λ)) is said to be basic if every indecomposable direct summand of M has multiplicity 1. The finite-dimensional algebra Λ is said to be basic if the free module of rank 1 is basic in mod Λ (equivalently, in D^b(mod Λ) or K^b(proj Λ)).

For a differential graded (= dg) algebra A, let C(A) denote the category of (right) dg modules over A and K(A) the homotopy category. Let D(A) denote the derived category of dg A-modules, i.e. the triangle quotient of K(A) by acyclic dg A-modules, cf. [29,30], and let D_fd(A) denote its full subcategory of dg A-modules whose total cohomology is finite-dimensional. Let per(A) denote the smallest triangulated subcategory of D(A) containing A and stable under taking direct summands.

For two dg A-modules M and N, let Hom_A(M, N) denote the complex whose degree n component consists of those A-linear maps from M to N which are homogeneous of degree n, and whose differential takes a homogeneous map f of degree n to d_N ∘ f − (−1)^n f ∘ d_M. A dg A-module M is called K-projective if Hom_A(M, N) is acyclic whenever N is an acyclic dg A-module. For example, A_A, the free dg A-module of rank 1, is K-projective, because Hom_A(A, N) = N. Dually, one defines K-injective dg modules, and the K-dual D(A) of the left regular dg module is K-injective.
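In displayed form, the Hom-complex just described and its differential read:

\[
\mathcal{H}om_A(M,N)^n \;=\; \{\, f \colon M \to N \mid f \ \text{is $A$-linear and homogeneous of degree } n \,\},
\qquad
d(f) \;=\; d_N \circ f \;-\; (-1)^n\, f \circ d_M .
\]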
For two dg A-modules M and N such that M is K-projective or N is K-injective, we have Hom_{D(A)}(M, Σ^n N) ≅ H^n(Hom_A(M, N)) for all integers n. Let A and B be two dg algebras. Then a triangle equivalence between D(A) and D(B) restricts to a triangle equivalence between per(A) and per(B) and also to a triangle equivalence between D_fd(A) and D_fd(B). If A is a finite-dimensional algebra viewed as a dg algebra concentrated in degree 0, then D_fd(A) coincides with D^b(mod A) and per(A) coincides with K^b(proj A).

2.3. The Nakayama functor. Let Λ be a finite-dimensional algebra. The Nakayama functor ν_{mod Λ} is defined as ν_{mod Λ} = ? ⊗_Λ D(Λ), and the inverse Nakayama functor ν^{−1}_{mod Λ} is its right adjoint ν^{−1}_{mod Λ} = Hom_Λ(D(Λ), ?). They restrict to quasi-inverse equivalences between proj Λ and inj Λ. The derived functors of ν_{mod Λ} and ν^{−1}_{mod Λ}, denoted by ν and ν^{−1}, restrict to quasi-inverse triangle equivalences between K^b(proj Λ) and K^b(inj Λ). When Λ is self-injective, they restrict to quasi-inverse triangle auto-equivalences of D^b(mod Λ). For M ∈ K^b(proj Λ) and N ∈ D^b(mod Λ) there is an isomorphism D Hom(M, N) ≅ Hom(N, νM), which is natural in M and N. When K^b(proj Λ) coincides with K^b(inj Λ) (that is, when Λ is Gorenstein), it has Auslander–Reiten triangles and the Auslander–Reiten translation is τ = Σ^{−1}ν.

3. The four concepts

In this section we introduce silting objects, simple-minded collections, t-structures and co-t-structures. Let C be a triangulated category with suspension functor Σ.

3.1. Silting objects. A subcategory M of C is called a silting subcategory [33,1] if it is stable under taking direct summands, generates C (i.e. C = thick(M)) and satisfies Hom(M, Σ^m N) = 0 for m > 0 and M, N ∈ M.

Theorem 3.1 ([1, Theorem 2.27]). Assume that C is Krull–Schmidt and has a silting subcategory M. Then the Grothendieck group of C is free and its rank is equal to the cardinality of the set of isomorphism classes of indecomposable objects of M.

An object M of C is called a silting object if add M is a silting subcategory of C. This notion was introduced by Keller and Vossieck in [33] to study t-structures on the bounded derived category of representations over a Dynkin quiver. Recently it has also been studied by Wei [47] (who uses the terminology semi-tilting complexes) from the perspective of classical tilting theory. A tilting object is a silting object M such that Hom(M, Σ^m M) = 0 for m < 0. For an algebra Λ, a tilting object in K^b(proj Λ) is called a tilting complex in the literature. For example, the free module of rank 1 is a tilting object in K^b(proj Λ).

Assume that Λ is finite-dimensional. Theorem 3.1 implies that (a) any silting subcategory of K^b(proj Λ) is the additive closure of a silting object, and (b) any two basic silting objects have the same number of indecomposable direct summands. We will rederive (b) as a corollary of the existence of a certain derived equivalence (Corollary 5.1).

3.2. Simple-minded collections. A collection X_1, . . . , X_r of objects of C is said to be simple-minded if for i, j = 1, . . . , r
· Hom(X_i, Σ^m X_j) = 0 for all m < 0,
· End(X_i) is a division algebra and Hom(X_i, X_j) vanishes for i ≠ j,
· X_1, . . . , X_r generate C (i.e. C = thick(X_1, . . . , X_r)).
Simple-minded collections are variants of simple-minded systems in [36] and were first studied by Rickard [43] in the context of derived equivalences of symmetric algebras. For a finite-dimensional algebra Λ, a complete collection of pairwise non-isomorphic simple modules is a simple-minded collection in D^b(mod Λ). A natural question is: do any two simple-minded collections have the same collection of endomorphism algebras?

3.3. t-structures. A t-structure on C is a pair (C^{≤0}, C^{≥0}) of strict (that is, closed under isomorphisms) and full subcategories of C such that
· ΣC^{≤0} ⊆ C^{≤0} and Σ^{−1}C^{≥0} ⊆ C^{≥0},
· Hom(M, Σ^{−1}N) = 0 for M ∈ C^{≤0} and N ∈ C^{≥0},
· every object X of C fits into a triangle M → X → N → ΣM with M ∈ C^{≤0} and N ∈ Σ^{−1}C^{≥0}.
The two subcategories C^{≤0} and C^{≥0} are often called the aisle and the co-aisle of the t-structure respectively. The heart C^{≤0} ∩ C^{≥0} is always abelian. Moreover, Hom(M, Σ^m N) vanishes for any two objects M and N in the heart and for any m < 0.
The t-structure (C^{≤0}, C^{≥0}) is said to be bounded if ∪_{m∈Z} Σ^m C^{≤0} = C = ∪_{m∈Z} Σ^m C^{≥0}. A bounded t-structure is one of the two ingredients of a Bridgeland stability condition [15]. A typical example of a t-structure is the pair (D^{≤0}, D^{≥0}) for the derived category D(Mod Λ) of an (ordinary) algebra Λ, where D^{≤0} consists of complexes with vanishing cohomologies in positive degrees, and D^{≥0} consists of complexes with vanishing cohomologies in negative degrees. This t-structure restricts to a bounded t-structure of D^b(mod Λ) whose heart is mod Λ, which is a length category, i.e. every object in it has finite length.

The following lemma is well-known.

Lemma 3.3. Let (C^{≤0}, C^{≥0}) be a bounded t-structure on C with heart A.
(a) The embedding A → C induces an isomorphism K_0(A) → K_0(C) of Grothendieck groups.
(b) C^{≤0} respectively C^{≥0} is the extension closure of Σ^m A for m ≥ 0 respectively for m ≤ 0.
Assume further that A is a length category with simple objects {X_i | i ∈ I}.

3.4. Co-t-structures. According to [41], a co-t-structure on C (or weight structure in [12]) is a pair (C_{≥0}, C_{≤0}) of strict and full subcategories of C such that
· both C_{≥0} and C_{≤0} are additive and closed under taking direct summands,
· Σ^{−1}C_{≥0} ⊆ C_{≥0} and ΣC_{≤0} ⊆ C_{≤0},
· Hom(M, ΣN) = 0 for M ∈ C_{≥0} and N ∈ C_{≤0},
· every object X of C fits into a triangle M → X → N → ΣM with M ∈ C_{≥0} and N ∈ ΣC_{≤0}.
The co-heart is defined as the intersection C_{≥0} ∩ C_{≤0}. This is usually not an abelian category. For any two objects M and N in the co-heart, the morphism space Hom(M, Σ^m N) vanishes for any m > 0. The co-t-structure (C_{≥0}, C_{≤0}) is said to be bounded [12] if ∪_{m∈Z} Σ^m C_{≥0} = C = ∪_{m∈Z} Σ^m C_{≤0}.

A bounded co-t-structure is one of the two ingredients of a Jørgensen–Pauksztello co-stability condition [27]. A typical example of a co-t-structure is the pair (K_{≥0}, K_{≤0}) for the homotopy category K^b(proj Λ) of a finite-dimensional algebra Λ, where K_{≥0} consists of complexes which are homotopy equivalent to a complex P with P^i = 0 for i < 0, and K_{≤0} consists of complexes which are homotopy equivalent to a complex P with P^i = 0 for i > 0. The co-heart of this co-t-structure is proj Λ.

Lemma 3.4. Let (C_{≥0}, C_{≤0}) be a bounded co-t-structure on C with co-heart A. Then A is a silting subcategory of C.

Proof. For the convenience of the reader we give a proof. It suffices to show that C = thick(A). Let M be an object of C. Since the co-t-structure is bounded, there are integers m ≥ n such that M ∈ Σ^m C_{≥0} ∩ Σ^n C_{≤0}. Up to suspension and cosuspension we may assume that m = 0. If n = 0, then M ∈ A. Suppose n < 0. Then there exists a triangle which reduces the claim for M to objects of smaller width m − n, and one concludes by induction. √

Proposition 3.5 ([31]). Let A be a silting subcategory of C. Let C_{≤0} respectively C_{≥0} be the extension closure of Σ^m A for m ≥ 0 respectively for m ≤ 0. Then (C_{≥0}, C_{≤0}) is a bounded co-t-structure on C with co-heart A.

4. Finite-dimensional non-positive dg algebras

In this section we study derived categories of non-positive dg algebras, i.e. dg algebras A = ⊕_{i∈Z} A^i with A^i = 0 for i > 0, especially finite-dimensional non-positive dg algebras, i.e. non-positive dg algebras which, as vector spaces, are finite-dimensional. These results will be used in Sections 5.1 and 5.4.

Non-positive dg algebras are closely related to silting objects. A triangulated category is said to be algebraic if it is triangle equivalent to the stable category of a Frobenius category. Let C be an algebraic triangulated category with silting object M, and let A′ denote the dg endomorphism algebra of M, so that there are isomorphisms H^m(A′) ≅ Hom(M, Σ^m M) for all m ∈ Z. Since M is a silting object, A′ has vanishing cohomologies in positive degrees. Therefore, if A = τ_{≤0}A′ is the standard truncation at position 0, then the embedding A ↪ A′ is a quasi-isomorphism. It follows that there is a composite triangle equivalence C ≃ per(A).

In the sequel of this section we assume that A is a finite-dimensional non-positive dg algebra.
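Explicitly, the degreewise description of the truncation τ_{≤0} used above is the following (a standard fact, recorded here for convenience):

\[
(\tau_{\le 0} A')^i \;=\;
\begin{cases}
A'^i, & i < 0,\\
\ker(d \colon A'^0 \to A'^1), & i = 0,\\
0, & i > 0,
\end{cases}
\]

so that τ_{≤0}A′ is a dg subalgebra of A′, and the inclusion τ_{≤0}A′ ↪ A′ is a quasi-isomorphism precisely when H^i(A′) = 0 for all i > 0.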
The 0-th cohomology Ā = H^0(A) of A is a finite-dimensional K-algebra. Let Mod Ā and mod Ā denote the category of (right) modules over Ā and its subcategory consisting of the finite-dimensional modules. Let π : A → Ā be the canonical projection. We view Mod Ā as a subcategory of C(A) via π. The total cohomology H^*(A) of A is a finite-dimensional graded algebra with multiplication induced from the multiplication of A. Let M be a dg A-module. Then the total cohomology H^*(M) carries a graded H^*(A)-module structure, and hence a graded Ā = H^0(A)-module structure. In particular, a stalk dg A-module concentrated in degree 0 is an Ā-module.

4.1. The standard t-structure. We follow [22,4,34], where the dg algebra is not necessarily finite-dimensional. Consider the standard truncation functors τ_{≤0} and τ_{>0}: for a dg A-module M, τ_{≤0}M is the dg submodule of M with (τ_{≤0}M)^i equal to M^i for i < 0, to ker(d : M^0 → M^1) for i = 0 and to 0 for i > 0, and τ_{>0}M = M/τ_{≤0}M. These truncations induce the standard t-structure (D^{≤0}, D^{≥0}) on D(A), where D^{≤0} (respectively, D^{≥0}) consists of the dg modules whose cohomology is concentrated in non-positive (respectively, non-negative) degrees; it restricts to a bounded t-structure on D_fd(A) whose heart is equivalent to mod Ā.

Let e be an idempotent of A. For degree reasons, e must belong to A^0, and the graded subspace eA is a dg submodule of A (note that d(e) ∈ A^1 = 0). Therefore for each decomposition 1 = e_1 + . . . + e_n of the unity into a sum of primitive orthogonal idempotents, there is a direct sum decomposition A = e_1A ⊕ · · · ⊕ e_nA into dg A-modules. If e and e′ are two idempotents of A such that eA ≅ e′A as ordinary modules over the ordinary algebra A, then this isomorphism is also an isomorphism of dg modules. Indeed, there are two elements f and g of A such that fg = e and gf = e′. Again for degree reasons, f and g belong to A^0. So they induce isomorphisms of dg A-modules eA → e′A, a ↦ ga, and e′A → eA, a ↦ fa. It follows that the above decomposition of A into a direct sum of indecomposable dg modules is essentially unique. Namely, if 1 = e′_1 + . . . + e′_m is another decomposition of the unity into a sum of primitive orthogonal idempotents, then m = n and, up to reordering, e_iA ≅ e′_iA as dg A-modules.

We assume, as we may, that A is basic. Let 1 = e_1 + . . . + e_n be a decomposition of 1 in A into a sum of primitive orthogonal idempotents. Since d(x) = λ_1 e_{i_1} + . . . + λ_s e_{i_s} implies that d(e_{i_j}x) = λ_j e_{i_j}, the intersection of the space spanned by e_1, . . . , e_n with the image of the differential d has a basis consisting of some of the e_i's, say e_{r+1}, . . . , e_n. So e_{r+1}A, . . . , e_nA are homotopic to zero.

We say that a dg A-module M is strictly perfect if its underlying graded module is of the form ⊕_{j=1}^N R_j, where each R_j is a finite direct sum of shifted copies of direct summands of A, and if its differential is of the form d_int + δ, where d_int is the direct sum of the differentials of the R_j's, and δ, as a degree 1 map from ⊕_{j=1}^N R_j to itself, is a strictly upper triangular matrix whose entries are in A. It is minimal if in addition no shifted copy of e_{r+1}A, . . . , e_nA belongs to add(R_1, . . . , R_N), and the entries of δ are in the radical of A, cf. [42, Section 2.8]. Strictly perfect dg modules are K-projective. If A is an ordinary algebra, then strictly perfect dg modules are precisely the bounded complexes of finitely generated projective modules. Every object of per(A) is isomorphic in D(A) to a minimal strictly perfect dg module, its minimal perfect resolution (Lemma 4.2).

Simple modules. Assume that A is basic. According to the preceding subsection, we may assume that there is a decomposition 1 = e_1 + . . . + e_r + e_{r+1} + . . . + e_n of the unity of A into a sum of primitive orthogonal idempotents such that 1 = ē_1 + . . . + ē_r is a decomposition of 1 in Ā into a sum of primitive orthogonal idempotents. Let S_1, . . . , S_r be a complete set of pairwise non-isomorphic simple Ā-modules and let R_1, . . . , R_r be their endomorphism algebras. Then Hom_A(e_iA, S_j) = S_j e_i, which is isomorphic to R_i if i = j and vanishes otherwise. Therefore, by (2.1) and (2.2), Hom_{D(A)}(e_iA, Σ^m S_j) vanishes unless m = 0 and i = j, in which case it is isomorphic to R_i. If M is an object of per(A) with the same Hom-spaces Hom(M, Σ^m S_j) as e_iA, then, by replacing M by its minimal perfect resolution (Lemma 4.2), we see that M is isomorphic to e_iA. Further, recall from Section 4.1 that D_fd(A) admits a standard t-structure whose heart is equivalent to mod Ā. This implies that the simple modules S_1, . . . , S_r
form a simple-minded collection in D_fd(A).

The standard co-t-structure. Let e_1, . . . , e_r, S_1, . . . , S_r and R_1, . . . , R_r be as in the preceding subsection. Then the complex Hom_A(e_iA, Σ^m e_jA) has cohomology H^m(e_jAe_i). Therefore, by (2.1) and (2.2), Hom(e_iA, Σ^m e_jA) ≅ H^m(e_jAe_i), which vanishes for m > 0 because A is non-positive. Let P_{≥0} respectively P_{≤0} denote the extension closure of Σ^m A for m ≤ 0 respectively for m ≥ 0 in per(A), closed under taking direct summands. For the convenience of the reader we include a proof of the following statement.

Lemma 4.3. The pair (P_{≥0}, P_{≤0}) is a co-t-structure on per(A). Moreover, its co-heart is add(A).

Proof. Since Hom(A, Σ^m A) = 0 for m > 0, it follows that Hom(X, ΣY) = 0 for X ∈ P_{≥0} and Y ∈ P_{≤0}. It remains to show that any object M in per(A) fits into a triangle whose outer terms belong to P_{≥0} and ΣP_{≤0}, respectively. By Lemma 4.2, we may assume that M is minimal perfect. Let M′ = ⊕_{i≥0} M^i be the dg submodule of M given by the components of non-negative degrees. Clearly M′ belongs to P_{≥0} and the quotient M″ = M/M′ belongs to ΣP_{≤0}. Thus we obtain the desired triangle M′ → M → M″ → ΣM′. √

5. The maps

Let Λ be a finite-dimensional basic K-algebra. This section is devoted to defining the maps φ_ij in the following diagram, which relates (1) silting objects in K^b(proj Λ), (2) simple-minded collections in D^b(mod Λ), (3) bounded t-structures on D^b(mod Λ) and (4) bounded co-t-structures on K^b(proj Λ).

5.1. Let M be a silting object in K^b(proj Λ) and let Γ̃ = τ_{≤0} End(M) denote the truncated dg endomorphism algebra of M; hence Γ̃ is a finite-dimensional non-positive dg algebra. Therefore the derived category D_fd(Γ̃) carries the standard bounded t-structure of Section 4.1, whose heart is equivalent to mod Γ, where Γ = H^0(Γ̃). Moreover, there is a standard co-t-structure (P_{≥0}, P_{≤0}) on per(Γ̃), see Section 4. The object M has a natural dg Γ̃-Λ-bimodule structure. Moreover, since it generates K^b(proj Λ), it follows from [29, Lemma 6.1 (a)] that there are triangle equivalences ? ⊗^L_{Γ̃} M : per(Γ̃) → K^b(proj Λ) and ? ⊗^L_{Γ̃} M : D_fd(Γ̃) → D^b(mod Λ). These equivalences take Γ̃ to M. The following special case of Theorem 3.1 is a consequence.

Corollary 5.1. Any two basic silting objects of K^b(proj Λ) have the same number of indecomposable direct summands.

Write M = M_1 ⊕ · · · ⊕ M_r with M_i indecomposable, and let X_1, . . . , X_r be the objects of D^b(mod Λ) that correspond, under the derived equivalence ? ⊗^L_{Γ̃} M, to a complete set of pairwise non-isomorphic simple Γ-modules, see Section 4.4. Up to isomorphism, these objects are determined by the following property.

Lemma 5.2. (a) Let X′_1, . . . , X′_r be objects of D^b(mod Λ) such that the following formula holds for 1 ≤ i, j ≤ r and m ∈ Z: Hom(M_i, Σ^m X′_j) ≅ R_i if i = j and m = 0, and Hom(M_i, Σ^m X′_j) = 0 otherwise, where R_i = End(X_i). Then X_i ≅ X′_i for any i = 1, . . . , r.
(b) Let M′_1, . . . , M′_r be objects of K^b(proj Λ) such that the following formula holds for 1 ≤ i, j ≤ r and m ∈ Z: Hom(M′_i, Σ^m X_j) ≅ R_i if i = j and m = 0, and Hom(M′_i, Σ^m X_j) = 0 otherwise. Then M_i ≅ M′_i for any i = 1, . . . , r.

Proof. This follows from the corresponding result in D(Γ̃), see Section 4.4. √

5.2. From co-t-structures to silting objects. Let (C_{≥0}, C_{≤0}) be a bounded co-t-structure of K^b(proj Λ) with co-heart A. By Lemma 3.4, A is a silting subcategory of K^b(proj Λ). Since Λ is a silting object of K^b(proj Λ), it follows from Theorem 3.1 that A has an additive generator, say M, i.e. A = add(M). Then M is a silting object in K^b(proj Λ). Define φ_14(C_{≥0}, C_{≤0}) = M.

5.3. From t-structures to simple-minded collections. Let (C^{≤0}, C^{≥0}) be a bounded t-structure of D^b(mod Λ) with length heart A. Boundedness implies that the Grothendieck group of A is isomorphic to the Grothendieck group of D^b(mod Λ), which is free, say, of rank r. Therefore, A has precisely r isomorphism classes of simple objects, say X_1, . . . , X_r. By Lemma 3.3 (f), X_1, . . . , X_r is a simple-minded collection in D^b(mod Λ). Define φ_23(C^{≤0}, C^{≥0}) = {X_1, . . . , X_r}.

5.4. From silting objects to simple-minded collections, t-structures and co-t-structures. Let M = M_1 ⊕ · · · ⊕ M_r be a basic silting object of K^b(proj Λ). Define full subcategories of D^b(mod Λ)

C^{≤0} = {X ∈ D^b(mod Λ) | Hom(M, Σ^m X) = 0 for all m > 0},
C^{≥0} = {X ∈ D^b(mod Λ) | Hom(M, Σ^m X) = 0 for all m < 0},

and let X_1, . . . , X_r be the corresponding simple objects of the heart with endomorphism algebras R_1, . . . , R_r respectively. Then the following formula holds for 1 ≤ i, j ≤ r and m ∈ Z: Hom(M_i, Σ^m X_j) ≅ R_i if i = j and m = 0, and Hom(M_i, Σ^m X_j) = 0 otherwise. Let C_{≤0} respectively C_{≥0} be the extension closure of Σ^m add(M) for m ≥ 0 respectively for m ≤ 0 in K^b(proj Λ). The pair (C_{≥0}, C_{≤0}) is a bounded co-t-structure on K^b(proj Λ) whose co-heart is add(M). Define φ_21(M) = {X_1, . . . , X_r}, φ_31(M) = (C^{≤0}, C^{≥0}) and φ_41(M) = (C_{≥0}, C_{≤0}).

That (C^{≤0}, C^{≥0}) is a bounded t-structure is proved by Keller and Vossieck [33] in the case when Λ is the path algebra of a Dynkin quiver and by Assem, Souto and Trepode [5] in the case when Λ is hereditary.

Proof. Let Γ̃ be the truncated dg endomorphism algebra of M, see Section 5.1. Then per(Γ̃) has a standard bounded co-t-structure (P_{≥0}, P_{≤0}) and D_fd(Γ̃) has a standard bounded t-structure with heart equivalent to mod Γ. One checks that the triangle equivalences ? ⊗^L_{Γ̃} M of Section 5.1 take these standard structures to the structures defined above. √

5.5. From simple-minded collections to t-structures. Let X_1, . . . , X_r be a simple-minded collection in D^b(mod Λ), and let C^{≤0} respectively C^{≥0} be the extension closure of Σ^m X_i (i = 1, . . . , r) for m ≥ 0 respectively for m ≤ 0. Define φ_32(X_1, . . . , X_r) = (C^{≤0}, C^{≥0}).

Proposition 5.4. The pair (C^{≤0}, C^{≥0}) is a bounded t-structure on D^b(mod Λ).
Moreover, the heart of this t-structure is a length category with simple objects X_1, . . . , X_r. The same results hold true with D^b(mod Λ) replaced by a Hom-finite Krull–Schmidt triangulated category C.

Proof. The first two statements are [3, Corollary 3 and Proposition 4]. The proof there still applies in the present generality. √

Later we will show that the heart of this t-structure is always equivalent to the category of finite-dimensional modules over a finite-dimensional algebra (Corollary 6.2). This was proved by Al-Nofayee for self-injective algebras Λ, see [3, Theorem 7].

Corollary 5.5. Any two simple-minded collections in D^b(mod Λ) have the same cardinality.

Proof. By Proposition 5.4, the cardinality of a simple-minded collection equals the rank of the Grothendieck group of D^b(mod Λ). The assertion follows. √

5.6. From simple-minded collections to silting objects. Let X_1, . . . , X_r be a simple-minded collection in D^b(mod Λ). We will construct a silting object ν^{−1}T of K^b(proj Λ) following a method of Rickard [43]. Then we define φ_12(X_1, . . . , X_r) = ν^{−1}T. The same construction is studied by Keller and Nicolás [32] in the context of positive dg algebras. In the case of Λ being hereditary, Buan, Reiten and Thomas [17] give an elegant construction of ν^{−1}(T) using the braid group action on exceptional sequences. Unfortunately, their construction cannot be generalised.

Let R_1, . . . , R_r be the endomorphism algebras of X_1, . . . , X_r, respectively. Set X_i^{(0)} = X_i. Inductively, a sequence of morphisms in D(Mod Λ)

X_i^{(0)} → X_i^{(1)} → X_i^{(2)} → · · ·

is constructed, with n-th morphism β_i^{(n)}. Let T_i be the homotopy colimit of this sequence. That is, up to isomorphism, T_i is defined by the following triangle:

⊕_{n≥0} X_i^{(n)} --(id − β)--> ⊕_{n≥0} X_i^{(n)} → T_i → Σ(⊕_{n≥0} X_i^{(n)}).

Here β = (β_{mn}) is the square matrix with rows and columns labeled by non-negative integers and with entries β_{mn} = β_i^{(n)} if n + 1 = m and 0 otherwise. Set T = T_1 ⊕ · · · ⊕ T_r. The required properties of the T_i (Lemma 5.6) were proved by Rickard in [43] for symmetric algebras Λ over algebraically closed fields. Rickard remarked that they hold over arbitrary fields, see [43, Section 8]. In fact, his proofs carry over verbatim to general finite-dimensional algebras.

From now on we assume that T_i is a bounded complex of finitely generated injective Λ-modules. Recall from Section 2.3 that the Nakayama functor ν and the inverse Nakayama functor ν^{−1} are quasi-inverse triangle equivalences between K^b(proj Λ) and K^b(inj Λ). The following is a consequence of Lemma 5.6 and the Auslander–Reiten formula.

(a) For 1 ≤ i, j ≤ r and m ∈ Z, Hom(ν^{−1}T_i, Σ^m X_j) ≅ R_i if i = j and m = 0, and Hom(ν^{−1}T_i, Σ^m X_j) = 0 otherwise.
(b) For each 1 ≤ i ≤ r, ν^{−1}T_i is a bounded complex of finitely generated projective Λ-modules.
(c) Let C be an object of D^−(mod Λ). If Hom(ν^{−1}T_i, Σ^m C) = 0 for all m ∈ Z and all i = 1, . . . , r, then C = 0.

5.7. From co-t-structures to t-structures. Let (C_{≥0}, C_{≤0}) be a bounded co-t-structure of K^b(proj Λ), and define (C^{≤0}, C^{≥0}) as the pair of full subcategories of D^b(mod Λ) right orthogonal to it; by definition, (C^{≤0}, C^{≥0}) is right orthogonal to the given co-t-structure in the sense of Bondarko [11]. Define φ_34(C_{≥0}, C_{≤0}) = (C^{≤0}, C^{≥0}). If Λ has finite global dimension, then K^b(proj Λ) is identified with D^b(mod Λ). As a consequence, C^{≤0} = C_{≤0} and C^{≥0} = νC_{≥0}. Thus the t-structure (C^{≤0}, C^{≥0}) is right adjacent to the given co-t-structure (C_{≥0}, C_{≤0}) in the sense of Bondarko [12].
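The maps φ_ij defined in this section assemble into the diagram announced at its beginning. The arrangement below is a schematic reconstruction, following the convention suggested by the indices (φ_ij goes from structure (j) to structure (i); in addition, φ_31 goes from (1) directly to (3)):

\[
\begin{array}{ccc}
(1)\ \text{silting objects} & \overset{\phi_{21}}{\underset{\phi_{12}}{\rightleftarrows}} & (2)\ \text{simple-minded collections}\\[6pt]
{\scriptstyle\phi_{41}}\,\big\downarrow\,\big\uparrow\,{\scriptstyle\phi_{14}} & & {\scriptstyle\phi_{32}}\,\big\downarrow\,\big\uparrow\,{\scriptstyle\phi_{23}}\\[6pt]
(4)\ \text{bounded co-}t\text{-structures} & \xrightarrow{\ \phi_{34}\ } & (3)\ \text{bounded }t\text{-structures with length heart}
\end{array}
\]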
5.8. Some remarks. Some of the maps φ_ij are defined in more general setups:
– φ_14 and φ_41 are defined for all triangulated categories, with silting objects replaced by silting subcategories, by Proposition 3.5 and Lemma 3.4, see also [12,31,39].
– φ_23 is defined for all triangulated categories, with simple-minded collections allowed to contain infinitely many objects (Lemma 3.3).
– φ_21 and φ_31 are defined for all algebraic triangulated categories (replacing K^b(proj Λ)), with D^b(mod Λ) replaced by a suitable triangulated category; then we may follow the arguments in Sections 4.1 and 5.4.
– φ_34 is defined for all algebraic triangulated categories (replacing K^b(proj Λ)), with D^b(mod Λ) replaced by a suitable triangulated category; then we may follow the argument in Section 5.7.
– φ_12 is defined for finite-dimensional non-positive dg algebras, since these dg algebras behave like finite-dimensional algebras from the perspective of derived categories. Similarly, φ_12 is defined for homologically smooth non-positive dg algebras, see [31].

6. The correspondences are bijections

Let Λ be a finite-dimensional K-algebra. In the preceding section we defined the maps φ_ij. In this section we will show that they are bijections. See [5,46] for related work, focussing on piecewise hereditary algebras.

Theorem 6.1. The φ_ij's defined in Section 5 are bijective. In particular, there are one-to-one correspondences between
(1) equivalence classes of silting objects in K^b(proj Λ),
(2) equivalence classes of simple-minded collections in D^b(mod Λ),
(3) bounded t-structures of D^b(mod Λ) with length heart,
(4) bounded co-t-structures of K^b(proj Λ).

There is an immediate consequence:

Corollary 6.2. Let A be the heart of a bounded t-structure on D^b(mod Λ). If A is a length category, then A is equivalent to mod Γ for some finite-dimensional algebra Γ.

Proof. By Theorem 6.1, such a t-structure is of the form φ_31(M) for some silting object M of K^b(proj Λ), and by the construction of Sections 5.1 and 5.4 its heart is equivalent to mod Γ for the finite-dimensional algebra Γ = H^0(Γ̃). √

Proof. Let X_1, . . . , X_r be a simple-minded collection in D^b(mod Λ). It follows from Proposition 5.4 that … Let (C^{≤0}, C^{≥0}) be a bounded t-structure on D^b(mod Λ) with length heart. It follows from … √

Lemma 6.6. For a triple i, j, k such that φ_ij, φ_jk and φ_ik are defined, there is the equality φ_ij ∘ φ_jk = φ_ik. In particular, φ_31 and φ_34 are bijective.

7. Mutations and partial orders

In this section we introduce mutations and partial orders on the four concepts in Section 3, and we show that the maps defined in Section 5 commute with mutations and preserve the partial orders. Let C be a Hom-finite Krull–Schmidt triangulated category with suspension functor Σ.

7.1. Silting objects. We follow [1,18] to define silting mutation. Let M be a silting object in C. We assume that M is basic and M = M_1 ⊕ . . . ⊕ M_r is a decomposition into indecomposable objects. Let i = 1, . . . , r. The left mutation of M at the direct summand M_i is the object μ_i^+(M) = M_i′ ⊕ (⊕_{j≠i} M_j), where M_i′ is defined by the triangle M_i --f--> E → M_i′ → ΣM_i with f a minimal left approximation of M_i in add(⊕_{j≠i} M_j). Similarly one can define the right mutation μ_i^−(M). By [1], both mutations are again silting objects (Theorem 7.1).

Let silt C be the set of isomorphism classes of basic silting objects of C. The silting quiver of C has the elements in silt C as vertices. For P, P′ ∈ silt C, there are arrows from P to P′ if and only if P′ is obtained from P by a left mutation, in which case there is precisely one arrow; see [1].

For P, P′ ∈ silt C, define P ≥ P′ if Hom(P, Σ^m P′) = 0 for any m > 0. According to [1, Theorem 2.11], ≥ is a partial order on silt C.

Now let Λ be a finite-dimensional algebra, let e_i be a primitive idempotent, and consider the left mutation of the silting object Λ at the indecomposable projective module e_iΛ. We call it the APR tilting module if Λ/Λ(1 − e_i)Λ is projective as a Λ-module. When Λ/Λ(1 − e_i)Λ is a division algebra (i.e. there are no loops in the quiver of Λ at the vertex i), this specialises to the 'classical' BB tilting module [13] and APR tilting module [6].

Proof. We modify the proof in [1]. Take a minimal injective copresentation of S_i^+: … it follows that the injective module I belongs to add D((1 − e_i)Λ). Applying the inverse Nakayama functor ν^{−1}_{mod Λ} yields an exact sequence … Moreover, ν^{−1}_{mod Λ}f is a minimal left approximation of P_i in add(P_j, j ≠ i).
Since the projective dimension of τ^{−1}_{mod Λ}S_i^+ is at most 1, it follows that ν^{−1}_{mod Λ}f is injective. This completes the proof for (a). √

7.2. Simple-minded collections. Let X_1, . . . , X_r be a simple-minded collection in C and fix i = 1, . . . , r. Let X_i denote the extension closure of X_i in C. Assume that for any j ≠ i the object Σ^{−1}X_j admits a minimal left approximation g_j : Σ^{−1}X_j → X_{ij} with X_{ij} in X_i.

Definition 7.5. The left mutation μ_i^+(X_1, . . . , X_r) of X_1, . . . , X_r at X_i is a new collection X′_1, . . . , X′_r such that X′_i = ΣX_i and X′_j (j ≠ i) is the cone of the above left approximation g_j. Similarly one defines the right mutation μ_i^−(X_1, . . . , X_r). This generalises Kontsevich–Soibelman's mutation of spherical collections [38, Section 8.1] and appeared in [35] in the case of derived categories of acyclic quivers.

Proposition 7.6. (a) μ_i^−(μ_i^+(X_1, . . . , X_r)) = (X_1, . . . , X_r) = μ_i^+(μ_i^−(X_1, . . . , X_r)).
(b) Assume that
· for any j ≠ i the object Σ^{−1}X_j admits a minimal left approximation g_j : Σ^{−1}X_j → X_{ij} in X_i,
· …
Then the collection μ_i^+(X_1, . . . , X_r) is simple-minded.
(c) Assume that
· for any j ≠ i the object X_j admits a minimal right approximation g_j^− : X^{ij} → X_j in X_i,
· …
Then the collection μ_i^−(X_1, . . . , X_r) is simple-minded.

Proof. (a) Because in the triangle Σ^{−1}X_j --g_j--> X_{ij} → X′_j → X_j the morphism g_j is a minimal left approximation of Σ^{−1}X_j in X_i if and only if g_j^− is a minimal right approximation of X_j in X_i = Σ^{−1}(ΣX_i).
(b) and (c) The proof uses long exact Hom sequences induced from the defining triangles of the X′_j. We leave it to the reader. √

Remark 7.7. In the course of the proof of Proposition 7.6 (b) and (c), one notices that the collection of endomorphism algebras of the mutated simple-minded collection is the same as that of the given simple-minded collection. If Hom(X_i, ΣX_i) = 0, then X_i = add(X_i). In this case, all six assumptions in Proposition 7.6 (b) and (c) are satisfied.

Proposition 7.8. For a simple-minded collection X_1, . . . , X_r in D^b(mod Λ), the mutations μ_i^±(X_1, . . . , X_r) are again simple-minded collections.

Proof. We will show that the three assumptions in Proposition 7.6 (b) are satisfied, so the left-mutated collection μ_i^+(X_1, . . . , X_r) is a simple-minded collection. The case for μ_i^−(X_1, . . . , X_r) is similar. By Proposition 5.4, X_1, . . . , X_r are the simple objects in the heart of a bounded t-structure on D^b(mod Λ). Moreover, by Corollary 6.2, the heart is equivalent to mod Γ for some finite-dimensional algebra Γ. We identify mod Γ with the heart via this equivalence. In this way we consider X_1, . . . , X_r as simple Γ-modules. By [8, Section 3.1], there is a triangle functor real : D^b(mod Γ) → D^b(mod Λ) such that
– restricted to mod Γ, real is the identity;
– for M, N ∈ mod Γ, the induced map Ext^1_Γ(M, N) → Hom_{D^b(mod Λ)}(M, ΣN) is an isomorphism. …

7.3. t-structures. In general the heart of the mutation of a bounded t-structure with length heart is not necessarily a length category. For an example, let Q be the Kronecker quiver with two vertices and two parallel arrows, and let A′ be a heart obtained by tilting A = mod KQ at a torsion pair (T, F), so that any indecomposable object of A′ belongs to either T or ΣF. Suppose that A′ is a length category. Then A′ has two isomorphism classes of simple modules, which respectively belong to T and ΣF, say S′_2 ∈ T and S′_1 ∈ ΣF. For n ∈ N define an indecomposable object M_n in T as the representation K^n ⇉ K^n of Q, with the two arrows acting as the identity and as J_n(0), where J_n(0) is the (upper triangular) Jordan block of size n and with eigenvalue 0. There are no morphisms from S′_1 to M_n for any n. Suppose that the Loewy length of S′_2 in A is l. Then for n > l, any morphism from S′_2 to M_n factors through rad^{n−l} M_n, which lies in F, and hence the morphism has to be zero. Therefore M_n (n > l), considered as an object in A′, does not have finite length, a contradiction.

For two bounded t-structures (C^{≤0}, C^{≥0}) and (C′^{≤0}, C′^{≥0}) on C, define (C^{≤0}, C^{≥0}) ≥ (C′^{≤0}, C′^{≥0}) if and only if C′^{≤0} ⊆ C^{≤0}. This defines a partial order on the set of bounded t-structures on C.
7.4. Co-t-structures. Let (C_{≥0}, C_{≤0}) be a bounded co-t-structure of C. Assume that the co-heart admits a basic additive generator M = M_1 ⊕ · · · ⊕ M_r with M_i indecomposable. Then M is a silting object of C. Let i = 1, . . . , r. Define C′_{≤0} as the additive closure of the extension closure of Σ^m M_j (j ≠ i) and Σ^{m+1}M_i for m ≥ 0, and define C′_{≥0} as the left perpendicular category of ΣC′_{≤0}. The left mutation μ_i^+(C_{≥0}, C_{≤0}) is defined as the pair (C′_{≥0}, C′_{≤0}). Similarly one defines the right mutation μ_i^−(C_{≥0}, C_{≤0}). The pair μ_i^+(C_{≥0}, C_{≤0}) is again a bounded co-t-structure, with co-heart add μ_i^+(M); similarly for μ_i^−.

Proof. This can be proved directly. Here we alternatively make use of the results in Sections 3.1 and 7.1. Recall from Theorem 7.1 that there is a mutated silting object μ_i^+(M). It is straightforward to check, using the defining triangle for μ_i^+(M), that μ_i^+(C_{≥0}, C_{≤0}) is the bounded co-t-structure associated to μ_i^+(M) as defined in Proposition 3.5, and similarly for μ_i^−. The second statement follows from Theorem 7.1. √

For two bounded co-t-structures (C_{≥0}, C_{≤0}) and (C′_{≥0}, C′_{≤0}) on C, define (C_{≥0}, C_{≤0}) ≥ (C′_{≥0}, C′_{≤0}) if and only if C′_{≤0} ⊆ C_{≤0}. This defines a partial order on the set of bounded co-t-structures on C.

7.5. The bijections commute with mutations. Let Λ be a finite-dimensional algebra over K.

Theorem 7.12. The φ_ij's defined in Section 5 commute with the left and right mutations defined in the previous subsections.

A priori it is not known that the heart of the mutation of a bounded t-structure with length heart is again a length category. So the theorem becomes well-stated only when the proof has been finished.

Proof. In view of Lemma 6.6, … Let Γ = H^0(Γ̃) and π : Γ̃ → Γ be the canonical projection. By abuse of notation, write e_1 = π(e_1), . . . , e_r = π(e_r). Then e_1Γ, . . . , e_rΓ are indecomposable projective Γ-modules. Let S_1, . . . , S_r denote the corresponding simple Γ-modules and let T be an object of T = ⊥S_i ∩ mod Γ. The left mutation of Γ̃ at e_iΓ̃ is μ_i^+(Γ̃) = Q_i ⊕ ⊕_{j≠i} e_jΓ̃, where Q_i is defined by the triangle e_iΓ̃ --f--> E → Q_i → Σ(e_iΓ̃) with f a minimal left approximation of e_iΓ̃ in add(⊕_{j≠i} e_jΓ̃). We claim that f^* : Hom(E, T) → Hom(e_iΓ̃, T) is surjective. Then the desired result follows.

Consider the commutative diagram comparing Hom(e_iΓ, T) with Hom(e_iΓ̃, T) and Hom(H^0(E), T) with Hom(E, T), where π_i : e_iΓ̃ → e_iΓ and π_E : E → H^0(E) are the canonical projections. Let C = ker(π_i). Then there is a triangle C → e_iΓ̃ --π_i--> e_iΓ → ΣC. Note that C belongs to ΣD^{≤0}, which implies that Hom(C, T) = 0 = Hom(ΣC, T). It follows that the map π_i^* is bijective. Similarly, the map π_E^* is also bijective. Thus it suffices to show the surjectivity of H^0(f)^*. Now let P_T be a projective cover of T in mod Γ. Then P_T belongs to add(⊕_{j≠i} e_jΓ) because T ∈ T = ⊥S_i. It follows that any morphism e_iΓ → T factors through P_T, and hence factors through H^0(f) : e_iΓ → H^0(E) (… is an equivalence). This shows that H^0(f)^* is surjective, completing the proof of the claim. √

8. A concrete example

Let Λ be the finite-dimensional K-algebra given by the quiver 1 ⇄ 2 bound by one zero relation.

8.1. Indecomposable objects. Let P_1 and P_2 be the indecomposable projective Λ-modules corresponding to the vertices 1 and 2. Then up to isomorphism and up to shift an indecomposable object in D^b(mod Λ) belongs to one of four families of complexes P_1(n), B(n), L(n) and R(n), n ≥ 1 (see for example [19,9]), where the homomorphisms are the unique non-isomorphisms, n is the number of occurrences of P_1, and the rightmost components have been put in degree 0.

8.2. The Auslander–Reiten quiver. The Auslander–Reiten quiver of D^b(mod Λ) consists of three components: two ZA_∞ components and one ZA^∞_∞ component (see [10,28]). The abelian category mod Λ has five indecomposable objects up to isomorphism: the two simple modules S_1 and S_2, their projective covers P_1 and P_2 and their injective envelopes I_1 = P_1 and I_2.
They are marked on the above Auslander–Reiten quiver. The left ZA_∞ component consists of shifts of P_1(n), n ≥ 1. The Auslander–Reiten translation τ takes P_1(n) to Σ^{−1}P_1(n). It is straightforward to check that P_1 is a 0-spherical object of D^b(mod Λ) in the sense of Seidel and Thomas [45]. The additive closure of this component is the triangulated subcategory generated by P_1. This component will be referred to as the 0-spherical component. The right ZA_∞ component consists of shifts of B(n), n ≥ 1. The Auslander–Reiten translation takes B(n) to ΣB(n). The simple module S_2 = B(1) is a 2-spherical object of D^b(mod Λ) and the additive closure of this component is the triangulated subcategory generated by S_2. This component will be referred to as the 2-spherical component.

8.3. The derived Picard group. Let E be a spherical object of a triangulated category C in the sense of Seidel and Thomas [45]. Then the twist functor Φ_E, defined by the triangle Hom^•(E, X) ⊗ E --ev--> X → Φ_E(X) → Σ(Hom^•(E, X) ⊗ E), where ev is the evaluation map, is an auto-equivalence of C by [45, Proposition 2.10]. Recall from the preceding subsection that P_1 is a 0-spherical object and S_2 is a 2-spherical object of D^b(mod Λ). Thus the associated twist functors Φ_{P_1} and Φ_{S_2} are two auto-equivalences of D^b(mod Λ).

Proof. Let F ∈ Aut D^b(mod Λ). Since F preserves the Auslander–Reiten quiver, the object … This allows us to define a map f : … This map is clearly a surjective group homomorphism. Moreover, the group homomorphism … is a retraction of f. Therefore Aut D^b(mod Λ) ≅ Z² × ker(f).

Proof. Direct computation, or apply some general result (e.g. [25, Section 2]) to the triangulated categories generated by P_1 and S_2. √

Next we compute the morphism spaces between P_2 and the objects on the ZA^∞_∞ component.

Proposition 8.6. Up to isomorphism, any basic silting object of D^b(mod Λ) belongs to one of the following two families:
· Φ^n_{P_1} ∘ Φ^{n′}_{S_2}(P_1 ⊕ P_2), n, n′ ∈ Z; the corresponding simple-minded collection is Φ^n_{P_1} ∘ Φ^{n′}_{S_2}{S_1, S_2};
· Φ^n_{P_1} ∘ Φ^{n′}_{S_2}(Σ^m S_1 ⊕ P_2), n, n′ ∈ Z and m ≤ −1; the corresponding simple-minded collection is Φ^n_{P_1} ∘ Φ^{n′}_{S_2}{Σ^m S_1, I_2}.

Proof. Let M = M_1 ⊕ M_2 be a basic silting object; at least one indecomposable summand, say M_1, belongs to the ZA^∞_∞ component. Up to an auto-equivalence of the form Φ^n_{P_1} ∘ Φ^{n′}_{S_2}, we may assume that M_1 = P_2. Then, if M_2 belongs to the 0-spherical component it has to be P_1. Thus we assume that M_2 also belongs to the ZA^∞_∞ component. Then it follows from Lemma 8.5 that M_2 is isomorphic to Σ^m S_1 for some m ≤ −1 or to Σ^m R(1) for some m ≥ 0. Observing … for m ≥ 0 finishes the proof for the silting-object part.

8.7. Hearts and the space of stability conditions.

Lemma 8.7. The heart of any t-structure on D^b(mod Λ) is a length category.

Proof. Let A be the heart of a t-structure on D^b(mod Λ). We will show that A has only finitely many isomorphism classes of indecomposable objects. Such an abelian category must be a length category. Due to vanishing of negative extensions, it follows from Lemma 8.4 that A contains at most one indecomposable object from the 0-spherical component respectively from the 2-spherical component. Suppose that A contains an indecomposable object from the ZA^∞_∞ component. Without loss of generality we may assume that it is P_2. It follows from Lemma 8.5 that for n ≥ 3 and m ∈ Z either Hom(P_2, Σ^{m′}Σ^m R(n)) ≠ 0 for some m′ < 0 or Hom(Σ^m R(n), Σ^{m′}P_2) ≠ 0 for some m′ < 0. Similarly for L(n).
Therefore an indecomposable object M belongs to the heart only if it is isomorphic to one of Σ^m P_2, Σ^m R(1), Σ^m R(2), Σ^m L(1) and Σ^m L(2), m ∈ Z. But at most one shift of a nonzero object can belong to a heart. So A contains at most 7 indecomposable objects up to isomorphism. √

(b) An abelian category is the heart of some bounded t-structure on D^b(mod Λ) if and only if it is equivalent to mod Γ for Γ = Λ, Γ = K(· → ·) or Γ = K ⊕ K.
Effect of sialodacryoadenitis virus infection on axonal regeneration

Abstract: The effect of sialodacryoadenitis virus (SDAV) infection on axonal regeneration and functional recovery was investigated in male Lewis rats. Animals underwent unilateral tibial nerve transection, immediate repair, and treatment with either FK506 (treated) or control vehicle (untreated). Serial walking track analyses were performed to assess functional recovery. Nerves were harvested for morphometric analysis on postoperative day 18 after an SDAV outbreak occurred that affected the 12 experimental animals. Histomorphometry and walking track data were compared against 36 historical controls. Rats infected with SDAV demonstrated severely impaired axonal regeneration and diminished functional recovery. Total fiber counts, nerve density, and percent neural tissue were all significantly reduced in infected animals (P < 0.05). Active SDAV infection severely impaired nerve regeneration and negated the positive effect of FK506 on nerve regeneration in rats. Immunosuppressive risks must be weighed carefully against the potential neuroregenerative benefits in the treatment of peripheral nerve injuries. © 2011 Wiley-Liss, Inc. Microsurgery, 2011.

Disability following peripheral nerve injuries is common, resulting in impaired quality of life and decreased productivity. Accelerating nerve regeneration improves the speed and extent of recovery achieved.1 Daily administration of FK506 (tacrolimus) immediately following nerve injury has been shown to accelerate nerve regeneration in crush,2-5 transection,6-10 allograft,11-14 and isograft15,16 nerve injury animal models. But the neuroenhancing effect of FK506 is optimal at doses that suppress the immune system.10,17 Therefore, improved regeneration must be weighed against the increased risks for infection, malignancy, and systemic toxicity.18

An unanticipated sialodacryoadenitis virus (SDAV) outbreak occurred in our animal care facility, prompting us to investigate the effects of viral infection on nerve regeneration in affected animals. Previous studies suggest that remyelination of the central nervous system can occur in the context of viral infection, but systemic illness may still adversely affect regeneration.19 The effects of a systemic viral infection on peripheral nerve regeneration are not well studied, however.
Such data may have bearing on the care of patients with peripheral nerve injury. SDAV is a single-stranded positive-sense RNA coronavirus which commonly infects laboratory rats throughout the world.20 Rats infected with SDAV demonstrate pathological, though usually temporary, changes in the salivary glands, the lacrimal glands, the upper and lower respiratory tract, the reproductive system, and general behavior.21,22 Although the effect of active SDAV infection on regenerating nerves is unknown, viruses can have profound effects on nerve. For example, virus-induced axonal injury and demyelination has been used to model multiple sclerosis.23,24 During the severe acute respiratory syndrome (SARS) epidemic, there were isolated reports proposing a link between coronavirus infection and neuromuscular dysfunction.25-27 But the effects of SDAV on peripheral nerve are unknown. This study examined the effect of SDAV infection on peripheral nerve regeneration in rats with a nerve transection injury, where animals were treated with FK506 or inert vehicle.

Animal Studies Approval

All surgical procedures, experimental manipulations, and perioperative care measures were carried out in strict accordance with National Institutes of Health guidelines and were approved by the Washington University institutional Animal Studies Committee. The intended project involved study of the effects of tacrolimus (FK506) on nerve regeneration. Animals were given a rodent diet (PicoLab Rodent Diet 20 #5053, PMI Nutrition International) and water ad libitum. Animals were promptly returned to the animal facility following surgical procedures and during the course of the experiment were monitored for weight loss, infection, or impairment.

Operative Procedure

At the beginning of the experiment, the rats were anesthetized with medetomidine hydrochloride (Orion, NY) and ketamine hydrochloride (Fort Dodge Animal Health, Fort Dodge, IA). The tibial nerve was exposed, transected 4 mm distal to the sciatic trifurcation, and repaired using microsurgical techniques under an operating microscope with four 10-0 nylon epineurial sutures. Muscle and skin were closed with 4-0 Vicryl and nylon sutures, respectively. On postoperative day 18, the left tibial nerves were harvested for morphometric analysis and the rats were euthanized with an intracardiac injection of pentobarbital sodium (Diamond Animal Health, Des Moines, IA). The 18-day time point was selected based on prior data on the optimal timing for assessment of nerve regeneration.28

Pharmacological Regimen

A 10-mg/mL solution of dissolved crystalline FK506 (Fujisawa USA, Deerfield, IL) in 20% Cremophor (Sigma, St. Louis, MO) and 80% ethanol (Quantum Chemical, Tuscola, IL) was diluted with an aqueous solution of 75% 1,2-propanediol (Sigma, St. Louis, MO) to a 2.5-mg/mL working solution. The rats received daily subcutaneous injections of 2 mg/kg FK506 or control vehicle after nerve transection and were redosed weekly according to weight. Because of weight loss in animals in the setting of infection, dosing was proportionately reduced.
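As an arithmetic illustration of the weight-based regimen just described (2 mg/kg dosed from the 2.5 mg/mL working solution), a minimal sketch follows; the helper function and the weights are hypothetical, not code or data from the study:

```python
def fk506_injection_volume_ml(weight_kg: float,
                              dose_mg_per_kg: float = 2.0,
                              conc_mg_per_ml: float = 2.5) -> float:
    """Volume of working solution for one subcutaneous dose.

    Example: a 0.25 kg rat at 2 mg/kg needs 0.5 mg FK506,
    i.e. 0.2 mL of the 2.5 mg/mL working solution.
    """
    return weight_kg * dose_mg_per_kg / conc_mg_per_ml

# Weekly re-dosing follows weight, so the dose falls proportionately
# if an infected animal loses weight (hypothetical weights, in kg).
for week, weight in enumerate([0.250, 0.235, 0.240], start=1):
    print(f"week {week}: {fk506_injection_volume_ml(weight):.3f} mL")
```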
SDAV Outbreak and Veterinary Oversight

The SDAV outbreak became apparent approximately 1 week into the experiment. The symptoms and signs were observed first in animals immunosuppressed with FK506 but were similar among all of the animals, including bulging eyes, facial swelling, squinting, and production of porphyrin around the eyes and nose. Other associated findings included sneezing, swollen salivary glands, and lymphadenopathy in the cervical lymph node chain. A few animals exhibited early keratoconjunctivitis. Diagnosis of SDAV was confirmed with an ELISA laboratory assay. With the approval of the veterinary staff, animals were maintained in the facility through the course of the 3-week experiment. Supportive measures included quarantine and standard feeding and hydration. Three rats treated with FK506 showed decreased oral intake for roughly 72 hours; these animals were dropper fed, with soluble ibuprofen in the drinking water to alleviate discomfort during recovery.

Experimental Design

Twelve inbred adult male Lewis rats (Charles River Laboratories) were housed in a central animal care facility at Washington University. All 12 animals underwent left tibial nerve transection, immediate microsurgical repair, and administration of either FK506 (treated, n = 6) or control vehicle (untreated, n = 6). Walking track analysis was performed at scheduled intervals until postoperative day 18, when rats were sacrificed. The left tibial nerves were harvested for morphometric analysis, and blood from animals representing each group was collected to verify serum FK506 levels. The primary data endpoints included peripheral nerve morphometry (using four quantitative parameters to assess nerve regeneration)29 and serial walking track analysis (using the print length factor, a validated assessment of postoperative hindlimb function).30

Three prior studies with corresponding experimental groups and comparable methods were used to provide historical controls.7,8,31 These three prior studies were conducted in the same laboratory as used for this study and involved the same animal strain (male Lewis rats) undergoing the same experimental procedures, with comparable timing endpoints. The original raw morphometry data from these three prior studies were pooled as a collective analysis to avoid bias from any individual animal cohort. (Given that SDAV infection tends to spread rapidly and uncontrollably through animal facilities, having a true contemporaneous uninfected control group was not a logistical possibility in this study.) The 12 SDAV-infected experimental animals from the present study, either treated with inert vehicle (n = 6) or treated with FK506 (n = 6), were compared with the historical healthy control animals treated with inert vehicle (n = 17) or FK506 (n = 19).

Functional Assessment

Functional recovery was assessed with walking track analysis performed before transection and on postoperative days 7, 13, 15, and 17. The 7-day assessment serves as a baseline for hindlimb impairment, with days 13, 15, and 17 selected to capture the typical window for improvement in recovery of hindlimb function. Hind feet were dipped in X-ray developer, the rat walked down a 14 × 56 cm corridor lined with exposed undeveloped X-ray film, and the prints were used to derive a quantitative measure of hindlimb function. The length of the normal right footprint (NPL) and the length of the experimental left footprint (EPL) were measured with a digital pen linked to morphometry software and used to calculate the print length factor using the following equation: PLF = (EPL − NPL)/NPL.
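As a concrete illustration of the print length factor just defined, a minimal sketch with hypothetical footprint lengths (not data from the study):

```python
def print_length_factor(epl_mm: float, npl_mm: float) -> float:
    """PLF = (EPL - NPL) / NPL.

    EPL: experimental (operated, left) print length.
    NPL: normal (right) print length.
    A dragging hindlimb elongates the experimental print and raises PLF;
    values near 0 indicate recovery of hindlimb function.
    """
    return (epl_mm - npl_mm) / npl_mm

# Illustrative serial walking-track measurements for one rat (hypothetical):
for day, epl, npl in [(7, 52.0, 24.0), (13, 51.0, 24.0), (17, 50.2, 24.2)]:
    print(f"POD {day}: PLF = {print_length_factor(epl, npl):.2f}")
```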
Histomorphometric Analysis

Tibial nerve segments were fixed in glutaraldehyde, dehydrated with ethanol, postfixed with osmium tetroxide, and embedded in Araldite 502. One-micrometer-thick cross-sections obtained 3–5 μm distal to the repair site were stained and examined by light microscopy. Microscopic images were examined with an automated digital image analysis system linked to morphometry software. At 1000× magnification, six randomly selected fields per nerve were measured to determine axon width, fiber diameter, and myelin width. These measurements were then used to calculate the percentage of neural tissue (100 × neural area/intrafascicular area), the percentage of neural debris (100 × neural debris/intrafascicular area), the total number of myelinated fibers, and nerve fiber density (fibers/mm²). Light microscopy was used to evaluate 1-μm-thick toluidine blue-stained cross-sections for the quality and quantity of regenerated nerve fibers, preservation of nerve architecture, degree of myelination, and presence of Wallerian degeneration.

Statistical Analysis

All results are reported as mean ± standard deviation in the results and accompanying figures. Statistica version 6 (StatSoft, Tulsa, OK) was used for statistical analysis of the histomorphometric data. Historical control data were analyzed in comparisons against the experimental data using the same statistical methods as would have been applied if an uninfected cohort of animals had been enrolled at the time of the original experiment. Raw histomorphometric data on total number of fibers, density of fibers, percent nerve fiber, and fiber width were included for all historical control and experimental animals. Data were compared using the Kruskal–Wallis one-way analysis of variance on ranks for nonparametric data distributions. Statistically significant differences were found in values among the treatment groups (P < 0.001), and post hoc analysis was performed to isolate groups that significantly differed from the others using Dunn's method for pairwise multiple comparisons. The alpha level was set at P = 0.05.
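The derived histomorphometric indices and the group comparison described above are simple computations; the sketch below restates them in Python under stated assumptions. All numeric values are hypothetical placeholders, not the study's raw data, and Dunn's post hoc test is shown via the third-party scikit-posthocs package:

```python
from scipy import stats

# --- Derived indices from one microscopic field (hypothetical values) ---
intrafascicular_area_um2 = 9.0e4   # measured field area within the fascicle
neural_area_um2 = 5.4e3            # area occupied by myelinated nerve fibers
debris_area_um2 = 1.1e3            # neural/myelin debris
fiber_count = 70                   # myelinated fibers counted in the field

percent_neural_tissue = 100 * neural_area_um2 / intrafascicular_area_um2
percent_neural_debris = 100 * debris_area_um2 / intrafascicular_area_um2
fiber_density_per_mm2 = fiber_count / (intrafascicular_area_um2 / 1e6)  # 1 mm^2 = 1e6 um^2
print(percent_neural_tissue, percent_neural_debris, round(fiber_density_per_mm2))

# --- Kruskal-Wallis one-way ANOVA on ranks across the four groups ---
# Hypothetical per-animal total fiber counts, one list per group.
infected_vehicle = [310, 450, 520, 890, 140, 460]
infected_fk506 = [820, 640, 1150, 390, 590]      # n = 5 after one death
healthy_vehicle = [2100, 2600, 1900, 2450, 2300]
healthy_fk506 = [3400, 3900, 2800, 3600, 3500]

h, p = stats.kruskal(infected_vehicle, infected_fk506,
                     healthy_vehicle, healthy_fk506)
print(f"H = {h:.2f}, P = {p:.4g}")
# If P < alpha (0.05), isolate differing groups with Dunn's method, e.g.:
#   import scikit_posthocs as sp
#   sp.posthoc_dunn([infected_vehicle, infected_fk506,
#                    healthy_vehicle, healthy_fk506])
```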
RESULTS

Infected rats suffering from SDAV exhibited periorbital and perioral red-brown discoloration, decreased activity, audible rhonchi, and labored respiratory effort. One infected rat in the FK506 treatment group died on postoperative day 13 and was excluded from analysis. Infected rats receiving FK506 demonstrated a mean serum level of 23 ng/mL, which is an immunosuppressive level. The serum levels of two randomly selected animals receiving only vehicle were undetectable.

Infected rats on FK506 demonstrated weight loss after the first week of the study, with subsequent slow recovery over the next 2 weeks. At nadir, weight loss averaged 10% of body weight. Untreated infected rats did not demonstrate weight loss until the second week, with an average of 6% weight loss and gradual recovery toward the conclusion of the experiment on day 18. Weight loss was not a primary endpoint in this study, and the data available from the veterinary division regarding weight loss and its precise relation to the timing of onset of infection were fragmented, although they are similar to previously reported experience with weight loss following SDAV infection.22 The magnitude of weight loss was greater in immunosuppressed animals treated with FK506, and the duration required for recovery from illness tended to be longer.

The functional assessment with serial walking track analysis confirmed that recovery of hindlimb function was delayed in animals that suffered SDAV infection (Fig. 1). Whereas walking tracks began to normalize briskly at 14 days in healthy animals, no improvement was observed in either of the groups with SDAV infection at the experimental endpoint of 18 days. Some fluctuation in walking tracks is seen over time for all groups, but hindlimb performance was essentially flat for infected animals, in contrast to the recovery of function observed after day 14 in the pooled data from animals enrolled in the three prior historical studies.

The histological assessment of nerve specimens demonstrated a marked decrease in nerve regeneration with SDAV infection. Representative sections from the four comparison groups are shown in Figure 2. Only scant nerve fibers are visible amidst Wallerian degeneration in infected animals. In healthy animals, robust nerve regeneration was observed at the same time point. Cross-sections in healthy animals demonstrate densely packed and well myelinated nerve fibers, with restored neural architecture. Regenerating fibers appear as many small annulae with central clearing. The impaired regeneration in the infected animals was evident in both the paucity of fibers and the dramatically increased myelin debris and degenerative changes.

Quantitative histomorphometry confirmed reduced total fiber counts, decreased neural tissue, and lower fiber density in the infected animals. The complete data on total number of fibers, density of fibers, percent nerve fiber, and fiber width are reported in Figure 3. Among SDAV-infected animals, the mean fiber counts were 718 ± 381 and 462 ± 346 for the FK506 and control animals, respectively. In contrast, healthy animals from historical controls had mean fiber counts that were roughly fivefold higher, 3496 ± 1189 and 2242 ± 1050 for the FK506 treated and control animals, respectively. Similar trends were observed for nerve density, with 1181 ± 641 and 643 ± 428 fibers/mm² for FK506 treated and vehicle infected animals, respectively, versus 7606 ± 2389 and 47681 ± 305 fibers/mm² for the FK506 treated and control healthy animals. Percent neural tissue was 0.923 ± 0.5% and 0.495 ± 0.4% for FK506 treated and vehicle infected animals, respectively, versus 5.925 ± 2.0% and 3.375 ± 1.6% for the FK506 treated and control healthy animals. Among infected animals, no significant differences were detectable between groups due to substantial interanimal variability and the overall frail nature of nerve regeneration. Nerve fiber widths were similar across all four groups (P > 0.05).

DISCUSSION

Studies of peripheral nerve regeneration are routinely performed in rat models, and this study demonstrated a profound impairment of axonal regeneration, with lack of functional recovery, in all animals infected with SDAV. This study suggests that experiments on nerve regeneration are unreliable in the setting of an SDAV outbreak. Because transmission of SDAV from rats to mice via direct contact has been documented, these results may also be relevant for mouse models, the other model commonly used in studies of peripheral nerve. The study also underscores the potential risks of immunosuppressive agents from an infectious standpoint.

The mechanism of immunosuppression by FK506 involves ligation of FK506-binding protein 12 (FKBP-12) to form a complex which binds calcineurin and inhibits it from binding nuclear factor of activated T cells (NF-AT). The phosphatase activity of calcineurin dephosphorylates NF-AT, which joins its nuclear subunit to modulate gene transcription and T-cell activation. Calcineurin also masks the nuclear export signal of NF-AT. The neuroregenerative effects of FK506, however, may be the result of a separate mechanism mediated by FKBP-52 and involving c-jun expression.32
have identified genes in the SDAV genome coding for a spike protein, a small membrane protein, a membrane-associated protein, a nucleocapsid protein, and an esterase protein.33 Further molecular studies may answer the question of whether viral structural proteins interact with the neurogenic and/or immunosuppressive mechanisms of FK506.

The earlier onset of weight loss in FK506 treated animals versus untreated animals likely relates to FK506 induced immunosuppression, which is associated with increased susceptibility to viral infection. Compared to untreated rats, immunosuppressed rats had a more fulminant course, with earlier onset and more severe illness, likely due to higher viral titers in the context of compromised lymphocyte function. There also exists the possibility that secondary bacterial infection developed in some animals, although the clinical findings were pathognomonic for SDAV infection in affected animals. The positive effect of FK506 on neuroregeneration documented in numerous studies was negated by infection. The premature death of one of the infected animals receiving FK506 supports the hypothesis that mortality and morbidity due to SDAV and other infectious agents are intensified by the burden of immunosuppression.18

The walking track analysis of both groups of rats infected with SDAV demonstrated a significant delay in functional recovery. This was an expected finding, given the very limited regeneration (and therefore minimal muscle reinnervation) observed in infected animals. Although walking track analysis is less precise than morphometry, it is an important outcome measure because it actually evaluates recovery of function, rather than regeneration (which is necessary, but not sufficient, for recovery of function). The erratic pattern of the walking track data is typical of this assay, given the inherent imprecision of the technique. Nonetheless, the particularly high baseline print length factor at day 7 in the SDAV infected group treated with FK506 (which exceeded 1.5) is notable. Most likely, these animals were quite sick at this time point, dragging their hindlimbs even more than is normally seen with complete nerve transection.

[Figure 3 caption: Quantitative histomorphometric analysis of tibial nerve fiber regeneration after nerve transection at 18 days. Comparison of nerves from SDAV infected and healthy animals showed statistically significant reductions in fiber counts, percent nerve tissue, and nerve density with SDAV infection (P < 0.05). Mean fiber width did not differ significantly between groups. Error bars reflect standard deviation.]

Several caveats must be considered in interpreting this study. First, the study relied upon historical controls rather than contemporaneous data. Although a practical necessity, this approach introduces the theoretical concern that another factor in addition to viral infection may have contributed to poor nerve regeneration. Retrieving the raw data from three separate cohorts of rats of the same strain, age, and endpoint helps mitigate concerns regarding the validity of the control group. The pooling of animals from different studies does explain the relatively wide standard deviation in controls. Another limitation is that the sacrifice of animals at day 18 precluded a long-term assessment of whether there would have been "catch up" regeneration long after the infection was cleared.
The sacrifice of animals at day 18 was based on prior literature on the optimal timing of walking track analysis and the desire to minimize suffering of any animals with residual disease related to SDAV infection. The applicability of the findings reported here to other viral illnesses or other strains of rats is uncertain; all study and control animals were male Lewis rats. Last, the difficulty in pinpointing the precise onset of infection, combined with the fragmented data on weight loss, precluded statistical analysis of differences in weight loss for immunosuppressed versus immunocompetent animals infected with SDAV.

The results of this study have relevance to research on nerve regeneration in rats and may also have bearing on the ongoing use of FK506 when viral infection is observed. From an experimental standpoint, nerve regeneration studies conducted in animals that are suffering viral infection must be interpreted cautiously. Based on the data from this study, we would abort any future nerve regeneration study in which the animals were afflicted with viral infection. The findings in the FK506 group are particularly relevant, as these animals became sicker than their non-immunosuppressed counterparts. FK506 is, to date, arguably the most effective systemic neuroregenerative agent tested on human patients with peripheral nerve injury. This study demonstrates that the potential benefit of enhanced nerve regeneration may be negated in the setting of infection.

CONCLUSIONS

Rats infected with SDAV suffered impaired nerve regeneration following neurotmetic injury. SDAV infection dramatically slows nerve regeneration and functional recovery. Immunosuppression with FK506 also intensifies the systemic manifestations of viral illness, with earlier onset of symptoms, increased weight loss, and more protracted recovery.
Complementors as ecosystem actors: a systematic review

As downstream actors providing innovations that enhance the value of the core proposition, complementors have been recognized as indispensable in many definitions of ecosystems. The increasing attention they have received in the past years reflects the concern to enrich our knowledge of complementors. With a hybrid approach of bibliometric and content analyses, this systematic literature review aims at a clearer understanding of complementors in an ecosystem setting. The findings confirm complementors' strategic role in enhancing the ecosystem's focal value proposition and impacting ecosystem survival and success, a role studied more intensely since 2018. Several characteristics of complementors are also revealed. Autonomy is their most affirmed feature, yet an inconsistent understanding of complementors in different types of ecosystems is revealed. This study represents a pioneering attempt to systematically understand complementors as ecosystem actors through the extant literature. Various research gaps in the extant ecosystem research were also identified, providing research directions in terms of complementors' coopetitive interactions, strategies, and challenges in ecosystems.

Introduction

Ecosystems are the locus and structure where various loosely coupled actors interact to materialize (complex) value propositions (Adner and Kapoor 2010; Adner 2017; Kapoor 2018). Beyond the traditional interorganizational network literature, the ecosystem research stream emphasizes the complementors' participation in augmenting the focal value proposition (Tsujimoto et al. 2018). Complementors are downstream actors whose output enhances the value that customers generate from the use of a focal product or service (Brandenburger and Nalebuff 1996). Neglecting complementors in an ecosystem may lead to the failure of the focal firm and of the realization of the core value proposition (Adner 2021; Liang et al. 2022), as exemplified by the successful entry of Alfa Romeo and Fiat in the United States only once specialized mechanics and appropriate spare parts were made available (Brandenburger and Nalebuff 1996). Complementors' innovations and value-added activities, when bundled together with the focal firm's core offering, unlock the full-value potential of the core product, thereby improving the reputation and performance of the entire ecosystem (Teece 1986; Brandenburger and Nalebuff 1996; Adner and Kapoor 2010). Complementors' innovations, role, and presence are thus deemed necessary for the focal firm and the entire ecosystem (Adner and Kapoor 2010; Brusoni and Prencipe 2013). Nevertheless, proper coordination of complementors seems to be overlooked (Liang et al. 2022). Furthermore, in a business world where coopetition, i.e., simultaneous cooperation and competition, is increasingly ubiquitous (Brandenburger and Nalebuff 1996; Bengtsson and Kock 1999, 2000), the power of differentiation and competitive advantage may lie in the hands of complementors (Mantovani and Ruiz-Aliseda 2016). Hence, understanding complementors and their interactions with other ecosystem actors is crucial. In recent years, several reviews of various types of ecosystems have brought clarity and progress towards a theory of ecosystems (e.g., Cobben et al. 2022; Granstrand and Holgersson 2020; McIntyre and Srinivasan 2017; Rietveld and Schilling 2021; Shipilov and Gawer 2020; Tsujimoto et al. 2018). However, complementors have not been the main focus of analysis until now.
Despite receiving increasing academic attention in ecosystem research and being integrated into most definitions of (innovation) ecosystems (Jacobides et al. 2018), research on complementors in ecosystem settings remains dispersed across contexts and topics. Therefore, this review aims to clarify the development of complementors' research in the extant ecosystem literature, providing a comprehensive and in-depth understanding of their definitions, roles, and interactions within ecosystems. This systematic review investigates and synthesizes the state-of-the-art ecosystem literature to understand complementors as ecosystem actors. For this purpose, we relied on two methods: (1) a bibliometric analysis for an overview of the conceptual structure and development of the literature, and (2) a content analysis of the most relevant articles. To the best of our knowledge, this review represents a pioneering and timely attempt to synthesize the extant literature on complementors in ecosystems.

By providing a comprehensive understanding of complementors, the review makes several contributions to the ecosystem literature. First, we identify the definitions, core features, and roles of complementors. Additionally, we provide an overview of their interactions, strategies, and challenges in different types of ecosystems. Among others, autonomy emerges as a commonly affirmed characteristic of complementors. Despite their primary role of value enhancement, complementors' relationships with other ecosystem actors are often coopetitive. The intensity of these coopetitive relationships determines the ecosystem's structure, health, and governance system (Gawer 2014). For this reason, coordinating complementors presents a management challenge. Second, we show the contribution and connection of interrelated concepts, i.e., complements, complementary assets, and complementarity, to the ecosystem literature. We emphasize the need for conceptual rigor regarding these terms in ecosystem studies and offer delimitations and suggestions for cautious use of these concepts in connection with complementors to avoid confusion. Third, we provide several research avenues that could enrich our knowledge of complementors as ecosystem actors. Due to the imbalance in the number of studies on complementors in different types of ecosystems, further research on these actors in ecosystem types other than platforms is warranted.

Complementors: from game theory origin to ecosystem appropriation

With a game theory origin, the term complementors was first coined in Brandenburger and Nalebuff's book "Co-opetition" (1996). Together with suppliers, customers, and competitors, complementors formed the proposed value net of the focal firm (ibid.). At first, complementors were regarded only as value enhancers. Since the mid-1990s, the role of complementors has been attested as strategically vital to firms due to their ability to enlarge the business pie. "A player is your complementor if your customers value your product more when they have the other player's product than when they have your product alone" (Brandenburger and Nalebuff 1996, p. 18). However, complementors can also exhibit competitive tensions with other value co-creation actors (Brandenburger and Nalebuff 1996; Yoffie and Kwak 2006; Helfat and Raubitschek 2018). The fact that complementors and coopetition have the same origin is not surprising.
Rather than dividing the world into black and white, competitors and partners, coopetition offers the potential for a win-win situation. The simultaneous presence of cooperation and competition dimensions has become the new normal (Brandenburger and Nalebuff 1996; Bengtsson and Kock 2000). The potential for value co-creation is accompanied by competitive tension and value destruction (Gnyawali and Charleton 2018). Therefore, complementors have also been referred to as a type of coopetitors (Afuah 2000).

The concept of complementors was later adopted in ecosystem literature to define actors that add extra value through their innovations. Alongside complementarity, complementors became crucial notions in business ecosystem studies and were later endorsed in innovation and platform ecosystem research (Boudreau 2010; Srinivasan and Venkatraman 2010; Scholten and Scholten 2012; Tsujimoto et al. 2018), as an "ecosystem often takes a time to realize the benefits from complementors" (Kang et al. 2011, p. 287). As illustrated in Fig. 1, the literature on complementors has seen a surge in recent years, possibly an attempt to clear up some confusion surrounding this concept (Teece 2018). The products, activities, or services resulting from these complementarities are generally referred to as complements (Teece 1986, 2018). Failing to engage and coordinate with complementors can lead to the collapse of the focal firm's business (Adner and Kapoor 2010; Brusoni and Prencipe 2013; Mantovani and Ruiz-Aliseda 2016). Despite the wide variety of complements and their impact on the attractiveness and success of other businesses' products, complements are often overlooked. This prompted Teece (2018) to remark that "the literature on complements is both confused and complex" (p. 1373). Complementors may not generate all the types of complementarities, and third-party firms are not the only providers of complements. In some cases, focal firms may also produce complements internally, but offer them as separate products to the customer. In this instance, the firm has a dual role of complementor and focal firm (Adner and Kapoor 2010; Gawer and Henderson 2007; Zhu and Liu 2018). Google, for instance, owns the Android platform (the focal firm) while acting as a complementor through its applications on the Google Play Store.

To develop ecosystem theory, the interdependencies and complementarities among ecosystem actors have been highlighted recently. Particularly, non-generic complementarities are seen as delineating elements of ecosystems (Jacobides et al. 2018; Kapoor 2018; Teece 2018). However, due to application heterogeneity and identical word stems, the use of concepts such as complement, complementary innovation, and complementarity seems to cause conceptual and terminological confusion (Adner 2017; Teece 2018). This confusion may undermine the notional and application utility of these terms. To avoid such threats, clarifications are necessary to improve their conceptual rigor in the ecosystem literature.

In the discussion about complementors, we can hardly avoid a more fundamental concept, complementarity, with an undeniable presence in ecosystems (Jacobides et al. 2018). The definitions of complementarity are versatile, depending on the area of study (Xu et al. 2010). In neoclassical economics, complementarity is perceived as an impact on user value, i.e., "the marginal value of a variable increases with another variable" (Teece 2018), or on factor prices from the perspective of cross-price elasticity (Xu et al.
2010). In innovation research, by contrast, complementarity is seen as technological congruence and the synergistic interactions or effects resulting from combining or reconfiguring current technologies into novel solutions (Teece 1986, 2018; Xu et al. 2010). Despite its apparent conceptual simplicity, complementarity is a rather complex notion that is too "complicated to understand fully" (Samuelson 1974, p. 1255). For a non-exhaustive list of types of complementarities, consult Appendix 1. Furthermore, complementarities and complementary assets are sometimes used interchangeably (Morgan et al. 2013). Complementary assets are a broad term that encompasses "different types of complementary resources, capabilities, technologies, and activities that are required for the commercialization of a given core technology" (Kapoor and Furr 2015, p. 417). According to Teece (1986), complementary resources and capabilities represent a main determinant in a firm's strategic decisions intending to capture value. Originally, complementary assets were considered internal to a firm. However, with the development of the ecosystem stream, complementary assets crossed these boundaries by also encapsulating the complementary products and services delivered by third-party providers (Helfat and Raubitschek 2018). As Adner (2017) also stated, in the ecosystem context, the concepts of complementors, complements, and complementary assets "have suffered from a conceptual blending as improvements in any of these are treated as improving the focal firm's offer in the same general way" (p. 50). Thus, clarification and delimitation of these concepts are necessary. This literature review aims to elucidate and understand the features and differences of these concepts, i.e., complementors, complements, complementarity, and complementary assets.

In light of the overlaps among these concepts, which may go beyond sharing the same word stem, and the warning on conceptual blending, several questions arise: (1) What are complementors' characteristics and roles in ecosystems? (2) How do complementors behave in ecosystems? and (3) How are the intersecting concepts understood in ecosystem literature? The review also identifies significant gaps in ecosystem research concerning complementors.

Methods

This study adopts a systematic review approach to provide reliable and evidence-informed findings with minimized bias. Conducting, structuring, and synthesizing the findings in a systematic manner render transparency, replicability, and consistency of the review process and results (Davis et al. 2014; Snyder 2019; Cobben et al. 2022). Given the relatively new research track and the complex nature of ecosystems, a hybrid methodology combining bibliometric and content analyses is adequate to gain a comprehensive understanding of complementors. This approach involves a bibliometric analysis of the existing ecosystem literature and a qualitative analysis of the most relevant articles. The combination of bibliometric analysis techniques and the content analysis method increases the reliability of the findings and facilitates our understanding of the conceptual structure of the reviewed articles. To ensure the reliability and objectivity of the review, transparent and reproducible steps were employed in the search strategy and selection of articles. Initially, we performed a bibliometric analysis to grasp and map the relevant extant knowledge using quantitative methods (Zupic and Čater 2015).
Subsequently, a qualitative in-depth content analysis allowed for the identification of themes in the most relevant articles in the dataset (Gaur and Kumar 2017).

Search strategy and data selection

This review relies on a collection of bibliographic data extracted from two multidisciplinary databases, namely Web of Science and Scopus. The choice of these two major databases is due to their slightly different but overlapping coverage of journals. Relying on two databases is also motivated by the inclusion of all relevant articles in our dataset (Aria and Cuccurullo 2017; Gavel and Iselid 2008; Zhu and Liu 2020). Considering the narrow research focus and the still maturing field of the ecosystem stream, the initial search string contained two truncated terms, "ecosystem* AND complementor*" (Phase I, as illustrated in Fig. 2), to capture the plurals. The combined dataset resulted in 79 unique English articles from relevant subject areas until 2021. After reading the content of these articles, alternative wordings and synonyms for complementors were identified, as presented in Table 1. These additional terminologies were grouped based on similarity and used, together with "ecosystem*", in a second search round (Phase II). Generic terminologies, such as "third-party developers", were excluded. This step enlarged the dataset to an aggregate of 92 unique articles.

[Fig. 2 caption: Article selection and screening process for bibliometric analysis. Database S stands for Scopus, while W stands for Web of Science. Phase I original search string: ecosystem* AND complementor*. Phase II search strings: see Table 2. Phase III comprises two searches, i.e., forward citation (FC) and backward citation (BC). The S dataset for Phase III represents the 21 S merged dataset from Phases I and II, while the W dataset is the 69 W merged dataset. Due to different filter options, the results were first limited to publications until 2021. Under the relevancy criterion, Phase III results were further filtered based on the inclusion of "ecosystem*" and "complement*" in topic fields. For backward citations in Scopus, this filter was applied earlier (point a) due to limited options.]

[Table fragment: one Phase II search string reads "ecosystem*" AND ("complement producer*" OR "complement developer*" OR "complement provider*" OR "producer* of complement*" OR "developer* of complement*" OR "provider* of complement*"); related entries cite Winter et al. (2018) and, for "third-party developers of complementary modules", Benlian et al. (2015).]
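As an illustration of how such a query is composed, the sketch below assembles one Phase II search string in R from the synonym group quoted above; the grouping shown is an assumption for demonstration only.

```r
# Illustrative assembly of one Phase II search string: the complementor
# synonyms identified after Phase I (cf. Table 1) combined with "ecosystem*".
synonyms <- c('"complement producer*"',     '"complement developer*"',
              '"complement provider*"',     '"producer* of complement*"',
              '"developer* of complement*"','"provider* of complement*"')

phase2_query <- paste0('"ecosystem*" AND (',
                       paste(synonyms, collapse = " OR "), ')')
cat(phase2_query, "\n")
```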
Lastly, to ensure the inclusion of relevant papers, a third round of dataset expansion with forward and backward cited articles was conducted (82 plus 38 from Web of Science, and 108 plus 83 from Scopus, in Phase III). This step also minimized the probability of missing recent but contributing articles. Some of these articles might use only the term "platform" instead of "platform ecosystem". Such publications were initially bypassed by the search strings due to the inclusion of the keyword "ecosystem". The focus on the overall ecosystem concept is justified by its shared foundation across all ecosystems (e.g., business ecosystems, innovation ecosystems, platform ecosystems, entrepreneurial ecosystems). The variety of ecosystem types potentially emerged from the lack of consensus on a core definition (Ritala and Almpanopoulou 2017). Nevertheless, the differentiation between business ecosystems and innovation ecosystems remains unclear in the literature (Gomes et al. 2018). Thus, this review uses the umbrella concept of ecosystem to provide an overview of the research development on complementors across different types of ecosystems.

At each search stage, the same exclusion criteria based on publication type, language, scientific disciplines, and relevancy were applied. Consequently, only academic articles written in English and published until the end of 2021 were selected. However, early access articles published in 2022 were also included. The search results were refined based on relevant subject areas, e.g., business, management, and social sciences. Before extraction, further data cleaning was performed. By reading the articles' titles, abstracts, and keywords, we screened the articles based on relevant use of the term "ecosystem". Only articles that refer to business-related ecosystems were included, e.g., business ecosystems, innovation ecosystems, platform ecosystems, digital ecosystems, entrepreneurial ecosystems. In this relevancy stage, we excluded all articles that refer to other types of ecosystems, e.g., marine, biological, ecological, agricultural, or architectural ecosystems. Additionally, articles that used "complement" as a verb in an unrelated context were removed. In cases where such information was not available in the abstracts, the article's content was read to determine its relevance. After extracting and merging the datasets to remove duplicates, 253 unique articles entered the bibliometric analysis. Two more special issue introductory articles were excluded. Figure 2 illustrates the selection process.

Bibliometric analysis

The final dataset for bibliometric analysis comprised 253 articles, as shown in Fig. 2. This method objectively served the purpose of this study. Bibliometric analysis also revealed critical information about the body of ecosystem literature involving, regarding, or mentioning complementors. First, a descriptive analysis of the bibliographic metadata was conducted. The data were harmonized by converting singulars into plurals (e.g., ecosystems, complementors, platforms, complements) to avoid double occurrences of the same keywords. The bibliometric analysis focused on the conceptual structure to understand the main themes and trends regarding complementors in ecosystems. Relevant keywords were mapped to visualize their growth dynamics using the bibliometrix package in RStudio (Aria and Cuccurullo 2017) and Excel. Further science mapping of the conceptual structure through co-word analysis was performed in RStudio and VOSviewer, revealing relationships and similarities between articles based on keyword co-occurrences (Su and Lee 2010; Zupic and Čater 2015; Aria and Cuccurullo 2017). We further generated conceptual thematic maps illustrating the centrality and density, as well as the evolution of topics that represent the dataset, providing insight into the research topics contained therein and revealing links between concepts or themes.
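A minimal sketch of this workflow with the bibliometrix package is given below. File names, parameter values, and the displayed network size are illustrative assumptions, not the study's exact settings.

```r
# Minimal sketch of the import, merging, and mapping steps with bibliometrix
# (Aria and Cuccurullo 2017); file names and parameters are placeholders.
library(bibliometrix)

M_scopus <- convert2df("scopus_export.csv", dbsource = "scopus", format = "csv")
M_wos    <- convert2df("wos_export.txt",    dbsource = "wos",    format = "plaintext")

# Merge both sources and drop duplicate records
M <- mergeDbSources(M_scopus, M_wos, remove.duplicated = TRUE)

# Descriptive overview of the corpus (authors, sources, annual growth, etc.)
results <- biblioAnalysis(M)
summary(results, k = 10)

# Co-word network on author's keywords, cf. the co-occurrence map
NetMatrix <- biblioNetwork(M, analysis = "co-occurrences",
                           network = "author_keywords", sep = ";")
networkPlot(NetMatrix, n = 50, type = "fruchterman", labelsize = 0.7,
            Title = "Author-keyword co-occurrence network")
```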
Content analysis

Bibliometric analysis provides an overview of the ecosystem research stream through a rigorous scientific process. However, it relies solely on bibliographic data for analysis, while the articles' contents are not taken into consideration. Therefore, a content analysis was performed to provide a more in-depth explanation of complementors within the relatively new context of ecosystems (Weber 1990; Duriau et al. 2007). This second analysis of the most influential studies offered a comprehensive understanding of the reviewed articles and trends. Similar to other reviews (e.g., Alon et al. 2018; Bretas and Alon 2021), the selection of articles was objectively performed by intersecting the most globally referenced documents (at least 10 global citations) with the most locally referenced ones (at least 10 local citations), resulting in 44 articles for the content analysis. As the selected articles may not all take the perspective of complementors but still provide relevant information, the content of the 44 articles was thoroughly read and systematically reviewed. The articles were coded in NVivo, a qualitative data analysis software, to uncover relevant themes in connection to complementors, as detailed in Sect. 5. The codes were delimited by the ecosystem type that set the scene in the 44 articles, allowing for the identification of variations in perceptions and understanding of complementors in different settings. The codes were grouped into categories to generate the main themes. For an overview of the articles included in the content analysis, see Appendix 2.
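The selection rule for this content-analysis subset can be sketched as follows, assuming the merged bibliometrix data frame M from the earlier sketch and the column layout produced by convert2df() and localCitations(); the citation thresholds follow the text.

```r
# Sketch of the content-analysis selection rule: keep articles with at least
# 10 global citations (TC) and at least 10 local citations (LCS).
loc <- localCitations(M)    # local citation scores computed within the corpus

global_ok <- M$TC >= 10
local_ok  <- M$SR %in% loc$Papers$Paper[loc$Papers$LCS >= 10]

core_set <- M[global_ok & local_ok, ]
nrow(core_set)              # 44 articles in the review reported here
```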
Descriptive analysis

The dataset for bibliometric analysis spanned over 14 years, from 2007 to 2021, consisting of articles written by 524 authors and published in 107 journals, with a compound annual growth rate of 40.08% in scientific production. Multi-authored articles dominated the sample, accounting for 217 articles. As illustrated in Fig. 3, the research agenda on complementors received increasing academic attention from 2012 onwards, with an initial peak in 2010. This peak is likely attributable to Adner and Kapoor's seminal work on value creation and interdependence in innovation ecosystems (2010), which triggered subsequent upsurges. Innovation ecosystem-based studies dominated in 2016, but since 2017, research on platform (ecosystems) has been the leading setting to study complementors. The year 2018 marks the first noticeable apogee, a turning point toward greater academic attention. The 2018 studies contributed attempts to theorize and conceptualize different types of ecosystems (Helfat and Raubitschek 2018; Jacobides et al. 2018; Kapoor 2018; Teece 2018) and numerous case studies, particularly on platform ecosystems, contributing to the understanding of complementors (e.g., Cennamo 2018; Inoue and Tsujimoto 2018; Karhu et al. 2018; Ozalp et al. 2018; Rietveld and Eggers 2018; Zhu and Liu 2018). Since 2018, there has been a constant increase in annual publication output, particularly in platform ecosystem empirical studies.

The articles included in the dataset covered various types of ecosystems. Complementors in platform ecosystems are the most frequently studied (e.g., Benlian et al. 2015; Boudreau 2012; Boudreau and Jeppesen 2015; Cenamor et al. 2013; Cennamo 2018; Gawer 2014), while complementors in entrepreneurial ecosystems are the most disregarded. Most of the articles in the dataset are empirical studies, with quantitative studies dominating. The remaining articles comprise conceptual or theoretical publications, literature reviews, and experiments or modelling, in descending order. Table 2 illustrates a similar disproportion in research design captured by the two datasets. The reliance on empirical studies is understandable, given the still-developing ecosystem research seeking theorization of the field. The presence of several literature reviews also justifies the pursuit of clarity and structure in the ecosystem field. Synthesizing and integrating existing literature paves the way to a deeper understanding of definitions, origins, development, key challenges of ecosystems, and research directions (e.g., De Reuver et al. 2018; Nambisan et al. 2018; Thomas et al. 2014; Tsujimoto et al. 2018). However, to the best of our knowledge, this is the first attempt to systematically review complementors.

Keywords' growth trends

Examining the yearly cumulative occurrences, complementors resurfaced as an author's keyword in ecosystem-related studies in 2010. Figure 5 depicts an ascending trend after 2017, indicating the increasing academic interest in complementors as ecosystem actors. This phenomenon can be attributed to the upsurge of ecosystem studies that notice and/or regard complementors as undisputable actors. Complementors' products and services emerged as a top author's keyword, under the terminology of complements, in 2010, the same year as complementors. From 2013 onwards, complementary assets and complementarities also emerged as keywords, reflecting their importance in sustaining the ecosystem research track. However, the dynamics of these keywords have shown a slowdown in the usage rate recently. In contrast, the keyword complementors has accelerated since 2018, significantly distancing itself from the other keywords.

Keyword co-occurrence analysis

Through co-word network analysis of the dataset, the conceptual structure of knowledge can be mapped. This analysis captures relationships between relevant concepts based on their co-occurrence in a set of articles. By relying on author's keywords as a method parameter, important and emerging topics are uncovered. The size of a node represents the frequency of the keyword. Figure 6 exhibits two dominant clusters, representing platforms (purple) and the umbrella concept of ecosystems (green). The emergence of platforms and platform ecosystems (blue) in two clusters is not surprising, given the increasing preference for using only platforms. It thus illustrates the detachment of platform studies from the ecosystem stream, establishing its own arena. Complementors emerged as a distinct keyword under the platform ecosystems cluster (blue). Under this cluster, complementors link heavily to platform governance, showcasing their critical role in a platform ecosystem setting. However, complementors also connect strongly with concepts from other clusters, such as business ecosystems, platforms, ecosystems, and innovation. Additionally, weaker links of complementors include complement quality, digital transformation, competition (specifically platform competition), sustainability, modularity, and governance. These topics comprise potential research avenues in connection to complementors. While complements materialized as a topic under the open innovation cluster, it connects with all other clusters. Thus, complements seem to contribute to a wide range of topics, e.g., ecosystems, business ecosystems, platforms (ecosystems), platform governance, value creation, and value capture. Unlike complementors and complements, complementarities and complementary assets emerged under the same cluster (red), but without a direct link. Besides ecosystems, the contribution of complementary assets is primarily limited to the cluster it belongs to, i.e., dynamic capabilities, value creation, value capture, business models, and network effects.
In contrast, complementarities appeared more versatile, contributing to several research fronts, particularly related to platform (ecosystems), but also ecosystems. Specifically, complementarity connected with platform (ecosystems), digital platforms, ecosystems, strategy, value creation, network effects, platform competition, complement quality, and further with modularity. The lack of direct links among complementors, complements, complementary assets, and complementarity exposes the need for connecting research. The disjunction between these topics confirms Adner's argument (2017). Jointly exploring these concepts may reveal overlaps and discrepancies to better understand their individual contribution to the ecosystem field.

Thematic analysis

Thematic analysis is a method of plotting connections on a two-dimensional matrix based on density and centrality functions. Density refers to a theme's development, while centrality captures its importance in a specific field (Aria and Cuccurullo 2017). Figure 7 illustrates the thematic analysis of the dataset. The node size indicates the number of keywords captured by the respective topic. The upper right quadrant depicts the motor themes that lead the literature, with high density and high centrality. These "driving" topics mainly include open innovation and complementors. It should be noted that the lower development degree of complementors determines its crossing into the basic themes quadrant. This reflects the research potential of complementors that is left unexplored in ecosystem literature. The upper left quadrant displays niche themes that lack strong representation in the dataset. Themes such as complexity and entrepreneurial ecosystems require further development in connection with complementors. The lower left quadrant of emerging or declining themes includes topics like firm performance and innovation (ecosystems). The umbrella concept of ecosystems, along with business models and platform ecosystems, appears as a basic theme in the lower right quadrant. Due to a lower centrality, innovation ecosystems also traverse into the basic themes. These topics show a high degree of relevance to be researched further.

Thematic evolution

Thematic evolution is a method that divides a given period into time intervals and charts the evolution of themes across time. In this study, the inclusion index weighted by author's keyword occurrences, with a minimum frequency of five, was utilized to map the research field into an alluvial graph. Three cutting years were chosen based on the most notable yearly surges of publication, as shown in Fig. 5. Figure 8 displays the longitudinal thematic map with different representative themes for each period. Each term corresponds to a topic that can converge into another mainstream theme over time or diverge into multiple themes. Complementors emerged as a top theme in the third time slice (2019-2020), stemming from three themes: innovation, network effects, and competition. Complementors' incremental innovations, i.e., complements, create indirect network effects that benefit the entire ecosystem. Despite their collaborative nature in enhancing the core offering's value, complementors' link with the competition theme is not unexpected, given their initial definition involving "some inherent [competitive] tensions" (Brandenburger and Nalebuff 1996, p. 17). Since 2021, complementors have been mainly captured by platform ecosystems research, emphasizing their integral role in platforms.
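Both thematic analyses can be sketched with bibliometrix as below. The field "DE" (author's keywords) and the minimum frequency of 5 follow the text, while the cutting years shown are placeholders for those chosen from the publication surges in Fig. 5.

```r
# Sketch of the strategic thematic map (density x centrality quadrants)
# and the longitudinal thematic evolution; parameters partly illustrative.
tm <- thematicMap(M, field = "DE", n = 250, minfreq = 5)
plot(tm$map)                               # motor/niche/basic/emerging quadrants

te <- thematicEvolution(M, field = "DE", years = c(2016, 2018, 2020),
                        n = 250, minFreq = 5)
plotThematicEvolution(te$Nodes, te$Edges)  # alluvial graph across time slices
```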
Key findings of bibliometric analysis

As a keyword, complementors have been increasingly used in ecosystem literature, particularly since 2018. However, terms with the same word stem show a lower usage rate. The conceptual structure mapped through keyword co-occurrence analysis revealed strong links between complementors and the platform and business ecosystem streams, as well as platform governance. Complementors in other types of ecosystems, such as innovation ecosystems, require further exploration. The weaker links with (platform) competition, complement quality, sustainability, modularity, and governance present areas for future research. Additionally, the disconnect between complementors, complements, complementary assets, and complementarity suggests the need for bridging research. The thematic analysis indicates that complementors in ecosystems is a topic that requires further development, particularly in connection with entrepreneurial ecosystems and the competition dimension. Although the thematic evolution shows the contribution of competition research to complementors studies, complementors have mainly been captured by platform research since 2021.

Complementors: definitions, characteristics, and roles

In ecosystem settings, complementors take different shapes depending on the cited sources, which leads to inconsistencies across various definitions of complementors in the ecosystem literature. The concept of complementors has also overlapped with intersecting terms such as complements and complementary assets. This blending makes their distinction difficult to grasp and explain, reinforcing the findings from Sect. 4.2. As shown in Table 3, some researchers quote Brandenburger and Nalebuff's (1996) definition of complementors, focused on enhancing value (Kapoor 2013; Kapoor and Lee 2013; Gawer and Cusumano 2014; Adner 2017; Rietveld and Eggers 2018), while others refer to Teece's (1986, 2018) work on complementary assets, which are provided by complementors (Helfat and Raubitschek 2018). Lastly, in the platform ecosystem context, complementor- or platform-related studies are cited to define complementors. Complementors are generally perceived as distinct downstream actors (Adner and Kapoor 2010) known for providing "complementary products and services that contribute towards the focal offer's value creation" (Kapoor 2018, p. 7). They can be viewed as part of the economic game for value capture from a game theory perspective (Brandenburger and Nalebuff 1996) or as an extension of the supply chain within the innovation ecosystem literature (Adner 2017). In business ecosystem literature, complementors may be regarded as "neither buyers nor suppliers" (Kapoor 2013, p. 5). However, certain expressions, such as "firms providing complementary components" (Hannah and Eisenhardt 2018), may erroneously bring complementors closer to the component supplier category. Besides the common definitions of complementors from business and innovation ecosystems, the platform research stream offers a greater variety. In platform ecosystems, complementors are seen as "key sources of distinct valuable resources" (Cenamor et al. 2013, p. 413) or innovation (Boudreau 2012). Despite their importance, complementors are sometimes unfairly associated with consumers and treated as such, due to their peripheral (Wareham et al. 2014) or downstream location in the value chain (Adner and Kapoor 2010).
Additionally, in platform studies, complementors are often defined as suppliers of complementary products and/or services (West and Wood 2013; Thomas et al. 2014; Benlian et al. 2015; Boudreau and Jeppesen 2015; Kang and Downing 2015), or even "supply-side users" (Benlian et al. 2015). Another pattern, specifically linked to the digital platform context, refers to complementors as software providers (Boudreau 2012) or app developers (Benlian et al. 2015; De Reuver et al. 2018; Eckhardt et al. 2018; Kapoor 2018; Zhu and Iansiti 2012).

Despite the diverse definitions, complementors exhibit several key characteristics. One common feature is their autonomy, which is particularly emphasized in platform ecosystem studies (Boudreau 2012; Ceccagnoli et al. 2012; Cenamor et al. 2013; West and Wood 2013; Thomas et al. 2014; Wareham et al. 2014; Benlian et al. 2015; Boudreau and Jeppesen 2015; Cennamo 2018; De Reuver et al. 2018). Complementors may not have formal partnerships or signed agreements with other ecosystem actors. Moreover, they may not share the same supply chains as other ecosystem members (Gawer and Cusumano 2014). Hence, the focal firm usually has no direct control over complementors or their products and services (Cennamo and Santaló 2019). However, in platforms, complementors rely on the platform technology to develop, supply, and promote their complements to users. In this way, complementors earn legitimacy and gain access to platform resources. This reliance, though, implies complying with rules imposed by the platform owner, making complementors "platform followers" (Nambisan et al. 2018, p. 360). Stemming from their autonomy, another feature of complementors is their adaptability. This characteristic allows complementors to originate from different markets (Gawer 2014) and be highly responsive to changes in the focal product (Kapoor and Agarwal 2017), the market, and customer demand. These adjustments would otherwise be more difficult for the focal firms to implement (Wareham et al. 2014). Heterogeneity is another frequently stated characteristic of complementors in ecosystems. A variety of complementors is desired in any type of ecosystem because heterogeneous complementors deliver a wide variety of innovative complements that enhance the focal product's value. Platform studies often mention the variety characteristic, which, together with a large number of complementors, can generate indirect network effects (Boudreau 2012; Scholten and Scholten 2012; Boudreau and Jeppesen 2015; Cennamo 2018; Cennamo and Santaló 2019). Complementors are also considered to be rational and entrepreneurial-minded (Boudreau 2012; Cennamo 2018; Cennamo and Santaló 2019). They pursue their own interests of maintaining a competitive portfolio, acquiring and protecting knowledge, and gaining experience. Meanwhile, they deliver innovative solutions that meet customer needs at the speed required by the market (Boudreau and Jeppesen 2015; Cennamo and Santaló 2019). Figure 9 presents the characteristics and roles of complementors.

In line with the aforementioned characteristics, complementors' roles in ecosystems are multifaceted. First, complementors play an indispensable value enhancement role in materializing the core value proposition and unlocking its full-value potential (Kapoor and Agarwal 2017; Kapoor 2018).
Through network effects, complementors can meet numerous and various customer needs, generating strong competitive advantages for the entire ecosystem and contributing to its survival, development, and progress (Boudreau 2010; Williamson and de Meyer 2012; Wareham et al. 2014; Adner and Kapoor 2016; Kapoor and Agarwal 2017; Rietveld and Eggers 2018; Teece 2018; Cennamo and Santaló 2019). Complementors' value creation also impacts the performance and success of the focal firm (Kapoor and Agarwal 2017). This reliance on complementors has been increasingly emphasized in business ecosystems (Kapoor 2013; Tsujimoto et al. 2018) and platform ecosystems (Eckhardt et al. 2018), as complementors determine the shift "from product to network value" (Li 2009, p. 380). Secondly, complementors were found to also act as legitimacy facilitators in platform ecosystem studies. Whenever a platform releases a new technological generation, complementors can contribute to achieving legitimacy for the upgraded platform (Cennamo 2018). Thus, complementors are a critical source of ecosystem legitimacy. Thirdly, complementors may act as ecosystem disruptors, exhibiting challenges and threats to ecosystem incumbents. During intergenerational transitions in technological paradigms, complementors could be crucial reasons for disruption in ecosystems.

[Fig. 9 caption: Complementors' characteristics and their roles in ecosystems]

A fourth role of complementors is ecosystem defender. They have the potential to obstruct others from entering the ecosystem by increasing the entry barriers and intensifying competition. However, in platform ecosystems, a high number of complementors also increases the demand and number of users through network effects and the diversity of complements offered (Rietveld and Eggers 2018).

Complementors' interactions: participation determinants, challenges, and strategies

Interactions with focal firms yield several benefits, primarily deriving from the roles of complementors. However, the platform literature presents a more extensive list of benefits, e.g., enhancing commitment and value co-creation through knowledge and resource sharing (Nambisan et al. 2018), increasing the attractiveness of the platform (Boudreau 2012; Benlian et al. 2015), and showing confidence in the future of the respective platform (Ceccagnoli et al. 2012; Cenamor et al. 2013). This confidence transmits to potential users (Adner and Kapoor 2010; Cenamor et al. 2013). The benefits of complementors' participation in ecosystems rest on the intensity of their involvement, which is influenced by various determinants (see Table 4). In business ecosystems, the low appropriability risk (Kapoor 2013), compatibility consensus, and complementors' willingness to invest (Kapoor and Lee 2013) play crucial roles. Platform ecosystem studies provide further insights into the determinants of complementors' participation, e.g., platform-related factors like the size of the installed base (Cenamor et al. 2013; Cennamo 2018; Cennamo and Santaló 2013; Kapoor 2018), governance mechanisms (Boudreau and Jeppesen 2015; Karhu et al. 2018), high purchase propensity (West and Wood 2013), the number of incentives (Benlian et al. 2015; Eckhardt et al. 2018), an adequate share of value capture (West and Wood 2013; Cennamo 2018; Eckhardt et al. 2018), the degree of platform openness (Benlian et al. 2015; Karhu et al. 2018), and the extent of complementarity (Kapoor 2013). The complexity of complementors' relationships requires alignment of interests (Benlian et al.
2015), capabilities, and activities among the involved ecosystem members (Helfat and Raubitschek 2018). In addition, user behavior (Rietveld and Eggers 2018), time, and resources (Boudreau and Jeppesen 2015; Eckhardt et al. 2018) are strong determinants that complementors consider when participating in an ecosystem. Two of the ecosystem features or governing forces that impact and shape complementors' interactions are interdependence and coopetition. Interdependence is the glue between members, the causal relationship between any two ecosystem actors that are affected by any change in one or the other (Jacobides et al. 2018). Business ecosystem studies suggest that the interdependence with complementors differs from that with suppliers. The distinction lies in the position of the actors along the value chain (Kapoor 2018). Due to interdependence, balancing complementors' individual interests with the collective goals of the business ecosystem is challenging for complementors (Wareham et al. 2014).

[Table 4 fragments: Benefits of participation: confidence in the future of the platform, which transmits to potential users (Ceccagnoli et al. 2012; Cenamor et al. 2013); being more likely to react in a timely manner (Eckhardt et al. 2018); creating comparative advantage through specialization and diversity (Boudreau 2012); increasing platform attractiveness (Benlian et al. 2015). Participation determinants: low appropriability risk (Kapoor 2013); willingness to invest; compatibility consensus. Responses to platform owner market entry: (a) diminish development and innovation efforts (a common reaction); (b) continue to innovate the product in question to enhance the potential of being acquired by the platform owner, or lock in as many customers as possible to ensure future profits and market position; (c) focus on short-term profits by changing the pricing strategy considering the degree of vulnerability, thus diminishing the chance of being acquired. Potential for research exploration: tackling platform growth, knowledge sharing and peer learning, forming an identity (Boudreau and Jeppesen 2015); low ecosystem entry barriers for complementors (Wareham et al. 2014).]

Another dynamic featured in complementors' interactions with other ecosystem actors is coopetition, i.e., simultaneous cooperative value creation and competitive value capture. Complementors pursue the common goal of realizing the core value proposition, thereby increasing the business pie for all ecosystem actors. However, owing to their autonomy, complementors may exhibit competitive dynamics in capturing their share of value. Thus, complementors' relationships are characterized by the value creation-capture duality (Kapoor 2013). Their different degrees of cooperation and competition shape the ecosystem's structure and its governance system (Gawer 2014). These two forces, interdependence and coopetition, generate various challenges for both focal firms and complementors. On the one hand, collaboration with complementors can strain away (significant) value and profits from the focal firm(s) (Teece 2018). Complementors also present coordination challenges for focal firms, which can take various forms, such as delays, incompatibility, slow adoption, low performance, and integration issues (Adner and Kapoor 2010, 2016; Kapoor 2018). These challenges may affect the reputation, success, and health of the ecosystem as a whole (Scholten and Scholten 2012). Without proper coordination, these complement challenges can lead to bottlenecks in realizing the ecosystem's value proposition (Adner and Kapoor 2010).
On the other hand, complementors also face challenges due to an obvious power imbalance in their interactions with the focal firms. The most significant challenge arises when the focal firm enters the complementary market space, turning the two into direct competitors. This theme has received some research interest in business ecosystems (Kapoor 2013), but this coopetition scenario seems more common in platforms (Cennamo 2018; Foerderer et al. 2018). The generally unavoidable complementary market entry by the platform owner may be aimed at preventing complementors from becoming too powerful (competitors) (Wen and Zhu 2019). Under the threat of direct competition from the platform owner, complementors' strategies vary according to the number and popularity of the affected products in their portfolio. To deal with the aforementioned challenges, complementors may resort to establishing formal relationships with the focal firms instead of loosely coupled interactions (Ceccagnoli et al. 2012; Kapoor and Lee 2013) or engaging in multihoming, a complex strategy more common in platforms that may result in access to more market opportunities and distributed risk (Kapoor 2018; Cennamo and Santaló 2019). However, multihoming may dilute the platform's value proposition, generate technical integration issues, and affect the quality of multihoming complements. Further research on multihoming complementors in other ecosystem types would render a more profound understanding of this strategy.

In addition to the interactions between complementors and focal firms, interactions among complementors have also received some attention. Collaboration among complementors was found to be more prone to creating positive network effects in platform studies (Boudreau and Jeppesen 2015). The motivations behind this action may be various, such as tackling platform growth through knowledge sharing, forming an identity by affiliation, or combining resources and capabilities (Kapoor 2013; Boudreau and Jeppesen 2015). However, complementors inevitably compete for the same user base (Boudreau and Jeppesen 2015) or for profit from jointly developed innovations (Rietveld et al. 2019; Zhu and Liu 2018). These competitive dynamics can shift and intensify for various reasons, such as the platform owner's power over complementors' survival and promotion (Rietveld et al. 2019), the share of value captured by the platform owner (Wen and Zhu 2019), the entry (or even intent to enter) of the platform owner into the complementary market, thereby also attaining the position of complementor (Boudreau 2010), numerous complementors and, subsequently, overcrowding effects (Boudreau 2010; Gawer and Cusumano 2014; Wareham et al. 2014; Ozalp et al. 2018), and low ecosystem entry barriers for complementors (Wareham et al. 2014). These actions not only affect participating complementors, but also demotivate prospective complementors from entering the ecosystem (Gawer and Cusumano 2014; Wareham et al. 2014). The strategies complementors employ in their interactions with other complementors represent a wide avenue for future research in any type of ecosystem.

Complements and complementary assets

In ecosystem literature, complementors are closely related to complements and complementary assets. Although these concepts are distinct, their overlaps can cause confusion (Adner 2017).
Complements are defined as additional innovations that enhance the value of the focal product, allowing it to reach its full potential, as its individual value would otherwise be lower (Adner and Kapoor 2010; Gawer and Cusumano 2014; Eckhardt et al. 2018; Karhu et al. 2018; Ozalp et al. 2018). Complements are not only offered by complementors; oftentimes focal firms also own and deliver complements (Cenamor et al. 2013). Complementors' output is identified as downstream or third-party complements, distinguishing them from upstream complements or components in terms of location (Adner and Kapoor 2010; West and Wood 2013; Thomas et al. 2014; De Reuver et al. 2018; Kapoor 2018; Parker and Van Alstyne 2018). To minimize confusion, complements delivered by the platform owner are seldom referred to as first-party complements (Cennamo 2018). In ecosystem studies, complements are referred to by various names; the greater variety of terminologies for platform complements may arise from the common integration of third-party complements by platform owners (West and Wood 2013). The diversity, quality, and generativity level of complements contribute to the success of focal firms (Adner 2006), create indirect network effects, and impact the value of the entire ecosystem (Jacobides et al. 2018; Rietveld et al. 2019). However, complements also increase the interdependence and complexity of the ecosystem (Zhang et al. 2022). In platform ecosystems, the number and variety of complements are typically larger and spur innovation (Cenamor et al. 2013; Kang and Downing 2015; Cennamo et al. 2018; Eckhardt et al. 2018; Cennamo and Santaló 2019). Complement quality directly correlates with user satisfaction (Cennamo and Santaló 2019), but the impact of complements varies depending on each complement's popularity (Cenamor et al. 2013).

Considering the blurred notional delimitations among the relevant keywords, as noticed by Adner (2017), complementary assets have also emerged as a distinct theme. While connected to Teece's framework (1986), which considers a firm's complementary assets (i.e., capabilities and resources) in its strategic decisions for capturing value, complementary assets are also relevant in the context of an ecosystem's value creation (Adner and Kapoor 2010; Teece 2018). The availability of complementary assets in ecosystems offers various advantages, including acting as an entry barrier (Ceccagnoli et al. 2012) and contributing to value creation (Thomas et al. 2014). The development or modification of complementary assets must occur before product commercialization to allow full potential extraction by customers (Adner and Kapoor 2010). Nevertheless, ownership of complementary assets, or the capability to develop and/or manage them, creates a competitive advantage that influences the division of profits (Teece 2018). Complementary assets are not only internal to focal firms but are also used as a synonym for downstream complements provided by third-party complementors (Li 2009; Thomas et al. 2014). Complementary assets can be categorized as vertical and lateral or, more commonly, depending on the type of complementarity involved (as with the categorization of complements) (Teece 2018).

Complementarity

"Complementarity lies at the core of ecosystems" (Teece 2018). As a key feature and building block of ecosystem theory, an understanding of the different natures of the complementarities generated by complementors is needed, although shedding light on this concept is still considered a challenge (Jacobides et al.
in ecosystem research, complementarity is generally associated with the economic synergy created by a mix of at least two assets that generate a higher value or utility as a combined solution (Cenamor et al. 2013; Kapoor 2018). The value-added potential of complementarities is contingent on the effectiveness of relationships (Adner 2006), interdependence (Kapoor 2013; Cennamo 2018), and the alignment of value co-creating interactions within ecosystems (Thomas et al. 2014; Jacobides et al. 2018). Complementarities have the power to enhance value and shape an ecosystem's development (Teece 2018), determine its competitiveness, and increase its resilience (Thomas et al. 2014). Analyzing the nature of complementarities contributes to understanding the value creation-capture duality within, and also across, ecosystems (Jacobides et al. 2018). Complementarities in ecosystems come in various types, such as unidirectional or bidirectional and generic or specialized/specific (Jacobides et al. 2018; Kapoor 2018), and unique or supermodular/Edgeworth (Jacobides et al. 2018). However, the complementarities generated specifically by complementors are multilateral, involving reciprocal influences on various parties' value, and nongeneric, assuming (some) customization and coordination of operations and other aspects (Jacobides et al. 2018). Nongeneric complementarities and their management represent the essence, dynamics, and distinctive features of ecosystems (Jacobides et al. 2018). Downstream complementarities, whether unique or the more prevalent supermodular/Edgeworth type, determine the degree of interest that participants have in ecosystem health (Jacobides et al. 2018). Furthermore, complementarities specifically connect to indirect network effects, as network effects reinforce the impact generated by complementarities among ecosystem actors (Gawer 2014). It is worth noting that supermodularity in consumption can result in both direct and indirect network effects (Jacobides et al. 2018). However, the value-enhancing impact of complement availability, number, and variety on the focal proposition may not be unlimited. In some ecosystems, a large number of complementors may deter others from joining due to saturation (Gawer and Cusumano 2014; Cennamo and Santaló 2019). Further research is needed to determine when indirect network effects cease to be beneficial for the ecosystem.
Key findings of content analysis
In our content analysis, we first uncovered an inconsistency across various definitions of complementors within ecosystems, which can lead to confusion in understanding their role(s). To clarify this concept, we identified several key characteristics of complementors, i.e., autonomy (although in platforms, complementors rely on the provided infrastructure to develop and sell their complements (Nambisan et al. 2018)), adaptability, heterogeneity, rationality, and an entrepreneurial mindset. Building upon these identified characteristics, we unveiled complementors' roles in ecosystems, i.e., value enhancement, legitimacy facilitation, ecosystem disruption, and ecosystem defense. We then mapped out complementors' interactions, participation determinants, challenges, and strategies in relation to the interacting party, i.e., focal firms and other complementors. While platform studies offer valuable insights into these aspects, comprehensive research attention is still needed to understand the dynamics of complementor interactions across various types of ecosystems.
Finally, we addressed the conceptual overlaps with intersecting terms, i.e., complements, complementary assets, and complementarity. By showcasing their distinctions, we clarified their unique roles and contributions within ecosystems.
Discussion and future research
To clarify our understanding of complementors in an ecosystem setting, we conducted a systematic literature review using two methods. The bibliometric and content analyses revealed insights at different levels, and the systematic approach ensured the objectivity and reliability of the findings. However, this review is not exempt from limitations. First, the inclusion of the keyword "ecosystem" in the search string might have omitted some platform ecosystem studies and more recent articles. Second, the number of articles in the content analysis is limited. To overcome these limitations, we widen our discussion by including relevant articles outside our dataset and chart future research avenues for the literature stream on complementors in ecosystems.
Complementors as ecosystem actors
With increasing attention from the ecosystem research stream, complementors are recognized as key actors in realizing the core value proposition. Their contribution to the ecosystem's core value proposition unlocks greater potential for customers and expands the pie for all ecosystem actors (Kapoor and Agarwal 2017). Additionally, complementors play a pivotal role in determining the survival and development of the ecosystem (Brandenburger and Nalebuff 1996; Iansiti and Levien 2004; Kapoor and Lee 2013). Consequently, academic interest in studying complementors as ecosystem actors has grown, particularly since 2018, with a focus on platform ecosystems. However, this topic is tackled disproportionately across different types of ecosystems, calling for more research on complementors in business, innovation, and entrepreneurial ecosystems. Defining complementors in ecosystems poses challenges due to the different foundations being used; this practice stretches the concept in multiple directions and dilutes its perceived usefulness. Although first introduced by Brandenburger and Nalebuff (1996), complementors take on various forms in different kinds of ecosystems, depending on the cited source and the emphasized features, i.e., their value-enhancement nature (Brandenburger and Nalebuff 1996; Kapoor 2013), value capture capability (Teece 2018), and autonomy (Boudreau 2012; Boudreau and Jeppesen 2015; Ceccagnoli et al. 2012; Cusumano and Gawer 2002; Gawer and Cusumano 2014; Yoffie and Kwak 2006; Zhu and Iansiti 2012). Generally, complementors are actors whose output enhances the value of a core product (or service) when the two are consumed together, an integration that is (normally) performed by the customer. While this statement holds true in platform ecosystems, where users typically choose the complements, other ecosystems, like innovation ecosystems (without a platform), may require dual-party coordination between focal firms and complementors before commercialization. In such cases, the responsibility to unlock the full value potential lies with both parties. For instance, Adner and Kapoor's (2010) example of the Airbus A380 and airports demonstrates that integration must sometimes be performed before the offering is made available to customers. Thus, adjustments to the definition of complementors may be necessary to accommodate different ecosystem types. Considering complementors' characteristics, further reflections on their autonomy are warranted.
As the most frequently cited characteristic and a reason for the complexity of their coopetitive relationships, complementors' autonomy is expected to be maintained in all ecosystems. However, their autonomy is often associated with coordination challenges and collaboration risks. Complementors may be encouraged to innovate and develop in ecosystems that ensure or augment their autonomy (Kapoor and Agarwal 2017). At the same time, complementors' autonomy may affect their responsiveness unless proper and targeted coordination is involved in their interactions with other ecosystem actors (Brusoni and Prencipe 2013; Kapoor 2013). Despite being autonomous, complementors are subject to certain rules or standards when participating in ecosystems (Scholten and Scholten 2012; Jacobides et al. 2018). In platform ecosystems, they rely heavily on the platform's technology to develop and commercialize their innovations, as well as to connect with users and perform transactions (Ceccagnoli et al. 2012; West and Wood 2013; Thomas et al. 2014; Nambisan et al. 2018; Parker and Van Alstyne 2018; Agarwal et al. 2023). These restrictions and this technological dependence may thus hinder complementors' autonomy. Based on their characteristics, several roles of complementors in ecosystems have been identified, including value enhancer (Kapoor and Agarwal 2017; Kapoor 2018), legitimacy facilitator (Cennamo 2018; McIntyre et al. 2020; Taeuscher and Rothe 2021), ecosystem disruptor (Adner 2021; Adner and Lieberman 2021), and ecosystem defender (ibid.). While complementors' roles have been emphasized over the past three decades, particularly in business ecosystem studies (Tsujimoto et al. 2018), platform ecosystem publications investigating complementors have become more numerous. This may be due to the agglomeration of complementors in platforms, their number, their easy identification, or the absolute necessity of their presence on the platform for ecosystem success and dominance (Cenamor et al. 2013; Cennamo 2018; Jacobides et al. 2018; Saadatmand et al. 2019). Moreover, although generally seen as platform participants who require management by the platform owner to maximize their added value, dominant complementors can even influence their own management through network effects, extending their roles beyond value enhancement (Agarwal et al. 2023). Furthermore, depending on the degree of platform openness, complementors can even change the platform architecture (van der Geest and van Angeren 2023), revealing their dynamic and multifaceted impact on platform ecosystems. Additional research on complementors may identify more roles across different ecosystem types. The challenge of navigating complementors' interactions is also a highlight of this review. While early work emphasized their collaborative nature, the thematic evolution showed that the complementors theme emerged in 2019, stemming from network effects, competition, and innovation. This underscores the competitive dimension of complementors' interactions. It is essential to understand complementors' contribution to ecosystems not solely as a derivative of value creation but also as involving dynamics of value capture (Adner and Kapoor 2010). Although competition dynamics are undeniably present and linked to complementors, their investigation in interactions with focal firms and among complementors remains underexplored in ecosystem research (Gawer 2014).
Further studies on the variations and intensity of value creation opportunities and value capture risks among complementors, and between focal firms and complementors, may uncover cooperation-competition patterns (Kapoor 2013; Gawer 2014). Complementors and focal firms can enter each other's product space (Kapoor 2013). While some studies have examined the platform owner's entry into complementary markets (Gawer and Cusumano 2002; Gawer and Henderson 2007), research on entry patterns and complementors' responses remains limited (Ceccagnoli et al. 2012; Zhu and Liu 2018; Kang and Suarez 2023). Complementors may also engage in various forms of exploitation, e.g., forking, hacking, infringement, and multihoming, to profit from economies of scale (Karhu et al. 2018; Cennamo and Santaló 2019; Tian et al. 2022; Chung et al. 2023). Restricting complementors' access (to a single platform) can improve complement quality through exclusivity and focused investment (Casadesus-Masanell and Hałaburda 2014; Chu and Wu 2023). Nevertheless, such restrictions may reduce the quantity of complements due to platform exit or hinder complementors' willingness to engage with other platforms (Eisenmann et al. 2009; Boudreau 2010; Chung et al. 2023). Moreover, the performance of exclusive complementors in the video game platform context has been found to be weaker (Castro and Sant'Anna 2023). Further investigation of the degree of ecosystem openness and of the impact of complementors' exploitative strategies could offer valuable insights into their innovativeness and behaviors in ecosystems.
Complementors, complements, complementary assets, and complementarity: inconsistencies and propositions
Complementors maintain their importance as a defining element in ecosystem theory, but the findings indicate a slowdown in the use of the interrelated concepts, i.e., complements, complementary assets, and complementarity. Despite conceptual overlaps, their disconnectedness stresses the need for clarifying research and a more careful application of these concepts. Although it has a different origin, complementarity not only serves as a building block of ecosystems but also plays a critical role in the discussion about complementors. This is because high complementarities render significant value to customers and, consequently, to ecosystems (Adner 2006; Xu et al. 2010; Teece 2018). However, how to achieve these complementarities in ecosystems is yet to be understood (Jacobides et al. 2018). It is essential to note that complementors in ecosystems are not actors involved in just any (type of) complementarity. Instead, they are strictly linked to multilateral, non-generic complementarities, which are considered an essential and distinctive feature of ecosystems. Regarding complementors in ecosystems, we propose a refined definition that considers the identified characteristics. Complementors in ecosystems are generally perceived as: autonomous, entrepreneurial-minded, rational, and highly adaptable downstream actors whose complementary innovations, i.e., (downstream) complements, augment the value of the focal proposition when consumed together by the user. Complementors maintain their significance across different types of ecosystems, but their autonomy may be affected in platform ecosystems. In this setting, complementors base the development of their business and products on the platform's architecture and resources. In this case, complementors are sometimes even called "followers" (Nambisan et al. 2018, p. 360).
Hence, their autonomy and their interdependence with the focal firm or the (platform) ecosystem's structure and resources may be inversely proportional. Possibly, the degree of complementor dependence on the platform ecosystem in which they participate is higher than in other types of ecosystems. Special attention should also be given to the concepts of complementarities and complementary assets, as they are sometimes used interchangeably. Moreover, since complementary assets are found along the entire value chain, they may not necessarily or strictly refer to complementors as downstream actors. Dividing complementary assets according to their origin in the value chain, i.e., upstream and downstream complementary assets, can help eliminate confusion. Thus, when exclusively referring to complementors' output, an option would be to use the term downstream complementary assets. Although complements, complementary assets, and complementarity share a common word stem and intersecting conceptual features, they may not solely concern complementors. Therefore, distinctions, clarifications, and the cautious use of these concepts in ecosystem studies are required. For instance, even though complements is a broad term that generally encompasses the products, activities, or services resulting from complementarities, in an ecosystem setting using (downstream) complements to refer only to complementors' output would clear up the confusion. Other terminologies, like third-party innovation (Parker and Van Alstyne 2018), are vague and may refer to the output of all external actors, although "third-party innovation" at least excludes the innovation provided by the focal firms.
Gaps and future research avenues
Despite the remarkable development of the literature on complementors and their role in ecosystems over the past two decades, this study has unveiled several research gaps. Further potential research avenues in connection with the following gaps are also proposed in Table 5. First, the disconnect among the concepts of complementors, complementarity, and complementary assets in ecosystem studies highlights the disproportionate attention and disparate development of these topics. Their conceptual distinctiveness and individual contributions to the ecosystem research stream require further clarification. Secondly, although complementors are recognized as key ecosystem actors, the existing literature is inconclusive and inconsistent regarding their definition, characteristics, and role(s) (McIntyre and Srinivasan 2017; Ozalp et al. 2018).
Table 5 (excerpt): future research questions
• What coordination processes and mechanisms are most efficient for complementors? How do complementors manage their coopetitive interactions with other ecosystem actors? Are the interaction and collaboration patterns of complementors contingent on the type of ecosystem or its maturity? How do the variety and quality of complements influence the ecosystem's value and competitive dynamics? How do different governance modes and degrees of interdependence impact the interactions with complementors and their added value? How are the interdependencies between complementors and the ecosystem/focal firm(s) properly and strategically coordinated?
• Participation determinants: What determines complementors to interact with the focal firm(s)? What are the determinants for complementors to interact with other complementors? Are these determinants/motivators different from those for the interactions with the focal firm(s)?
• For complementors: What other risks and challenges do complementors face in different kinds of ecosystems? Are these risks higher or more intense in certain types of ecosystems, considering that, for instance, in platform ecosystems the focal firm can access (and even store) data about the complementors' business on the platform? Do complementors in innovation and business ecosystems face this issue at the same intensity? How do complementors respond to and solve the complement challenges of the ecosystem?
• Impact and challenges: What governance forms strike the right balance between openness and control to foster complementary innovations?
Empirically studying the nuances and evolutions, if any, in complementors' behaviors and features across different (types of) ecosystems can facilitate a general and more in-depth understanding of complementors as ecosystem actors, as well as a clarification of their definition. Alternatively, we could consider defining complementors based on the type of ecosystem they participate in. Given this gap, the development of methods to identify and categorize complementors, as well as to evaluate or measure their performance, is deemed necessary. Such efforts will contribute to a more unified and refined understanding of complementors' characteristics and contributions within ecosystems. Thirdly, considering the complexity of complementors' relationships, natures, and functions, more recognition of and empirical evidence on their (coopetitive) interactions are needed, particularly in the contexts of innovation and business ecosystems. While research on complementors in platform ecosystems has grown (Liang et al. 2022), understanding their relationships and interactions in business or innovation ecosystems requires further investigation. In these settings, do complementors rely more on transactions and, consequently, traditional agreements? Regarding complementors' interactions with focal firms, the interdependence between these actors, particularly during the emergence of platform ecosystems, has been acknowledged. At this stage, the platform and complementors are co-dependent but unwilling to invest until the other side is populated enough, generating the chicken-and-egg problem (Hein et al. 2020). However, the effective management of these interdependencies demands further research (Gawer and Henderson 2007; Kapoor and Lee 2013), particularly on how interdependencies between complementors and ecosystems are properly and strategically coordinated in business and innovation ecosystems. Additionally, the interplay between cooperation and competition among complementors represents a notable research gap with significant potential for future exploration in various ecosystem types. Investigating the circumstances under which complementors engage in collaboration despite the obvious competitive dynamics of value capture, the determinants of their participation in such interactions, and the challenges they assume and strategies they employ during collaboration remain intriguing avenues for ecosystem research. Embracing the inherent phenomenon of coopetition can help complementors on platforms like Amazon cope with paradoxical tensions (Yoo et al. 2022). Extending these investigations to other ecosystem types can provide valuable insights into complementors' perspectives and their management of coopetition in their complex interactions with diverse ecosystem actors.
Fourthly, concerning complementors' strategies, it is rather unclear what capabilities they need to capture value in ecosystems and how these capabilities can be utilized more effectively for this purpose (Helfat and Raubitschek 2018). Are these capabilities different depending on the kind of ecosystem? For example, sensing by platform complementors in the metaverse context (Zabel et al. 2023) highlights the need to understand complementors' capabilities in ecosystems. Addressing this research direction can shed light on how complementors effectively balance collaborative efforts, enhance downstream innovation, and optimize their positions and roles within the ecosystem. Thus, conceptualizing different types of complement strategies may help us understand how their number and uniqueness influence value creation and competition in ecosystems (McIntyre and Srinivasan 2017). Fifthly, further research is required on the impact and challenges that complementors assume and pose. Are these risks higher or more intense in certain types of ecosystems, considering that, for instance, in platform ecosystems, the focal firm can access, and even store, data about the complementors' business on the platform? Do complementors in innovation and business ecosystems face this issue to the same degree? Competition and governance are therefore avenues worth exploring with regard to complementors. Additionally, the connection between complement quality and variety, as well as their impact on the ecosystem's value, sustainability, and competitive dynamics (Cennamo 2018), also requires further investigation. Finally, a major gap in the extant ecosystem literature is the lack of studies from the complementors' perspective. Taking their vantage point may reveal a different side of the story. Given the interdependence and mutual influence between complementors and other ecosystem actors, more research may emerge on how complementors should strategize to adapt and survive in different types of ecosystems. Theory-building on complementors' strategies and interactions can enrich the ecosystem literature and strategic management scholarship.
Practical implications
This systematic review highlights the pivotal role of complementors in realizing the ecosystem's core value proposition and sheds light on their alternative roles, which can positively or negatively influence ecosystem success and development. Policymakers and managers seeking to stimulate ecosystem expansion should recognize the significance of complementors and promote collaboration with and among them. Preserving complementors' autonomy can stimulate innovation and responsiveness, while remaining mindful of their potential for competition and cooperation. Understanding complementors' roles, characteristics, participation determinants, interaction patterns, and challenges can inform strategies for fair competition, stimulating innovation, maximizing the benefits of the core proposition, and safeguarding the ecosystem.
Conclusion
This systematic review assists in the consolidation of existing knowledge on complementors and facilitates the development of ecosystem research. To the best of our knowledge, this study represents a pioneering attempt to comprehend complementors based on the extant ecosystem literature. With increasing research interest in complementors, particularly since 2018, managers should take notice of them and address the diverse challenges they may pose or face.
Proper identification and coordination of complements, ideally before commercialization to avoid adoption delays, are essential for the focal offering to reach its full value potential. In addition to identifying and recognizing complementors' roles (Adner and Kapoor 2010; Boudreau 2010), unpacking their interactions and challenges yields a more profound understanding of what influences the success of the core value proposition and ecosystem health (Iansiti and Levien 2004; Adner 2012). Complementors' competitive dynamics warrant further attention for the sake of proper coordination; thus, more research on their strategies and the (coopetitive) challenges they pose is needed (Zhu 2019). Lastly, the disproportionate focus on different types of ecosystems and research designs reflects the ongoing development of the ecosystem research stream. Empirical research dominates the literature on complementors in various ecosystems, primarily platform ecosystems, possibly because their high number and increased visibility make complementors easy to identify in such settings. However, this imbalance limits generalizations regarding complementors. Therefore, complementors still require further clarification and research regarding their roles, interactions, and strategies in ecosystems, in order to manage and collaborate with them efficiently.
Appendix 1: Types of complementarities
Complementarity (general): a higher utility results from the simultaneous consumption of two products (or services, assets, activities) than from their separate, individual consumption. The consumption of one product positively impacts the value and demand of the other; this complementarity can be found in both consumption and production.
Strong (strict) complementarity (Hart and Moore 1990): two products that generate value only through joint use; the individual use of one of the two products alone would not generate any benefit.
Hicksian complementarity (Hicks 1970): a price decrease in a production factor triggers an increase in the quantity of its complements used in production. Not fully relevant for innovation research because of its hypothesis of an existing link between the two factors, but its contribution lies in the idea that commercializing an innovation impacts the demand for its complements.
Hirshleifer (asset price) complementarity (Hirshleifer 1978): the impact of an innovation on the asset prices of another; the financial equivalent of Hicksian complementarity, aimed at profiting from an innovation (Teece 1986, 2018).
Technological complementarity (Teece 1986, 2006, 2018): when an innovation needs new and/or re-engineered complementary technologies to reach its full-value potential; one type of technological complementarity is innovational complementarity (Teece 2018).
Innovational complementarity (Teece 1986, 2018): a downstream productivity boost generated by an improved technology meant for extended uses; it may be considered a type of technological complementarity (Teece 2018).
Generic vs. unique complementarity (Teece 1986; Jacobides et al. 2018): no coordination is involved, owing to the generic feature, versus required customization. Unique complementarity is generated from a one-way relation (i.e., the first item requires another to function, though a third item can be used with possibly lower efficiency; linked with transaction cost economics) or a two-way relation (i.e., both items need each other; linked with the concept of co-specialization, resulting in co-specialized assets (Teece 1986)).
Complements increase the value of a focal product. Complementors are autonomous actors whose output enhances the focal product's value, but they also accept some degree of guidance or control imposed by the ecosystem leader.
2023-08-16T15:12:14.726Z
2023-08-14T00:00:00.000
{ "year": 2023, "sha1": "ec1e40bbcb8d3931ced6ed5b5ebff750e27c113a", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11301-023-00368-y.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "606a06e2b507774faca2e42d918690651c648e1f", "s2fieldsofstudy": [ "Business", "Environmental Science" ], "extfieldsofstudy": [] }
249043514
pes2o/s2orc
v3-fos-license
Exploring county-level spatio-temporal patterns in opioid overdose related emergency department visits
Opioid overdoses within the United States continue to rise and have been negatively impacting the social and economic status of the country. In order to effectively allocate resources and identify policy solutions to reduce the number of overdoses, it is important to understand the geographical differences in opioid overdose rates and their causes. In this study, we utilized data on emergency department opioid overdose (EDOOD) visits to explore the county-level spatio-temporal distribution of opioid overdose rates within the state of Virginia and their association with aggregate socio-ecological factors. The analyses were performed using a combination of techniques including Moran's I and multilevel modeling. Using data from 2016-2021, we found that Virginia counties had notable differences in their EDOOD visit rates, with significant neighborhood-level associations: many counties in the southwestern region were consistently identified as hotspots (areas with a higher concentration of EDOOD visits) whereas many counties in the northern region were consistently identified as coldspots (areas with a lower concentration of EDOOD visits). In most Virginia counties, EDOOD visit rates declined from 2017 to 2018. In more recent years (since 2019), the visit rates showed an increasing trend. The multilevel modeling revealed that changes in clinical care factors (i.e., access to care and quality of care) and socio-economic factors (i.e., levels of education, employment, income, family and social support, and community safety) were significantly associated with changes in the EDOOD visit rates. The findings from this study have the potential to assist policymakers in proper resource planning, thereby improving health outcomes.
Introduction
The continued rise in drug overdoses involving opioids has significantly impacted the social and economic fabric of communities in the United States [1]. For instance, in 2017, the economic cost of opioid deaths, criminal justice involvement, treatment, and lost wages surpassed $1.02 trillion [2]. Between April 2020 and April 2021, over 100,000 deaths involved opioids, an increase of 28.5% over the previous 12 months [3]. An effective response to the opioid crisis requires policymakers to understand which communities are the most affected and how resources have been allocated in those communities [4]. Recent research has identified many socio-ecological factors as risk factors of opioid overdose [5,6]. These risk factors are inextricably linked to personal and environmental factors and to systems that hinder rather than support individuals [6]. For instance, among individuals receiving healthcare services at a free clinic, prescription opioid misuse was more likely among patients who were employed and less likely among those with post-high school education [7]. Uninsured individuals were significantly more likely than insured individuals to be high-risk drug users [8]. Access to prescription opioids is another prominent risk factor of unintentional opioid overdose deaths [9,10], which has increased in rural areas with a greater need for medical services [11,12]. Opioid overdose rates within the United States have been found to vary across different geographical regions.
A socio-ecological framework posits that the characteristics of the community in which individuals live significantly influence their health behaviors [13]. For instance, counties across the United States with high economic distress, high opioid prescription rates, and a lack of opioid treatment program providers have higher opioid overdose mortality rates [14,15]. Statewide health disparities, including lower socioeconomic status and access to health care, can differ at the county level and are more predominant in rural areas [16]. Opioid overdose patterns are consistent with counties that may lack the resources necessary to prevent overdose [17]. In sum, the intersection of where and how people live is a significant factor in health outcomes and requires public health to work with urban planning to create supportive and healthy environments that reduce the risks of opioid overdose [18]. Besides geographical differences, there have been significant variations in the temporal trends of opioid overdose rates. Beginning around the year 2000, opioid overdose rates in the United States have been increasing over time [19]. In recent years (since the early 2010s), much of this growth has been attributed to synthetic opioids such as fentanyl, which increased more than 50% from 2019 to 2020 [20]. During the same time period, prescription opioid-related overdose showed its first increase in years (10.6%) whereas heroin-related overdose showed a downward trend (down 3.6%), similar to the recent prior years [20]. Not all regions within the United States have the same characteristics, and the national pattern may not reflect local growth trajectories well. Prior research has implemented different spatio-temporal analysis techniques to identify local geographical differences in opioid overdose rates over time. For instance, Hernandez et al. [21] examined prescription opioid death rates in Ohio from 2010-2017 and identified 12 hotspots along with three significant changing trends of opioid overdose using temporal trend analysis. Marotta et al. [17] examined cumulative opioid overdose deaths in New York State using data from 2013-2015 and identified geographical hotspots of overdose death rates for different types of opioids. Sauer et al. [22] used spatio-temporal Bayesian modeling and exploratory spatial analysis to evaluate risk factors related to drug-involved emergency department visits in the greater Baltimore metropolitan area from 2016-2019. These studies and a few others [23][24][25] have demonstrated that spatio-temporal techniques utilizing local population-level data can provide a profile of opioid overdose risk. In this study, we performed a county-level spatio-temporal assessment of opioid overdose rates and their association with different socio-ecological factors for the state of Virginia. We focus on Virginia for two main reasons. First, its growing rates of fatal opioid overdose [26] are representative of the growing rates of opioid overdose across the United States. Secondly, the Virginia Department of Health collects monthly emergency department opioid overdose (EDOOD) visit data as part of syndromic surveillance to measure health trends [27]. This publicly available dataset can serve as a timely indicator of opioid overdose trends. Different from prior studies, we used EDOOD visits as a proxy for total overdoses to understand the spatio-temporal dynamics of opioid overdose rates and potential socio-ecological risk and protective factors.
Emergency departments (EDs) are the primary treatment venue for patients with overdoses [28]. In recent years, the utilization of urgent care centers to treat opioid overdoses has also increased significantly. Between 2007 and 2016, documented claims for urgent care centers increased by 1,725 percent, compared to a 229 percent increase for emergency department claims for 'injury, poisoning, and consequences of external causes' [29]. However, the current study only focuses on the visits obtained from hospital-based and free-standing EDs. EDOOD visit rates increased by 28.5 percent across the United States in 2020, compared to 2018 and 2019 [30]. Understanding the spatio-temporal trends of EDOOD visit rates can help identify targets for policy change and timely resource allocation to mitigate the opioid crisis. To summarize, this study utilized a comprehensive, three-pronged approach to understanding the opioid overdose trends in Virginia. The goals of the study were to 1) identify spatio-temporal variations of EDOOD visit rates from 2016-2021 among Virginia counties, 2) assess how counties cluster together based on their EDOOD visit rates, and 3) identify socio-ecological factors that are associated with the change in EDOOD visit rates over time. Moran's I [31], Local Indicators of Spatial Association (LISA) [32], Dynamic Time Warping (DTW) [33], and multilevel modeling [34] were implemented for the spatio-temporal analysis. To our knowledge, this is the first study to combine techniques from statistics, data mining, and geographic information systems (GIS) to examine how a county performs in terms of EDOOD visits. Although the study focuses on Virginia, the study methods can be extended to other geographic locations with similar data. The source code can be made available upon request.
Study area
We analyzed the EDOOD visit rates and the associated socio-ecological factors across the different counties within the state of Virginia, United States. The state of Virginia consists of 95 counties and 38 independent cities that are considered county-equivalent for census purposes. The analysis was performed for those 133 unique geographic regions.
Measures
Emergency department opioid overdose visits. The EDOOD visits dataset was obtained from the Virginia Department of Health (VDH) [35] and is based on syndromic surveillance reported by hospitals and free-standing EDs in Virginia. It consists of count and rate statistics (monthly and annual) of ED visits for unintentional opioid overdose (fatal and nonfatal) among Virginia residents, aggregated by different geographical units. This dataset excludes heroin from all other types of opioid overdoses; separate data reports on heroin-related ED visits. In Virginia, only a small percentage of total overdoses have been driven by heroin in recent years [35], and the numbers are not substantial enough to analyze the spatio-temporal variations of opioid overdose rates. Thus, this study did not include data on heroin-related ED visits in its analyses. Although the EDOOD data spans from 2015 to 2022, we only examined data from 2016 to 2021, given that complete data (i.e., previous 12-month average visits, annual summary) was not available for 2015 and 2022. The outcome variable was defined as the rate of EDOOD visits per 100,000 population. Geography assignment. The county location in the EDOOD visit dataset is assigned based on the patient's self-reported residential zip code. A single zip code may belong to multiple cities and counties.
In that case, the visit is assigned to the county/city where the majority of the population resides. Additionally, some Virginia cities and counties are aggregated (e.g., Alleghany County and Covington City) due to overlapping zip codes (see S1 File). EDOOD visit definition. ICD9 (e.g., 965.00, 965.01), ICD10 (e.g., T40.0X1A, T40.601A), or SNOMED (e.g., 295165009, 242253008) codes representing opioid overdose were used to identify overdose incidents. Similarly, the mention of terms like Narcan or naloxone in the chief complaint or discharge diagnosis was also used to identify overdoses. This includes unintentional overdoses by opioids or unspecified substances (excluding heroin). The definitions mentioned here are merely provided as examples. For complete information on the inclusion and exclusion criteria, please refer to the "Unintentional overdose by opioid or unspecified substance (excluding heroin)" section in [36]. Converting visit counts to rates. Whenever information was only available in the form of counts, we used population data from the American Community Survey (ACS) to calculate the rates (visits per 100K population) [37]. We followed the same guidelines as specified in . We extracted data from 2016 to 2021 to be consistent with the EDOOD visit dataset. This publicly available dataset, the County Health Rankings & Roadmaps (CHR&R), aggregates data from the Centers for Disease Control and Prevention (CDC) as well as other sources (e.g., U.S. Census Bureau, Behavioral Risk Factor Surveillance System) to provide yearly county-level rankings based on different attributes related to health outcomes. Based on prior studies on the association between socio-ecological factors and opioid overdoses [5-9, 14, 15], we selected four variables: health behaviors, clinical care, social and economic factors, and physical environment as the most appropriate for our study (see Fig 1). The goal was to examine whether these socio-ecological factors influence the spatio-temporal trends of EDOOD visit rates. Other sources have also aggregated data on socio-ecological factors (e.g., Opioid Environment Policy Scan, Virginia Department of Health, etc.). However, the CHR&R dataset is an adequate proxy for socio-ecological factors for our analysis because it encompasses a wide range of socio-ecological inputs within its four variables:
Clinical care. Includes inputs about access to care and quality of care, such as uninsured rates and access to primary care physicians, dentists, and mental health providers.
Social and economic factors. Includes inputs from six unique data sources measuring unemployment, children in poverty, income inequality, single-parent households, social associations, violent crime, and injury deaths.
Physical environment. Includes inputs from four unique data sets measuring air pollution, alcohol drinking violations, severe housing problems, driving alone to work, and long commute-driving alone.
Health behaviors. Includes inputs about tobacco and alcohol use, diet and exercise, and sexual activity from seven unique data sources. These inputs further include physical inactivity, excessive alcohol use and impaired driving deaths, sexually transmitted diseases, and teen pregnancy.
The rankings in the CHR&R dataset were calculated using standardized z-scores from several data sources [39].
The county with the lowest z-score received a rank of 1, which indicates the highest quality of socio-ecological factors (e.g., low tobacco use, better access to care, better education). The county with the highest z-score received a rank of 133, which indicates the lowest quality of socio-ecological factors (e.g., high tobacco use, poor access to care, poor education). The z-scores were averaged for the counties (or cities) that were combined in the EDOOD visit dataset. The CHR&R dataset has been validated in other studies [40,41]. Neighborhood adjacency. Data on neighborhood adjacency for Virginia counties was obtained from the US Census Bureau [42]. This data lists each county along with its adjoining neighbors, including counties that are not in Virginia but are adjacent to Virginia counties. For this study, we only considered the neighboring counties that are part of Virginia. This data was utilized for our spatial analysis.
Data analysis
Spatial analysis. To identify any spatial variations in opioid overdose rates across Virginia counties, we calculated spatial autocorrelation using data on neighborhood adjacency and EDOOD visits for the years 2016-2021. Average monthly EDOOD visit rates were used to summarize the yearly EDOOD trends. Spatial autocorrelation is the phenomenon whereby the presence of some quantity in an area makes its presence in neighboring areas more or less likely [43]. Positive autocorrelation, which is more common in practice, is the tendency for areas that are close together to have similar values. In contrast, negative autocorrelation is the tendency for areas that are close together to have different values. Global spatial autocorrelation. Global autocorrelation measures the overall association within the data. In this study, it measures the similarity between neighboring counties in terms of EDOOD visit rates. We calculated Moran's I index, a common measure of global autocorrelation [31], and then performed a permutation test to assess the significance of the Moran's I index analysis. The values of Moran's I range from +1 (strong positive spatial autocorrelation) through 0 (randomness) to -1 (strong negative pattern). A Moran's I value of 0.7, for instance, indicates that the spatial pattern across counties is homogeneous, meaning that neighboring counties have very similar visit rates. Local spatial autocorrelation. To study the contribution of each county to the global Moran's I index and to identify local hotspots (clusters of high EDOOD visit rates) and coldspots (clusters of low EDOOD visit rates), we calculated Local Indicators of Spatial Association (LISA) [32] for each county. These autocorrelation indices were used to divide the counties into four distinct groups:
• High-high: counties with high visit rates whose neighboring counties also have high visit rates (also known as hotspots)
• Low-low: counties with low visit rates whose neighboring counties also have low visit rates (also known as coldspots)
• High-low: counties with high visit rates but surrounded by counties that have low visit rates
• Low-high: counties with low visit rates but surrounded by counties with high visit rates
The classification of counties into regions of low or high visit rates was based on whether their rates were less than, or greater than or equal to, the mean visit rate across the state of Virginia.
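A minimal sketch of how these global and local statistics can be computed with PySAL (the paper reports using the PySAL package) is given below. The DataFrame `df` (indexed by county and holding a "rate" column) and the `neighbors` adjacency dictionary built from the Census list are assumed inputs, not taken from the authors' code.

```python
# Global Moran's I and local LISA statistics with libpysal/esda.
# Assumes: `neighbors` maps each county name to a list of adjacent
# Virginia counties, and `df` is a pandas DataFrame indexed by the same
# county names with a column "rate" (average monthly EDOOD visit rate).
from libpysal.weights import W
from esda.moran import Moran, Moran_Local

w = W(neighbors)      # spatial weights from the adjacency list
w.transform = "r"     # row-standardize the weights

y = df.loc[w.id_order, "rate"].to_numpy()  # align rates with weight order

mi = Moran(y, w, permutations=999)         # global autocorrelation
print(mi.I, mi.p_sim)                      # index and permutation p-value

lisa = Moran_Local(y, w, permutations=999) # local indicators (LISA)
quadrant = {1: "high-high", 2: "low-high", 3: "low-low", 4: "high-low"}
df["cluster"] = [quadrant[q] for q in lisa.q]
df["significant"] = lisa.p_sim < 0.05      # candidate hot/cold spots
```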
A permutation test was used to identify non-significant associations between neighboring counties, thereby highlighting the significant hotspots and coldspots of EDOOD visit rates. Both local and global autocorrelation analyses were performed in Python using the PySAL package [44].
Temporal analysis
Temporal analysis refers to the study of an outcome over time. We used two different methods, clustering and multilevel modeling, to analyze the EDOOD visit rates over time and identify their association with different socio-ecological risk factors. Clustering. To identify similarities between the temporal trends of visit rates in different counties, we used dynamic time warping (DTW) [33]. DTW is a data mining technique used to compute the similarity between multiple time series (e.g., opioid overdose rates over time) and cluster them together based on their shapes and magnitudes. We utilized the moving average of the monthly visit rates (for a smoother curve) to perform the clustering. After we obtained clusters of counties, we mapped the clusters using choropleth maps for easy visualization. Analyses were performed in Python using the PySAL package [44] and the tslearn package [45]. Multilevel modeling. To identify how changes in EDOOD visit rates relate to changes in different socio-ecological factors, we performed multilevel modeling [34] with the visit rates as the outcome variable and the four time-varying aggregated variables from the CHR&R dataset as our predictors. Multilevel modeling is advantageous because it accounts for correlations across time within individual counties. Moreover, multilevel models handle missing data in visit rates for any county and at any time point without pairwise deletion of individual counties. Multilevel modeling was performed in R using the lme4 package [46]. A forward-stepping procedure was used to create the final model [47]. First, an unconditional means model (i.e., baseline model) was created. From this model, an intraclass correlation coefficient (ICC), representing the proportion of variance explained within counties, was computed. Next, conditional growth models were created to examine the linear effect of time on EDOOD visit rates, with time modeled as a fixed and as a random slope in separate models. The model (i.e., either fixed or random slope) with a better fit compared to the unconditional growth model was used moving forward. Finally, a conditional random growth model was created to examine the linear effects of the time-varying covariates (i.e., socio-ecological factors) on the visit rates:
Level 1: Y_{it} = π_{0i} + π_{1i}(Time_{it}) + π_{2i}(HealthBehaviors_{it}) + π_{3i}(ClinicalCare_{it}) + π_{4i}(SocialEconomicFactors_{it}) + π_{5i}(PhysicalEnvironment_{it}) + e_{it}
Level 2: π_{0i} = β_{00} + r_{0i};  π_{1i} = β_{10} + r_{1i};  π_{2i} = β_{20};  π_{3i} = β_{30};  π_{4i} = β_{40};  π_{5i} = β_{50}
The Level-1 equation models the within-county variance in EDOOD visit rates. Thus, for county i at time t, the expected outcome, Y_{it}, is equal to the intercept, π_{0i}, plus an effect for the slope, π_{1i}, plus an effect for Health Behaviors, π_{2i}, plus an effect for Clinical Care, π_{3i}, plus an effect for Social and Economic Factors, π_{4i}, plus an effect for Physical Environment, π_{5i}, plus error, e_{it}. The Level-2 equations state that the intercept and year were fitted using random effects, whereas the socio-ecological predictors were modeled using fixed effects. We used a fixed slope for the socio-ecological factors because specifying many random coefficients overfits the model, producing misleading results [48]. Lastly, we ran ANOVA-like tests of random effects for each model using the lmerTest package.
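The authors fit these models in R with lme4/lmerTest; in lme4 notation the conditional random growth model corresponds to `rate ~ year + health + clinical + social + physical + (year | county)`. As a rough illustration only, a hypothetical Python analogue using statsmodels' MixedLM might look as follows; the DataFrame `panel` and all column names are assumptions, not the authors' actual variable names.

```python
# Approximate Python analogue of the conditional random growth model.
# Assumes `panel` is a long-format DataFrame with one row per
# county-year and (hypothetical) columns: county, year, rate, health,
# clinical, social, physical (the four CHR&R z-score predictors).
import statsmodels.formula.api as smf

model = smf.mixedlm(
    "rate ~ year + health + clinical + social + physical",  # fixed effects
    data=panel,
    groups=panel["county"],  # Level-2 units: counties
    re_formula="~year",      # random intercept plus random slope for time
)
fit = model.fit(reml=True)
print(fit.summary())         # fixed-effect estimates and variance components
```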
Each model was compared to the preceding model using these ANOVA tests.
Spatial analysis
Global auto-correlation. The results from the Moran's I index indicate that there is some similarity between counties and their neighbors with respect to their EDOOD visit rates in most years. As shown in Table 1, the index values are greater than 0 for all years, indicating a positive global spatial autocorrelation. However, the similarity scores and the corresponding p-values vary across years. The strongest association appears in the year 2018 (Moran's I = 0.25), indicating that many neighboring counties had similar EDOOD visit rates during that time. On the other hand, the Moran's I value for 2016 is close to 0, meaning that EDOOD visit rates were not significantly similar across neighboring counties. Local auto-correlation. Fig 2 presents the plots of EDOOD visit rates alongside the classification returned by LISA for the three years (2017, 2018, and 2021) with the most significant neighborhood association (based on Moran's I values). To show them in the maps, we assigned the same value/color coding to counties that were combined in the analyses (although they were only counted once in the analyses). For instance, the combined rate for Grayson and Galax for the year 2016 was 21.7, but they are both represented separately with a value of 21.7 on the map. EDOOD visit rates. As shown in Fig 2A, the EDOOD visit rates differed within the state and across time. However, some of the highest visit rates can be seen across the southwestern region (e.g., Galax & Grayson, Smyth, Martinsville & Henry) and also in the northwestern region (e.g., Orange, Louisa, Culpeper). In contrast, most counties in the northern region (e.g., Fairfax & Falls Church, Loudoun) and some in the eastern region (e.g., Accomack) had relatively lower rates. LISA values. The LISA values were calculated for every county in Virginia for the years 2016-2021. However, as aforementioned, Fig 2B only presents results for the years 2017, 2018, and 2021. The four distinct subgroups returned by LISA are shown in choropleth maps (in Fig 2B) in distinct colors: red (high-high), blue (low-low), light blue (low-high), and pink (high-low). The significant clusters returned by the permutation test are highlighted in yellow. These highlighted counties had significantly higher or lower concentrations of EDOOD visit rates based on their color coding (red: hotspots; blue: coldspots). As expected, the hotspots were mostly concentrated around the southwestern region and the northwestern region. The coldspots were scattered across multiple regions, with the most consistent one being in the northern region. Some counties (pink and light blue clusters) had significantly different visit rates than their neighboring counties. For instance, counties like Floyd, Scott, and Carroll in southwestern Virginia had lower EDOOD visit rates even when most of their neighboring counties had higher rates. Similarly, as also verified by Moran's I, the neighborhood similarity was most prominent in the year 2018, with close clusters of high and low EDOOD visit rates. The locations of the LISA subgroups appear to change over time.
Temporal analysis
Clustering. Five distinct groups of counties (clusters) were returned by the DTW clustering algorithm.
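The paper reports using the tslearn package for the DTW-based clustering but does not spell out the estimator; as one hedged illustration, k-means with a DTW metric could produce such a five-group partition. The `series` array and its shape are assumptions for the sketch.

```python
# Sketch of DTW-based clustering with tslearn, assuming `series` is a
# NumPy array of shape (n_counties, n_months, 1) holding the 12-month
# moving average of monthly EDOOD visit rates for each county.
import numpy as np
from tslearn.clustering import TimeSeriesKMeans

km = TimeSeriesKMeans(n_clusters=5, metric="dtw", random_state=0)
labels = km.fit_predict(series)   # one cluster label (0-4) per county

for k in range(5):
    print(f"cluster {k}: {np.sum(labels == k)} counties")
```

Because DTW aligns series by shape, counties with similar trajectories but shifted timing can land in the same cluster, which matches the groups described next, where trend shape and magnitude are the distinguishing features.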
The clusters, with plots of their temporal trends and their corresponding geospatial mapping, are provided in Fig 3. These clusters differ in their magnitude and trend over time. As depicted in Fig 3, counties in Groups A and E had similar trends over time but differed in their magnitudes. Most counties in these groups had EDOOD visit rates that slightly decreased from 2016 to 2018 and started increasing around 2019. However, counties in Group A (e.g., Buchanan, Louisa, Shenandoah) had slightly lower rates of EDOOD visits (starting range 7-16) compared to the counties in Group E (e.g., Orange, Smyth, Wise, Dickenson), which had a starting range of 13-20. Similarly, counties in Group B (e.g., Amelia, Bland, Amherst) and Group C (e.g., Grayson & Galax, Martinsville & Henry) also had similar trends that differed in magnitudes. Overall, these counties had increasing rates over time. Group B had a starting range of 0-10 whereas Group C had a starting range of 0-15. Group C, however, had a steeper rising curve with an ending range of 16-38. Lastly, counties in Group D (e.g., Arlington, Fairfax & Falls Church, Alexandria, Williamsburg) did not show a consistent temporal trend and mostly had low EDOOD visit rates throughout the 6-year period. Note that for each month, we plotted the average EDOOD visit rates of the previous 12 months. Multilevel modeling. The baseline unconditional model returned an ICC of 0.59, which indicates that 59% of the variance is attributable to differences between counties while 41% of the variance is attributable to differences within counties over time. Given that more than 5% of the variance is attributable to differences within counties over time, the use of multilevel modeling is justified. We ran ANOVA-like tests of random effects for each model and compared each model to the preceding model. First, we found that the fixed growth model was a better fit than the unconditional model (χ² = 34.37, df = 1, Pr(>Chisq) = 4.556 × 10⁻⁹, p < .001). Next, we found that the random growth model was a better fit than the fixed growth model (χ² = 60.197, df = 2, Pr(>Chisq) = 8.481 × 10⁻¹⁴, p < 0.001). Thus, we proceeded to run a random growth model with our time-varying socio-ecological predictors. Table 2 shows the goodness-of-fit values for all four models. The random growth model showed that time predicted, on average, increases in EDOOD visits from 2016-2021 (see Table 2). When incorporating predictors into the model (i.e., the conditional random growth model), Clinical Care and Social and Economic Factors emerged as significant time-varying predictors of the slope for EDOOD visits (see Table 2). This suggests that a 1-unit decrease in Clinical Care z-scores (i.e., higher ranking/better clinical care) increased the slope of EDOOD visit rates by 6.64. In addition, a 1-unit increase in Social and Economic Factors z-scores (i.e., lower ranking/lower quality of social and economic factors) increased the slope of visit rates by 6.74. These socio-ecological factors are important predictors of changes in EDOOD visits.
Discussion
This study examined spatio-temporal patterns of EDOOD visits across Virginia, as well as socio-ecological factors associated with these patterns, using techniques from statistics, data mining, and geographic information systems (GIS).
Our spatial analysis revealed that EDOOD visit rates varied significantly across Virginia counties, with clusters of overdose hotspots (primarily in the southwestern region) and coldspots (primarily in the northern region). The clustering analysis helped identify five distinct groups of counties based on the magnitude and the direction of change of the EDOOD visit rates over time. Although the overall trend of EDOOD visit rates differed across these groups, we observed rising trends in recent years (starting around 2019) and a slight decline in the visit rates from 2017 to 2018 in all groups. The steepest rise in the EDOOD visit rates was seen in counties belonging to Group C (e.g., Grayson & Galax, Martinsville & Henry). Finally, the multilevel analysis revealed that changes in the EDOOD visit rates were significantly associated with changes in clinical care factors (i.e., access to care and quality of care) and socio-economic factors (i.e., levels of education, employment, income, family and social support, and community safety). As aforementioned, hotspots of EDOOD visit rates were most prominent and consistent in the southwestern part of Virginia. Southwest Virginia is a rural area where residents have higher morbidity and mortality rates, often correlated with a shortage of healthcare services [49]. Residents in southwest Virginia often cannot afford annual health insurance deductibles, and many medical expenses are not covered by insurance [50]. A portion of individuals reported only seeking healthcare as a last resort, and many did not receive regular care from a health provider. Individuals in southwestern counties with lower access to care and quality of care, therefore, may be more at risk of opioid overdose. For example, Martinsville County had one of the highest EDOOD rates throughout the six-year period. On the other hand, many counties in northern Virginia had low EDOOD visit rates throughout the six-year period. This region includes counties that are close to the national capital and is considered to have good access to healthcare, better employment opportunities, and higher levels of education [38]. Study findings also revealed sub-groups of counties with similar EDOOD visit trends. The spatial mapping of these counties in Fig 3 indicates that many counties belonging to the same group are clustered together in space, suggesting a possible association with neighborhood characteristics. In most counties, the EDOOD visit rates decreased from 2017 to 2018. This is consistent with the national pattern and is believed to be attributable to the reduced prescribing volume of high-dose opioid pills and a sudden decline in the availability of a highly potent synthetic opioid (carfentanil) [20]. The increase in EDOOD visits from 2019 may reflect how Covid-19 influenced the overall opioid overdose trends. Research conducted across six health care systems in the US [51] identified that EDOOD visit counts increased by 10.5% in 2020 compared to the counts in 2018 and 2019, despite a 14% decrease in overall ED visits. The same study pointed out that this rise might be attributable to the disruption of access to treatment, loss of social support, loss of employment, social isolation, and more during the Covid-19 period.
Further, this increase in EDOOD rates from 2019 may reflect the increased prevalence and use of illicitly manufactured fentanyl, which has been the major driver of the opioid epidemic over the past few years (having surpassed prescription opioids and heroin as major causes of opioid overdose) [52]. This may also explain the sharp, increasing rates of EDOOD visits (throughout the 6-year period) in some Virginia counties (Group C in Fig 3).

Finally, our multilevel analysis found that a decline in socio-economic factors over time was associated with increased EDOOD visit rates. This finding is consistent with studies that demonstrate associations between poor economic and social conditions and high opioid overdose mortality rates [5,6]. Conversely, the analysis revealed that counties with poor access to care and quality of care (i.e., higher clinical care z-scores) had lower EDOOD visit rates. It is possible that the inaccessibility and poor quality of clinical care led to individuals not being able to seek care or go to the ED. These findings suggest that addressing county-level deficits in clinical care and socio-economic factors may help reduce opioid overdoses.

Some limitations of this study could be addressed in the future. If the necessary data are available, the study could be expanded to include a broader time frame and nationwide data. Additionally, breaking down EDOOD visits by the type of opioid could provide a better picture of the epidemic, which was not possible with the data we used. Lastly, the spatial and temporal variations of EDOOD visits were assessed separately; they could be modeled jointly to identify how they interact with each other and with other outcomes.

Conclusion

Overall, there are differences between Virginia counties in their EDOOD visit patterns across time. These differences are significantly associated with socio-economic factors (i.e., education, employment, community safety, income, and family and social support) and clinical care (i.e., access to care and quality of care). Targeting areas that are consistent hotspots for EDOOD rates and identifying areas that vary over time is critical for addressing the social determinants of opioid use disorders and health care access.
Mitochondrial DNA and Inflammation in Alzheimer's Disease

Mitochondrial dysfunction and neuroinflammation are implicated in the pathogenesis of most neurodegenerative diseases, such as Alzheimer's disease (AD). In fact, although a growing number of studies show crosstalk between these two processes, numerous gaps remain in our knowledge of the mechanisms involved, which require further clarification. On the one hand, mitochondrial dysfunction may lead to the release of mitochondrial damage-associated molecular patterns (mtDAMPs), which are recognized by microglial immune receptors and contribute to neuroinflammation progression. On the other hand, inflammatory molecules released by glial cells can influence and regulate mitochondrial function. A deeper understanding of these mechanisms may help identify biomarkers and molecular targets useful for the treatment of neurodegenerative diseases. This review of works published in recent years focuses on the mitochondrial contribution to neuroinflammation and neurodegeneration, with particular attention to mitochondrial DNA (mtDNA) and AD.

Introduction

Alzheimer's disease (AD) is characterized by the accumulation of amyloid-β (Aβ) plaques and hyperphosphorylated tau neurofibrillary tangles (NFTs) in the brain, resulting in progressive neuronal death and synaptic dysfunction. Neuroinflammation and mitochondrial dysfunction, which includes mitochondrial damage and dysfunctional mitophagy, are currently considered critical components in the pathogenesis of AD [1][2][3]. In recent years, there has been increasing talk about "mitoinflammation", an inflammatory response mediated by mitochondria that seems to play an important role in the pathogenesis of various neurodegenerative diseases, including AD.

Neuroinflammation is a defensive response of the brain to harmful stimuli of various origins, such as infections, trauma, and protein aggregation and accumulation [8]. Initially, the neuroinflammatory response exerts a protective effect due to its ability to remove cellular debris and promote tissue regeneration. In the long term, however, the persistence of a chronic neuroinflammatory state, with prolonged release of pro-inflammatory mediators, can cause synaptic dysfunction and neuronal death. Thus, neuroinflammation can participate in the development of neurodegenerative diseases [9]. In light of numerous studies, neuroinflammation is therefore considered a common element in various neurodegenerative pathologies, both acute and chronic (e.g., stroke, vasculitis, AD, Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS), multiple sclerosis), as well as in aging. It is therefore not surprising that, in recent decades, an increasing number of studies on neuroinflammation have been carried out with the aim of laying the basis for future broad-spectrum therapeutic approaches [10].

The main players in neuroinflammatory processes are glial cells, which are classified as macroglia (oligodendrocytes and astrocytes) and microglia. Activated by the presence of pathogens or foreign substances, they are responsible for the innate and adaptive immune responses [11]. Since mitochondria are organelles of symbiotic origin, they are able to trigger an innate immune response through the release of mitochondrial damage-associated molecular patterns (mtDAMPs) [8].
These mtDAMPs are mitochondrial components, including mtDNA, mitochondrial transcription factor A (TFAM), ATP, cytochrome c, and cardiolipin, released into the cytoplasm and extracellular environment when severely damaged mitochondria are not correctly removed by mitophagy. mtDAMPs are recognized by microglial receptors and can induce an immune response with the release of pro-inflammatory cytokines [10].

This review, in addition to a brief description of neuroinflammation, provides an overview of the mitochondrial contribution to neuroinflammation by describing the inflammatory pathways linked to mitochondrial damage, with a focus on the role of mtDNA in the neuroinflammatory response in AD.

The Role of Microglia in Neurodegeneration

Neuroinflammation is a cellular defense mechanism that is triggered as a result of the loss of brain homeostasis. It involves an initial pro-inflammatory phase, which attempts to neutralize the insult, followed by an anti-inflammatory phase. The latter, through the activation of regenerative mechanisms, tends to restore the correct functional structure of the neuronal tissue, repairing the damage and restoring synaptic functionality. When the harmful stimulus persists, as happens in neurodegenerative diseases, the process shifts towards chronic neuroinflammation, with the pro-inflammatory phase overcoming the regenerative phase and leading to tissue damage [11]. This supports the idea of a close relationship between neuroinflammation and neurodegeneration, and suggests an early activation of neuroinflammation that could precede and initiate neuronal degeneration [12,13].

Neuroinflammation is a well-coordinated event in which microglia cooperate with other cells present in the nervous tissue, such as astrocytes, capillary endothelial cells, and infiltrating blood cells that enter when the blood-brain barrier (BBB) is no longer efficient, as often happens in neurodegenerative disorders and aging [14].

Neuroinflammation is regulated by specialized brain immune and inflammatory cells that counteract pathogens and/or tissue insults thanks to the presence of different pattern recognition receptors (PRRs). These cells, mainly microglia and astrocytes, represent about 5-15% and 20%, respectively, of the total central nervous system (CNS) cells [15].

Microglia are considered the resident macrophages of the CNS. Their morphology varies depending on their activation state. In the resting form, they exhibit a branched conformation with numerous thin, elongated cytoplasmic processes that allow them to carry out immune surveillance of the surrounding environment and interact with nearby cells. Activated microglia instead present thickened processes and a progressively reduced degree of branching, until they reach an amoeboid morphology (without branches and with a rounded soma) with great phagocytic capacity [14,16]. At the molecular level, the quiescent state of microglia is characterized by low expression of cluster of differentiation 68 (CD68) and major histocompatibility complex I and II (MHC-I, MHC-II). In contrast, activated microglia express high levels of MHC-II and co-stimulatory antigens, whose function is to present antigens to naïve T cells and trigger pathways that regulate the production of inflammatory molecules [16]. A study by Vela et al.
on the distribution and morphology of microglial cells in the cerebellum of wild-type mice of different ages showed that microglia can present different phenotypes, each linked to specific functional properties [17]. Thus, to highlight their dichotomous activity, some reports simplistically classify microglia into an M1 pro-inflammatory phenotype and an M2 anti-inflammatory phenotype [18,19], the latter further divided into four subtypes: M2a, M2b, M2c and M2d [20].

The activation of glial cells occurs in response to stimuli of different natures, such as microbial products, cytokines, products released by damaged neurons, and disease-related proteins (e.g., Aβ, tau/p-tau or α-synuclein) [25].

As mentioned previously, activated glial cells have a dichotomous action: in acute contexts, they can be protective, eliminating debris and releasing neurotrophic factors that contribute to the activation of tissue regeneration mechanisms. However, excessive activation of glial cells, as occurs in neurodegenerative diseases, can fuel a neuroinflammatory state and damage adjacent neurons through the release of potentially neurotoxic mediators, including cytokines (TNF-α, IL-6, IL-1β, IL-1α), chemokines (RANTES and MCP-1), reactive oxygen species (ROS), nitric oxide (NO), proteolytic enzymes and glutamate [26].

Indeed, maintaining CNS homeostasis requires continuous communication between microglia and surrounding cells, such as astrocytes and neurons [27]. Inflammatory cytokines such as IL-1α, IL-1β, and TNF-α, secreted by active microglia, can in turn induce an inflammatory response in astrocytes [28,29].

Astrocytes are the most numerous glial cells in the CNS, where they perform various functions ranging from the regulation of synaptic plasticity to the maintenance of the BBB [30,31]. Similarly to microglia, they can present a pro-inflammatory phenotype (A1), which produces inflammatory factors such as IL-1β, TNF-α and NO, and a neuroprotective phenotype (A2) capable of releasing protective and neurotrophic factors, such as IL-4, IL-10, active TGF-β1 and brain-derived neurotrophic factor (BDNF), to improve cell survival [1,29,32].

There is a growing body of literature suggesting the involvement of glia in the pathogenesis of AD. Microglia and astrocytes interact with Aβ oligomers and fibrils, modifying their conformation and phenotype. The presence of non-functional microglia can cause an accumulation of Aβ [29]. For example, genetic mutations in the triggering receptor expressed on myeloid cells 2 (TREM2) increase the risk of developing late-onset AD (LOAD). TREM2 is present on microglial membranes and is involved in microglial phagocytosis. The mutations found in AD and the resulting reduction of TREM2 activity decrease the phagocytic capacity of microglia, causing an increase in amyloid plaques [36][37][38].

In recent years, several studies have confirmed the presence of high levels of various cytokines, such as IL-1β, IL-6, IL-18, IL-33 and TNF-α, in various neurodegenerative disorders, including AD [39]. The continuous production of these inflammatory molecules culminates in neuronal death and synaptic dysfunction. For example, it has been shown that the activation of NF-κB induces an upregulation of amyloid precursor protein (APP) in neurons and increased production of Aβ, confirming the pathogenetic role of neuroinflammation in AD [40].
The dual role of microglial activation in the pathogenesis of AD is increasingly recognized. On the one hand, microglia contribute to the production of Aβ and the formation of amyloid plaques through the release of pro-inflammatory mediators. On the other hand, microglia play a neuroprotective role by removing Aβ plaques and producing neurotrophic factors [41].

Mitochondria, between Neurodegeneration and Neuroinflammation

The fundamental role of mitochondria in cellular homeostasis has been widely described and documented in recent years, and their involvement in a large variety of cellular functions is now well known. They regulate ATP production through oxidative phosphorylation, metabolism, intracellular calcium homeostasis, ROS production, steroid synthesis, fatty acid catabolism, cell proliferation and apoptosis [42,43].

Neurons are particularly vulnerable to mitochondrial damage because their ability to produce neurotransmitters and maintain membrane excitability depends on these organelles. Indeed, mitochondria are fundamental in the regulation of presynaptic calcium in central glutamatergic terminals [44]. Therefore, the accumulation of dysfunctional mitochondria leads to cell damage and neuronal degeneration. Furthermore, in recent years, the ability of mitochondria to react to cellular damage has emerged: thanks to their endosymbiotic nature, they can promote the host's immune response. Thus, it is not surprising that more and more studies have supported the central role of these organelles in the pathogenesis of CNS disorders, including through a direct action in neuroinflammation. Several studies have shown an early involvement of mitochondrial damage in cellular and animal models of AD, with a decrease in mitochondrial respiration and a reduction in the activity of complex I, complex II-III, and cytochrome oxidase preceding the accumulation of Aβ in AD mice [45][46][47][48]. Furthermore, alterations in mitochondrial number, morphology, and activity have been highlighted, culminating in an aberrant production of mitochondrial ROS (mtROS) [7,49]. For example, analyzing the frontal cortex of patients with early, definite, and severe AD, Manczak et al. found an imbalance in mitochondrial dynamics, with overexpression of the mitochondrial fission gene and downregulation of the fusion gene, likely due to the interaction between dynamin-related protein 1 (DRP1) and Aβ, suggesting a role of mitochondria in neuronal health and synaptic damage [50].

mtROS, produced by the electron transport chain during ATP production, is considered an important cell-signaling molecule. The overproduction of mtROS is responsible for oxidative damage to proteins, lipids, and nucleic acids, including mtDNA. In this way, it drives mitochondrial dysfunction and apoptosis, and contributes to the progression of several neurodegenerative diseases [47,51]. Furthermore, oxidative stress is responsible for the fragmentation and release of mtDNA from mitochondria to the cytosol and extracellular space, where it acts as a potent inducer of inflammation [7,52,53].
Damaged mitochondria undergo mitochondrial quality control, a mechanism involved in the isolation and destruction of dysfunctional mitochondria via a selective form of autophagy known as mitophagy. We have described this mechanism in detail in a previous review [54]. Neurodegenerative diseases, such as AD and PD, are often associated with an alteration of the mitophagic process and the intracellular accumulation of damaged mitochondria [3]. Dysfunctional mitochondria lose mitochondrial membrane integrity and release mtDAMPs into the cytoplasm and extracellular space [8,55,56]. The mtDAMPs include molecules of different natures, such as mtDNA, cardiolipin, cytochrome c (CytC), TFAM, and N-formyl peptides. Several studies report the involvement of DAMPs in the neuroinflammatory process: they directly stimulate the PRRs present on microglia and astrocytes [57], triggering an innate immune mechanism that contributes to neuronal degeneration [57][58][59].

In the last few years, the role of mitochondria in the pathogenesis of neurodegenerative diseases, and in particular their capacity to drive neuroinflammation, has been increasingly recognized. Therefore, a deeper understanding of the mechanisms underlying the ability of mtDAMPs to regulate the complex neuroinflammatory machinery can make an important contribution to the identification of new therapeutic targets for the treatment of AD.

mtDNA, a Mitochondrial DAMP with Great Potential

Mitochondria are intracellular organelles of endosymbiotic origin and possess bacterial characteristics, such as the presence of the lipid cardiolipin in the membrane, N-formylated peptides, and double-stranded circular DNA with hypomethylated cytosine-phosphate-guanine (CpG) motifs [8,60,61].

Each mitochondrion contains multiple copies of mtDNA, resulting in a large number of mtDNA copies per cell. Being close to the electron transport chain, mtDNA easily undergoes oxidation and therefore has a propensity towards mutations [62].

Events such as the accumulation of mtDNA mutations, increased ROS levels, imbalances in mitochondrial dynamics, and loss of mitochondrial membrane potential exacerbate mitochondrial dysfunction and lead to the release of mtDNA. Defective mitochondria are usually degraded by mitophagy, but if stress persists, damaged mitochondria can escape this quality-control pathway and undergo structural modifications that allow the leakage of mitochondrial components. mtDNA released from distressed neurons can act on astrocytes and microglia to induce neuroinflammation [63]. Therefore, autophagy/mitophagy may represent a control mechanism for the inflammatory process.

Several studies have attempted to explore how mtDNA is released outside the mitochondrion (Figure 1). Garcia et al. conducted studies on rat liver cells exposed to oxidative stress, and observed mtDNA release mediated by the opening of the mitochondrial permeability transition pore (mPTP) [64]. Similarly, McArthur et al., using light sheet microscopy, observed in mouse embryonic fibroblasts that, following activation, BAK and BAX oligomerize, forming large pores on the outer mitochondrial membrane from which components of the mitochondrial matrix, including mtDNA, leak out [65]. Kim et al. proposed that the release of mtDNA from mitochondria subjected to oxidative stress can also occur through the formation of pores on the outer mitochondrial membrane by voltage-dependent anion channel 1 (VDAC1) oligomerization [53].
In addition to being released as free molecules, mtDNA can also be packaged inside vesicles of various kinds, including mitochondria-derived vesicles (MDVs) or exosomes [10,66]. These vesicles are able to activate the inflammatory response in immune cells through the cGAS/STING pathway [67].

Neuroinflammation Activated by mtDNA

Sterile inflammation occurs in the absence of pathogens and begins when PRRs bind to DAMPs released following cellular damage. PRRs are expressed on different cell types involved in the inflammatory response, such as macrophages, neutrophils, dendritic cells, microglia, and astrocytes [7]. mtDNA present in the extracellular environment is recognized as a DAMP by glial cells, triggering an immune response through the activation of several PRRs, such as cyclic guanosine monophosphate-adenosine monophosphate (GMP-AMP) synthase (cGAS) and the NLRP3 inflammasome, present in the intracellular compartments, and Toll-like receptors (TLRs) expressed on the cell membrane [8,[68][69][70]].

cGAS-STING

cGAS is a cytosolic double-stranded DNA (dsDNA) sensor, predominantly expressed in microglial cells [6]. It is capable of triggering the type I interferon (IFN-I) pathway, culminating in the induction of the expression of IFN-I (which includes IFN-α and -β) and inflammatory cytokines through the translocation of interferon regulatory factor 3 (IRF3) and NF-κB to the nucleus [71]. Since mtDNA has a double-stranded structure, it is a central activator of cGAS-STING signaling [72].

Briefly, cGAS is constitutively present as an inactive protein, mainly in microglial cells. Contact with dsDNA induces a conformational change in the cGAS protein which allows interaction with ATP and GTP, and the production of the second messenger cyclic GMP-AMP (cGAMP). cGAMP activates the stimulator of interferon genes (STING) protein, which translocates from the endoplasmic reticulum (ER) to the Golgi compartment, where it binds TANK-binding kinase 1 (TBK1), promoting its autophosphorylation [6,73,74]. The STING-TBK1 complex, through the translocation of the IRF3 transcription factor and NF-κB to the nucleus, promotes the production of IFN-I and of the genes encoding inflammatory cytokines, including IL-6 and TNF-α [75,76] (Figure 2).

Several studies have shown the involvement of the cGAS-STING pathway in AD, suggesting a role in its onset and progression [77]. Studies conducted by Xie et al. in the 5xFAD mouse model of AD showed colocalization between phosphorylated STING and the activated microglial marker CD68 around Aβ plaques, and found increased interactions between cGAS and dsDNA in both the human AD brain and the 5xFAD mouse [78]. Additionally, treatment with H-151, a STING inhibitor, reduced inflammation and the presence of Aβ42 in the cortex of 5xFAD mice. H-151 also decreased Aβ42-induced IL-6 production in human HMC3 microglial cells [78]. Furthermore, increased phosphorylation of STING, TBK1, p-65, and IRF3 was measured in the prefrontal cortex of patients with AD. Moreover, elevated levels of IFN-I have been measured in human AD brains post-mortem [6].

Similarly, Hou et al. showed increased levels of cGAS and STING protein expression in the brains of APP/PS1 mice compared to wild-type mice. In contrast, genetic deletion of the cGAS gene improved neuroinflammation and reduced cognitive impairment [79].
Another study conducted in 5xFAD mice confirmed an upregulation of the cGAS-STING pathway in an AD model. The silencing of microglial cGAS in the early phase of the pathology significantly limited plaque formation, preserved synaptic integrity, and protected mice from Aβ-induced cognitive impairment. This suggests that cGAS-STING signaling may have an important role in activating the pro-inflammatory microglial phenotype that drives the pathology [80]. Post-mortem human AD brain samples showed increased expression of STING in neurons adjacent to amyloid plaques compared to age-matched control brain samples [6]. Furthermore, IFN signaling is increased in AD brains, and the analysis of healthy and AD post-mortem brain samples highlighted the upregulation of phosphorylated TBK1 levels in AD brains [80][81][82]. Similarly, a study of AD conducted on different mouse models at different ages found an upregulation of IFN-I response genes, together with early memory decline and a progressive accumulation of Aβ [80,83], while the genetic ablation of Cgas in a mouse model of tauopathy reduced the microglial IFN-I response, preserved synapse integrity, and improved cognitive impairment [84].
IFN-I binds to the interferon alpha receptor (IFNAR), composed of the transmembrane subunits IFNAR1 and IFNAR2, to activate the proteins Janus kinase (JAK), tyrosine kinase 2 (TYK2), and signal transducer and activator of transcription (STAT). Once phosphorylated, STAT proteins move to the nucleus, further regulating immune cell recruitment and inflammatory progression. It is therefore possible to reduce neuroinflammation by acting on this pathway. In fact, various studies have reported that the downregulation of IFNAR1 leads to an improvement in astrocytic activity, a decrease in IFN-I and pro-inflammatory cytokines, and an attenuation of microglial proliferation around amyloid plaques [85]. Similarly, the ablation of IFNAR1 and IRF7 in the APP/PS1 transgenic mouse provides some protection from Aβ-induced neurotoxicity [86,87].

Finally, it is interesting that molecules involved in the cGAS-STING pathway can interact with beclin-1 to promote mitophagy in innate immune cells and increase mtDNA degradation [88]. Parkin or Pink1 knockout mice develop an inflammatory phenotype that can be alleviated by genetic inactivation of STING [89]. These findings suggest that the cGAS-STING pathway can be a potent therapeutic target to control mitoinflammation through mitophagy. Together, these data confirm the importance of the cGAS-STING pathway in neuroinflammation related to AD pathology. It is therefore worth pursuing further studies in this field to find therapeutic targets capable of improving the pathological condition of AD.

NLRP3

The NOD-like receptor (NLR) family represents another DAMP sensor belonging to the PRRs. Some NLRs, once activated by interacting with DAMPs, form a multiprotein complex called an "inflammasome". In general, an inflammasome consists of a molecular NLR receptor, an adapter protein, and the caspase-1 precursor. Inflammasomes allow the activation of caspase-1 and the subsequent maturation and release of the pro-inflammatory cytokines IL-1β and IL-18 [90,91].

Inflammasome activation requires two stimuli: a priming signal, provided by an inflammatory stimulus such as TLR or TNF-α receptor engagement, leading to NF-κB-mediated NLRP3 expression and the upregulation of pro-IL-1β and pro-IL-18; and an activation or danger signal, provided by pathogen-associated molecular patterns (PAMPs) or DAMPs, which promotes inflammasome assembly [92,93].

NLRP3, the most studied inflammasome, is mainly present in microglial cells [94]. When inactive, NLRP3 localizes to the ER membrane and the cytosol, but when both NLRP3 and its adapter ASC (apoptosis-associated speck-like protein containing a caspase recruitment domain (CARD)) are activated, they relocate to the mitochondria-associated membrane (MAM) fraction. Here, they can detect ROS and DAMPs produced by damaged mitochondria, such as mtDNA [95].
The NLRP3 inflammasome consists of the NLRP3, ASC, and caspase-1 precursor proteins. Upon inflammasome activation, caspase-1 converts pro-IL-1β, pro-IL-18 and gasdermin-D (GSDMD), a pyroptosis inducer, into their active forms [96]. NLRP3 is a multimeric protein consisting of a conserved core nucleotide-binding and oligomerization domain (NOD or NACHT), a C-terminal leucine-rich repeat (LRR) domain, and an N-terminal pyrin domain (PYD). The NOD domain, thanks to its ATPase activity, is necessary for the self-oligomerization of the molecules at the beginning of inflammasome assembly. The LRR domain is essential for recognizing PAMPs and DAMPs, and for maintaining NLRP3 in an inactive state [90,91]. ASC recruits pro-caspase-1 through CARD-CARD interaction. When pro-caspase-1 molecules come together, they undergo an autocatalytic cleavage process that cuts pro-caspase-1 into the p20 and p10 subunits. These subunits bind another identical set of subunits to form an active tetramer. Once activated, caspase-1 cleaves pro-IL-1β and pro-IL-18 into their active forms (IL-1β and IL-18) and induces their secretion [96,97].

The release of the cytokines IL-1β and IL-18 induces the activation of the pro-inflammatory microglial M1 phenotype [11,93]. Furthermore, the NLRP3 inflammasome, through the production of GSDMD, triggers a form of pro-inflammatory cell death known as pyroptosis [96] (Figure 2). Gasdermins can bind to membrane lipids, altering their integrity, creating pores in the cell membrane, and facilitating the secretion of the inflammatory cytokines IL-1β and IL-18, as well as many intracellular DAMPs that can trigger the inflammatory process in nearby cells, amplifying any ongoing neuroinflammation through a feed-forward mechanism [98][99][100].

Studies conducted in recent years have shown the involvement of mitochondrial damage in the activation of the NLRP3 inflammasome, identifying mtDNA as the main activator [100][101][102]. It has recently been observed, in cellular and mouse models of AD and in the brains of human patients, that the amyloid-beta peptide is also able to activate microglial NLRP3 inflammasomes [103,104].

The activation of the NLRP3 inflammasome is involved in the pathogenesis of various neurodegenerative diseases, including AD [103,105]. The use of NLRP3 and caspase-1 knockout mice demonstrated the involvement of the NLRP3/caspase-1 axis in the pathogenesis of AD. Indeed, APP/PS1 mice deficient in caspase-1 or NLRP3 showed a significant improvement in spatial memory and hippocampal synaptic plasticity [106][107][108]. Conversely, increased expression of the caspase-1 and NLRP3 genes promotes Aβ accumulation and facilitates lesion production in the brains of APP/PS1 transgenic mice [109].

Furthermore, high levels of IL-1β have been found in the serum, cerebrospinal fluid, and brains of patients with AD and other types of dementia [110][111][112][113][114]. Once released, the effector molecule IL-1β increases the production of Aβ by neurons and participates in the phosphorylation of the tau protein [115]. Consistently, the inhibition of IL-1β release reduced neuroinflammation and the accumulation of Aβ and tau, and improved cognitive dysfunction and memory in 3xTg-AD mice [116].
Similarly, high levels of IL-18, the other pro-inflammatory cytokine released following activation of the NLRP3 inflammasome, have been found in the bodily fluids of patients with mild cognitive decline and AD [113,114,117]. Furthermore, the involvement of IL-18 in tau hyperphosphorylation through glycogen synthase kinase 3β (GSK-3β) and cyclin-dependent kinase 5 has been demonstrated [118].

Activation of the NLRP3 inflammasome seems to occur already in the early stages of AD pathology. Indeed, patients with early or mild stages of AD showed higher levels of IL-1β and caspase-1 compared to age-matched controls [119,120].

An interesting study supporting the role of microglia in the clearance of Aβ plaques reports that the inflammasome components NLRP3 and caspase-1 colocalize with p-tau and Aβ in glial cells, and that the resulting cytokines IL-1β and IL-18 are more highly expressed in the temporal cortex of the post-mortem AD brain [114,121]. Furthermore, cytokines produced by the activation of the microglial NLRP3 inflammasome are able to induce the inflammatory response of astrocytes responsible for neuronal damage and synaptic dysfunction in AD models [28,122].

Therefore, based on the numerous findings associating the inflammasome with the neuroinflammatory response and the pathogenesis of AD, attention has been focused on the NLRP3 inflammasome as a possible therapeutic target for the treatment of AD. Several inhibitors have been tested in vivo with encouraging results. For example, MCC950 (also known as CRID3 and CP-456773) is an experimental drug capable of inhibiting NLRP3. It has been shown to attenuate the activation of reactive microglia in a mouse model of sporadic AD induced by streptozotocin [123] and to improve cognitive function in the APP/PS1 and SAMP8 mouse models of AD [124,125]. Recently, MCC950 and other NLRP3 inhibitors, such as Inzomelid, have undergone Phase 1 clinical trials for AD with encouraging results [126]. JC124, another small-molecule inhibitor of the NLRP3 inflammasome, works by blocking caspase-1 activation and the secretion of IL-1β. The inhibition of the NLRP3 inflammasome with JC124 in APP/PS1 mice significantly reduced Aβ plaques and neuroinflammation, leading to improved synaptic plasticity and cognitive function [127].

VX-765, also known as Belnacasan, is a BBB-permeable caspase-1 inhibitor that has already been approved by the Food and Drug Administration (FDA) for clinical trials in humans. This molecule, tested on J20 mice (a model overexpressing human APP with a mutation linked to familial AD) and on Sprague-Dawley rats, improved memory capacity, blocked Aβ deposition, attenuated neuroinflammation, and re-established synaptophysin levels in the mouse hippocampus [128,129].
TLR

Astrocytes and microglia may also be activated via another family of PRRs, the TLRs, which are expressed in different immune cell populations, including B cells, dendritic cells, and cells of the monocyte/macrophage lineage, such as microglial cells, where they localize in endosomal vesicles [130]. Their expression varies among immune cells. In human AD brain immune cells, TLR mRNA is overexpressed compared to the healthy brain, with the exception of TLR2, which remains unchanged [131]. In humans, the TLR family (TLR1 to TLR10) is mainly composed of type I transmembrane glycoproteins, which can be located both on the cell surface (TLR1, 2, 4, 5, 6 and 10) and on intracellular membranes (TLR3, 7, 8 and 9) [132,133]. Each one detects distinct external pathogen-associated molecular patterns or internal damage-associated molecular patterns. For example, bacterial lipopolysaccharide (LPS) is recognized by TLR4, lipoproteins by TLR2, flagellin by TLR5, single-stranded viral RNA (ssRNA) by TLR7, and double-stranded viral RNA (dsRNA) by TLR3 [130,134].

TLR9 recognizes the hypomethylated CpG motif typical of bacterial DNA and mtDNA [8]. mtDNA contains unmethylated or hypomethylated CpG sequences, which are ligands recognized by TLR9 in the endolysosomal compartment [135]. Several studies have supported the idea that mtDNA is an endogenous agonist of TLR9 [136,137].

TLRs use different adapters, the most widely used being myeloid differentiation primary response protein 88 (MyD88). The binding between mtDNA and TLR9 triggers a signaling cascade, through MyD88, which culminates in the activation of mitogen-activated protein kinases (MAPKs) and the nuclear transcription factor NF-κB [138,139].

TLR activation has been associated with immune responses that contribute to the attenuation of the pathological signs of AD [144]. In a series of studies, Scholtzova et al. investigated the possibility of using TLR9 as a therapeutic target in AD models, analyzing the effects of TLR9 activation in three different transgenic mouse models of AD [145][146][147]. Monthly intraperitoneal injections of CpG oligodeoxynucleotides (CpG ODN), a TLR9 agonist, in three different mouse models of AD (Tg2576, 3xTg and Tg-SwDI mice) produced significant cognitive improvement and a reduction in fibrillar and soluble Aβ. In contrast, microglial and macrophagic markers were essentially unchanged, indicating that no significant activation had occurred. In 3xTg mice, CpG ODN treatment also reduced the tau pathology characteristic of this model [147].

Patel et al. confirmed the immunomodulatory role of the TLR9 CpG agonist ODN 2006 with experiments conducted on aged squirrel monkeys, an AD model with pathological characteristics quite similar to those of humans [148]. The administration of CpG ODN 2006 produced significant cognitive improvements, suggesting that this immunomodulatory approach may also have therapeutic potential in patients [148].
It should also be noted that, in AD models, some studies on the modulation of TLRs have reported side effects [149,150]. This can be explained by hypothesizing that, depending on the type of ligand used, the dose administered, the frequency of administration, and the stage of the disease, different signaling pathways can be activated. These pathways can lead either to a neuroprotective inflammatory response, with greater phagocytic activity and production of anti-inflammatory cytokines, or to excessive inflammation and neurotoxicity.

Therapeutic Strategies

Several drugs are currently being tested in clinical trials for AD. Some of them target the two main pathological features of the disease, Aβ plaques and the tau protein. Others are aimed at anti-inflammatory mechanisms, while a considerable number are directed towards mitochondria.

At present, there are no treatments capable of stopping the progression of AD. The FDA has approved some drugs with different actions for clinical use. Tacrine, donepezil, rivastigmine, and galantamine act as acetylcholinesterase (AChE) inhibitors, whereas memantine is an N-methyl-D-aspartate (NMDA) receptor antagonist. These drugs help to alleviate the symptoms of the disease, but cannot cure it. Lately, the focus has shifted to monoclonal antibodies targeting Aβ aggregation and, in 2021, the IgG1 monoclonal antibody aducanumab (BIIB037, ADU) was approved. Meanwhile, lecanemab (BAN2401) has completed a multicenter, double-blind Phase III study with 1795 participants, showing a reduction in amyloid markers and cognitive improvement in patients with early AD, and a slightly better safety profile than aducanumab [151,152]. These findings convinced the FDA to authorize the use of this monoclonal antibody for patients with mild cognitive impairment (MCI) or the mild dementia stage of AD.

Nonsteroidal anti-inflammatory drugs (NSAIDs) have shown a potential therapeutic effect in AD [153]. Epidemiological studies have suggested that long-term use of NSAIDs is related to a decreased risk of AD [14]. However, clinical studies have not confirmed such benefits, with the exception of indomethacin and naproxen [14,154]. Recently, a new, promising NSAID called itanapraced (CHF5074 or CSP-1103) has emerged. This drug has completed several Phase II clinical trials (NCT01303744, NCT01602393, NCT01421056), proving able to restore microglial function, increase phagocytosis, and decrease the production of pro-inflammatory cytokines, although no significant differences between treatment groups were found in neuropsychological tests [155,156].

Pharmaceutical companies are also looking closely at the NLRP3 inflammasome as a therapeutic strategy for several diseases, including AD. The aim is to inhibit the NLRP3 inflammasome and reduce the production of pro-inflammatory cytokines. In recent years, many NLRP3 inhibitors have been tested in preclinical studies; these have been discussed in depth by Barczuk et al. [97,104]. Currently, some NLRP3 inflammasome inhibitors are in Phase II trials for the treatment of AD [97,157].

Drugs with Mitochondrial Action

As regards molecules with mitochondrial targeting, in keeping with the purpose of this review, we will focus on antioxidant molecules and drugs acting on the permeability of the mitochondrial membrane, which can influence the release of mitochondrial DAMPs, as well as on drugs targeting the autophagic process, which can regulate the mitochondria-driven inflammatory response.
Melatonin and its precursor N-acetylserotonin (NAS) exert several potential anti-AD properties, including antioxidant capacity, and improve mitochondrial health by inhibiting the mPTP. Moreover, melatonin has anti-inflammatory properties: it suppresses NLRP3 activation and cytokine release, and shifts microglia towards an M2 anti-inflammatory phenotype [158]. Piromelatine, a melatonin receptor agonist, showed a significant improvement in cognitive performance in patients with AD in a 24-week clinical trial [159]. However, a recently completed Phase II clinical trial (NCT02615002) showed no statistically significant progress in cognitive performance [160]. A long-term, prospective observational study investigating the effects of melatonin on AD progression has just concluded, but its results are not yet known (NCT04522960). A new randomized efficacy and safety study of piromelatine versus placebo in participants with mild dementia due to AD is currently being carried out (NCT05267535).

Mitoquinone mesylate (MitoQ), based on coenzyme Q10, and SkQ1, a plastoquinone derivative, were shown to be effective antioxidants in vitro and in vivo; both are specifically targeted to mitochondria by covalent attachment to a lipophilic triphenylphosphonium cation [161,162]. 3xTg mice treated with MitoQ for 5 months showed reduced Aβ-induced cell death and oxidative stress in cortical neurons. Treatment with MitoQ also reduced Aβ accumulation, astrogliosis, and synaptic loss, leading to improved cognitive functions [163]. However, clinical trials have so far shown no significant effect on cognitive decline (NCT00117403).

Astaxanthin, another mitochondria-permeable antioxidant that can penetrate the BBB, has also shown the ability to modulate neuroinflammation [164]. It is currently being used in a randomized, double-blind, placebo-controlled trial to test its efficacy in AD (NCT05015374).

Hydralazine is an FDA-approved drug with neuroprotective effects deriving from different actions: it is a strong antioxidant, improves mitochondrial health, and activates autophagy, decreasing intracellular aggregates [165]. A Phase III, triple-blind, parallel, double-armed randomized clinical trial is currently underway (NCT04842552).

Glutathione (GSH), an endogenous antioxidant, is fundamental for mitochondrial function, and its deficiency is linked to AD [166]. N-acetyl-cysteine (NAC) is a compound that can cross the BBB and provides precursors for GSH synthesis. Several studies have shown that NAC has antioxidant and anti-inflammatory activities, protects from Aβ-induced toxicity, reduces Aβ levels, decreases the amount of phosphorylated tau, and preserves cognitive function [167][168][169]. A randomized, double-blind study evaluated a combined therapy based on a nutraceutical formulation composed of NAC, folate, vitamin E, vitamin B12, S-adenosyl methionine, and acetyl-L-carnitine in AD subjects. Subjects who received this formulation showed improvements on their dementia rating scale [170]. There is currently an ongoing randomized clinical trial evaluating the effects of 24 weeks of NAC supplementation compared to placebo in patients with AD. This study will measure changes in cognitive function, metabolic and mitochondrial activity, oxidative stress, and brain inflammation (NCT04740580).
Nicotinamide adenine dinucleotide (NAD+), a cofactor for several proteins including sirtuins, is directly involved in mitochondrial biogenesis and mitophagy [171]. The NAD+ precursors nicotinamide riboside (NR) and nicotinamide mononucleotide (NMN) are powerful inducers of mitophagy. In APP/PS1 mice, NR improved cognitive functions by reducing cortical Aβ deposits and increasing mRNA levels of the mitophagy proteins PINK1 and LC3 [172]. Moreover, NR treatment reduced the expression of pro-inflammatory cytokines and decreased the activation of microglia and astrocytes. NR treatment also reduced NLRP3 activity and cGAS-STING activation [79]. Similarly, NMN reduced inflammation and Aβ accumulation in the brain, inhibited neuronal apoptosis, preserved mitochondrial function, and improved cognitive impairment in AD mice [173,174]. There is currently an ongoing Phase I clinical trial to evaluate the effect of NR on brain energy metabolism, oxidative stress, and cognitive function in individuals with MCI and mild AD (NCT04430517). Moreover, a Phase I/II clinical trial is underway to evaluate whether the microcrystalline form of NMN (MIB-626) penetrates the BBB and to estimate its effect on circulating biomarkers of aging (NCT05040321).

Several studies suggest that some hypoglycemic drugs can improve mitochondrial performance by preventing both ROS overproduction and mitochondrial dysfunction [54,[175][176][177]]. Moreover, different clinical trials have shown that the administration of intranasal insulin, metformin, or thiazolidinediones (pioglitazone and rosiglitazone) in patients with MCI and AD can improve cognitive performance and memory [178]. The action of insulin on mitochondria has been widely discussed in a previous review [54].

Metformin stimulates autophagy by acting on AMP-activated kinase (AMPK), improves mitochondrial function, and reduces inflammation [179,180]. However, two Phase II clinical trials (NCT01965756 and NCT00620191) showed no significant improvement in cognitive function after metformin administration [181,182], and highlighted some side effects (gastrointestinal symptoms and vitamin B12 deficiency) [183,184]. To date, there are three ongoing clinical trials, currently in the early stages, investigating the effects of metformin on cognitive function and brain health in individuals with MCI or at risk for AD (NCT04511416, NCT05109169 and NCT04098666).

In Table 1, we summarize the current therapeutic approaches targeting mitochondria in the treatment of AD described in this section.

To sum up, several molecules are able to improve mitochondrial function and show positive effects in preclinical models of AD. There are many limitations to overcome, such as low bioavailability, rapid metabolism, and poor ability to cross the BBB. While waiting for new formulations aimed at overcoming these limits, adopting a healthy lifestyle (Mediterranean diet, caloric restriction, and physical exercise) can help preserve cognitive function.

Conclusions

Mitochondrial dysfunction and neuroinflammation are two important factors in neurodegenerative diseases such as AD. In the scientific field, there is still debate about the triggering event. Glia recognize mitochondrial DAMPs, including mtDNA, released from damaged mitochondria, activating various inflammatory pathways through PRRs.
The dichotomous nature of the cells implicated in inflammation suggests that intervening in the inflammatory process by suppressing the activity of glial cells is not an easy strategy with assured success. This could explain the partial failure of clinical trials of anti-inflammatory drugs in AD patients [185][186][187][188]. Intervening in the most appropriate time window during the pathological process requires a better understanding of the timing and succession of the activation mechanisms in the development of the disease.

Acting in the initial stages of the pathology can help to distinguish the neuroprotective from the neurotoxic microglial phenotype. Stimulating the former and blocking the latter through selective interventions could be a winning strategy against neurodegeneration. Furthermore, understanding how to modulate the levels of mtDNA and inflammatory DAMPs could also open the way to new intervention possibilities. For example, in recent work, Zheng et al. used small molecules capable of intercalating into mtDNA to stimulate the release of mtDNA fragments, activating the cGAS-STING pathway and modulating a specific immune response [189].

Further studies are needed to better understand these processes and investigate the interaction pathways among these mechanisms. This could allow us to identify key molecules that could become therapeutic targets for the treatment of neurodegenerative diseases and help us find the most effective strategies to tackle the great challenge of AD.

Figure 2. Inflammatory pathways driven by mtDNA. Following cellular stress, mitochondria are damaged, leading to the accumulation and release of oxidized mtDNA fragments. mtDNA triggers an inflammatory response via the cytosolic NLRP3 or cGAS-STING pathways, or via endosomally localized TLR9 signaling. In turn, this promotes the activation of NF-κB and the transcription of pro-inflammatory genes, such as interferons (IFNs) and pro-inflammatory interleukins.

Author Contributions: Conceptualization, G.G.; writing-original draft preparation, G.G.; writing-review and editing, G.G. and M.D.C.; supervision, M.D.C. All authors have read and agreed to the published version of the manuscript.

Table 1. Summary of current clinical trials involving mitochondria-targeted therapies for AD treatment.
Has the social justice approach become pervasive as a tool for fighting HIV in women? The case of Zambia

Objective

Research has consistently shown how gender-based social inequality in countries like Zambia leads to disproportionately higher HIV prevalence rates among women aged 15 to 45 years old. In response, the social justice approach has become the gold standard in the HIV response. Despite its continued application, little is known about how this approach is received and experienced by the people it is meant to serve. The aim of this study is thus to fill this gap by investigating Zambian women's interpretation of and experience with the social justice approach as a tool for fighting HIV infection.

Results

The social justice movement's role in highlighting different gender-based social inequalities was praised by our participants; however, there are several ways in which its application proved counterproductive in the context of Zambia. Thus, in many ways our respondents remained resistant to the approach, thereby closing down opportunities for fighting social inequality and HIV. Overall, our findings indicate that rather than definitively establishing the social justice approach as an incontestable good, there is more to be gained from paying attention to the diverse ways it is viewed by the people it is meant to serve.

Introduction

In Zambia, the HIV prevalence rate for women is around 16%, which is 25% higher than that of men [1]. Other than biological factors, research has consistently shown that this difference is caused by complex gender inequalities which constrain women's agency to determine safer sexual practices [2]. Thus, in order to ameliorate the HIV scourge, several public health programs have adopted the social justice approach. The social justice approach involves the design and implementation of HIV/AIDS programs based on social justice principles centered on the promotion of the rights of women [3]. This approach has been popularized and adopted by funders, local Non-Governmental Organizations (NGOs) and the Zambian government to serve as an antidote to the HIV scourge [4]. Despite the continued application of the social justice approach, little is known about how effective it is in inspiring behavioral change among its target population, Zambian women. It also remains unclear to what extent it is appropriate for tackling the social inequality responsible for high HIV rates among women in Zambia. Thus, the aim of this study is to fill this gap by investigating Zambian women's experiences with the social justice approach as a tool for fighting HIV infection.

Theoretical framework

The study is located against the background of Social Representations Theory (SRT). SRT holds that people in society give meaning to a new or strange social phenomenon based on shared experiences and as a result of social interaction with others [5]. Thus, in order to assess the usefulness of interventions, there is a need to investigate the social representations of that particular phenomenon in the given society. Social Representations Theory further postulates that in order to understand the impact, experience and acceptance of a given social phenomenon (in our case the social justice approach), it is important to investigate local people's interpretation and characterization of this phenomenon [5]. We thus use this theory heuristically to guide our investigation, interpretation and presentation of findings.
In this sense, the framework allows us to investigate the ways in which local women find the social justice approach useful, as well as its limitations.

Ethical clearance

Before the study was conducted, we obtained written ethical clearance from the National Health Research Authority of Zambia (NHRAZ). We also collected written informed consent from participants.

Setting

The study was conducted in Lusaka city; Lusaka is the political and economic capital of Zambia. Lusaka has a population of around 1.7 million people and also has some of the highest HIV rates in the country. The city also hosts the majority of NGOs conducting social-justice-centered HIV prevention activities. It is for this reason that Lusaka was selected as a case study.

Sampling

Participants were recruited through convenience and purposive sampling techniques. In total, 63 women were selected to take part in the study. All participants were beneficiaries of social-justice-related HIV prevention programs in their communities, through taking part in the 'rights-for-women' advocacy program, which was centered on promoting women's empowerment and the rights of women to fight HIV. The selected participants varied in marital status, age, occupation and educational level. This variety of participants was meant to increase the diversity of opinions expressed.

Data collection

In order to collect data, 6 focus group discussions (FGDs) were held separately in different parts of Lusaka (Misisi, Kanayama, Matero, Kabwata, Mtendere, Chilenje). Each FGD consisted of about 10 or 11 participants. FGDs were conducted in Chinyanja, the local language, with English used where possible. An FGD guide was used to steer the discussion. This guide included about 12 general questions, covering, among other things, the reasons why participants took part in the HIV prevention programs, what their experience was, what they thought was good or bad about the programs, what the value of the programs was in the fight against HIV, and what was lacking in the programs. We also asked follow-up questions to ensure thorough discussions.

Analysis

The thematic analysis technique, with the help of NVivo software, was used to analyze the data. Thematic analysis is a technique used to examine and describe phenomena by relying on themes emerging from the data [6]. Here, similar opinions from participants were clustered together to build global themes. A summary of the results arising from the data analysis is presented in Table 1.

Results

Using Social Representations Theory, our results showed that although representations of the social justice approach varied, respondents in general noted that an emphasis on social justice, in particular the rights of women and female empowerment, was necessary for addressing the complex drivers of the HIV epidemic in Zambia. They argued that the social justice campaign proved effective in raising awareness about the discrimination and inequality that women suffer:

"It is clear that these campaigns have brought to light many issues such as social inequality that most women face and how this then pre-disposes them to the risk of HIV."

It was clear that the majority of the respondents appreciated the "spirit" behind the social justice movement but remained critical of the manner in which it was being applied in Zambia. Firstly, they highlighted that the confrontational nature of the approach (in which women were expected to challenge their male counterparts) was consistently alienating potential male allies.
Our participants noted that the social justice tactics and conversations premised on 'calling out' perpetrators gave birth to 'enemies' instead of 'allies'. This therefore made it difficult for the well-intended message of curbing social inequality to actualize within communities. "It is a game of calling out each other. It has been packaged as a confrontational initiative which gives no room to building workable relationships with male counterparts. Other women also see this as problematic." In the fight against HIV, our participants also questioned what they termed a misrepresentation of perpetual outrage against men as a proxy for women's empowerment. They postulated that social justice campaigns had become "toxic" in that they were focused on aggressively demanding change from men instead of inspiring collaboration. They also wondered why men who were branded as perpetrators were left out of the conversation. They therefore argued that such limitations made it difficult for them to embrace the approach. "Where is the logic in being angry? It is really toxic these days. It's all about challenging men, while at the same time leaving the very perpetrators (men) out of the conversation." Further, the participants were concerned about who controlled the 'language' of social justice in relation to HIV. Their worry was expressed in two ways: a) the language and narrative around social justice was controlled by women in positions of power (such as high-class NGO workers); b) the narrative on social justice was riddled with neocolonial, tyrannical tendencies, as seen from a conceptualization and implementation that reflected a western characterization. The combination of the above led to a detachment of the social justice movement from the lived experiences of local women. Local women thus felt that this state of affairs contributed to the false idea that there was only one way to think about, talk about, and ultimately do activism on social justice. This further closed down any possibilities of incorporating a plurality of tactics in dealing with social inequality, thereby leaving citizens (both male and female) resistant to social justice campaigns, much to the detriment of the HIV response. "Who is in control? Us? This is a movement of and by rich women who are driving fancy cars. They have money; they do not know our situation. They speak from air-conditioned offices after receiving funding from the West. This is why people are not taking them serious these days." Moreover, the vast majority of the discussion focused on the inappropriateness of the social justice approach, which was accused of being incompatible with the cultural and religious values of Zambia due to its emphasis on challenging traditional authority hierarchies such as male dominance in marriages. This was out of touch with our participants' realities because the approach specifically threatened relationships which they relied upon for their sustenance. They further noted that the social justice campaigns were merely symbolic gestures that did not guarantee tangible alternatives for economic survival for women, who in most cases were unemployed. "We have our own culture; this is the same culture that has allowed us to live in peace in marriage; what is the value of adopting this western-precipitated culture that threatens our very survival. Men provide for women here."
" Discussion The Social Representation Theory helped illuminate the varied experiences of the social justice movement from the perspective of the women it is meant to serve. In this sense, our results indicate that the social justice approach is seen as useful in highlighting the genderbased social inequalities which are responsible for the Global theme Positive characterization Important in highlighting discrimination Made women aware of their rights Opens up conversation on social inequality Provides women with information on where to report abuse Opportunities to gain information on HIV and sexual rights and health Negative characterization Lack of flexibility in implementation Conflict with cultural and religious values Failure to address women's priorities and day-to-day realities Social justice is an "un-Zambian" Western concept Disrupts survival networks (marriage) of unemployed women Women felt disconnected with the approach Implementation of approach done in a way that is too confrontational It conflicts traditional cultural and religious values Alienates potential allies Creates enemies Ignores men in the conversation It is merely a symbolic gesture without substantial economic survival opportunities Champions of the approach are out of touch with local realities Ignores local strategies It is neocolonial It is a hegemonic imposition of western culture and standards disproportionately higher HIV rates in women. The approach also seeks to tackle the symbolic drivers of HIV which are rooted in religious and cultural systems of discrimination against women. Yet at the same time, there are several ways in which this approach contradicts many of our participants' priorities, needs, and their version of effective-implementation. That notwithstanding, our study highlights a problematic bourgeois-precipitated sociocultural imposition of hegemonic understandings of social justice on 'less-powerful' women without regard to their realties. A western understanding of social justice which is usually channeled through affluent local women has failed to find fertile ground among the most vulnerable women in communities [7]. The approach has been consumed with a false assumption that there is only one way to think about, talk about, and ultimately promote social justice [8]. In Zambia, where most women are uneducated and unskilled, the 'modern' worldviews associated with the social justice movement represents a world that these local women don't see themselves as having access to. It is for this reason that this approach as a tool to combat HIV has failed to achieve local buy-in. Further, in line with sentiments by Farmer [8] and Englund [9], our study illustrates local women's frustration with HIV-prevention programs that advocate for awareness of social justice while neglecting issues like poverty, unemployment, religion and culture. Among most Zambian women, cultural and religious understandings of gender relations serve as important economic and social protection mechanisms. Although social justice narratives claim to address deep-rooted social inequalities, in reality they do so only symbolically without providing tangible social and economic alternatives to local women. In settings where men control access to economic and social resources-including privileged access to scarce job opportunities, women who are mostly unskilled rely on the support of their male counterparts [10,11]. 
Just as other studies [7] from Zambia have shown, without providing alternatives, the social justice approach could be harming the people it is meant to serve. Its insistence on aggressively challenging socially unequal gender relations while ignoring the role these relations play in the survival of women is seen as counterproductive. Further, in agreement with another study from Zambia [7], the social justice approach in Zambia is viewed as a neo-colonial project which seeks to disrupt cultural and religious networks of survival. Many of our participants consequently called for locally-informed approaches to HIV prevention initiatives, ones that are alive to the lived realities of local women. Our respondents also suggested that rather than branding men as the "enemy to be fought", involving them as partners in the fight against HIV had the potential to yield better outcomes. Thus, instead of seeking perfection, our participants seemed to favor a social justice of imperfection and responsibility; one which constantly investigates its own reproduction of, and complicity in sustaining, social inequality. This type of social justice intervention should be based on seeking context-specific strategies and a plurality of tactics in bringing about a social change that is more feasible in the given context. Limitations Findings from this study are based on the views of respondents located in only one of the ten provinces of Zambia. This may have limited the variety of experiences with the social justice approach vis-à-vis the HIV response. However, we posit that this study from Zambia is adequate and relevant in highlighting insights into the ways the social justice approach creates or inhibits opportunities for HIV response in Zambia.
2019-07-04T14:18:38.724Z
2019-07-03T00:00:00.000
{ "year": 2019, "sha1": "f2e5794fa63d2f6507dca7576a3c754a090dac8f", "oa_license": "CCBY", "oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/s13104-019-4420-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f2e5794fa63d2f6507dca7576a3c754a090dac8f", "s2fieldsofstudy": [ "Sociology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Sociology" ] }
260395259
pes2o/s2orc
v3-fos-license
Effective Attenuation of Arteriosclerosis Following Lymphatic-Targeted Delivery of Hyaluronic Acid-Decorated Rapamycin Liposomes Background The activation of lymphatic vessel function is the crux of resolving atherosclerosis (AS), a chronic inflammatory disease. Rapamycin (RAPA) has recently attracted considerable attention as a potent drug to induce atherosclerotic plaque attenuation. The objective of this work was to develop a ligand-decorated, RAPA-loaded liposome for lymphatic-targeted delivery of drugs to improve abnormal lymphatic structure and function, resulting in highly effective regression of atherosclerotic plaques. Methods Hyaluronic acid-decorated, RAPA-loaded liposomes (HA-RL) were fabricated by emulsion-solvent evaporation. The average size, zeta potential, and entrapment efficiency were characterized, and the stability and drug release in vitro were investigated. Furthermore, the in vitro and in vivo lymphatic targeting ability was evaluated on lymphatic endothelial cells and in LDLR−/− mice, and the efficiency of this nano-system in inducing the attenuation of atherosclerotic plaques was confirmed. Results HA-RL had a size of about 100 nm and over 90% drug encapsulation efficiency; storage stability was excellent, and release from the lipid nano-carriers was slow. The mean retention time (MRT) and elimination half-life (t1/2β) achieved with HA-RL were 100.27±73.08 h and 70.74±50.80 h, respectively. HA-RL acquired the most prominent efficacy of lymphatic-targeted delivery and atherosclerotic plaque attenuation, implying the successful implementation of this novel drug delivery system in vivo. Conclusion HA-RL exhibited the most appreciable lymphatic targeting ability and the best atherosclerotic plaque attenuation efficiency, opening a new paradigm and promising perspective for the treatment of arteriosclerosis. Introduction Cardiovascular disease remains a major cause of morbidity and mortality worldwide. 1 Pathologically, the dominating etiology involves atherosclerosis, a chronic inflammatory disease characterized by the formation of lipid plaques in the intima of arteries, which can lead to life-threatening cardiovascular diseases such as coronary heart disease, myocardial infarction and stroke. 2,3 Immunosuppression therapy represents an attractive option. 4 In addition, there is increasing evidence that autophagy occurs in advanced atherosclerotic plaques. 5 Selectively activating autophagic death of macrophages could stabilize fragile and ruptured lesions. 6,7 These findings strongly indicate that multiactive therapeutic agents are favorable for the inhibition of atherosclerosis progression. Many antilipemic or antiplatelet agents are designed to prevent the later consequences of atherosclerosis, but not the process of lesion formation and progression. Rapamycin (RAPA), also known as sirolimus, an effective immunosuppressant that inhibits the mammalian target of rapamycin (mTOR) pathway, has been used clinically in transplantation. 8 Given its anti-inflammatory, anti-proliferation, anti-migration, and autophagy-activating properties, RAPA may have potential as an anti-atherosclerotic agent. 9 RAPA is able to inhibit growth factor-driven smooth muscle cell (SMC) proliferation, and to prevent monocyte recruitment and lipid accumulation in macrophages and SMCs, as well as to stimulate autophagy. [10][11][12] Additionally, it has been shown that RAPA can inhibit inflammatory immune responses, and modulate and stabilize atherosclerotic plaques.
[13][14][15] Nevertheless, current oral RAPA delivery usually leads to significant fluctuations in plasma concentration, severe side effects, and elevated plasma triglyceride (TG) and low-density lipoprotein (LDL) levels. 16 In addition, high-dose RAPA stimulation can inhibit both mTOR complex 1 (mTORC1) and mTORC2 signaling pathways, [17][18][19] while mTORC2 has multiple beneficial functions such as anti-inflammation, promoting endothelial cell survival and migration, and regulating Rit-mediated oxidative stress resistance. [20][21][22] However, considering that atherosclerosis is a chronic inflammatory disease and requires intervention in the long term, sustained delivery of RAPA at low doses is highly desirable for effective atherosclerosis therapy without obvious side effects. 22 Nanomedicine has been deemed a tactic to manage cardiovascular-related diseases, and is also used for diagnosis and treatment by site-specific delivery. [23][24][25][26][27][28][29] Moreover, a nanocarrier drug delivery system (NDDS), as a potent means to optimize drug efficacy, has become an attractive focus in pharmaceutical research. Liposomes, multifunctional vesicles with a phospholipid bilayer structure, have been widely used in the delivery of therapeutic and diagnostic reagents, benefiting from their simple structure and specific targeting ability, especially their capacity for controlled release. They also have high biocompatibility and inherent lymphatic targeting properties, and show low toxicity and immunogenicity. However, one drawback of orthodox nanodelivery systems is the lack of targeted delivery at specific sites. One of the vital mechanisms behind their limited targeting is that the phagocytic system recognizes nanoparticles as foreign, leading to rapid clearance. 30 Therefore, achieving a long half-life in blood circulation and site-specific targeting by engineering nanoparticles with ligands is highly desirable. Surface modification of liposomes with ligands has also been proved to enhance lymph node accumulation by promoting specific receptor-ligand recognition. 31 It has been reported that PEG-modified liposomes might enhance lymphatic drainage by increasing steric stabilization. 31,32 Hyaluronic acid, a negatively charged acidic mucopolysaccharide, has been applied to the surface of nanocarriers as an alternative to PEG, forming a hydrated layer. The spatial stabilization and hydrophilicity conferred by HA could prolong the residence time in vivo and reduce the non-specific delivery of nanoparticles. [33][34][35] Studies have confirmed that atherosclerosis is a chronic inflammatory disease, and activation of lymphatic vessels has been shown to be the crux of resolving chronic inflammation. [36][37][38][39][40] Moreover, numerous hyperplastic lymphatic vessels and smooth muscle cells have been found around atherosclerotic vessels. It is reported that lymphatic vessels are not only involved in the initiation and regression of arterial inflammation, but also play a vital role in reverse cholesterol transport (expulsion of infiltrated cells and macromolecules from the arterial wall). [41][42][43][44] Lymphatic vessels have also been proved to be effective delivery channels for immunomodulators. Relevant studies have shown that HA has a high affinity for the surface receptors of lymphatic endothelial cells (lymphatic vessel endothelial hyaluronan receptor 1, LYVE-1).
38,45,46 In addition, HA also interacts with HARE (Stabilin-2) and CD44 receptors, as well as being associated with other proteins, including intracellular adhesion factors and related binding proteins. 47 Therefore, the construction of nanoparticles responsive to these specific proteins may create an effective pathway for lymph-specific delivery. We hypothesized that reducing the proliferation of lymphatic vessels and smooth muscle cells by lymphatic-targeted delivery of RAPA might be efficacious in attenuating atherosclerotic plaques. A major goal of this work was to develop a lymphatic-targeted nanoliposome delivery system for atherosclerosis therapy (Figure 1). Rapamycin was loaded into liposomes as a model drug. This work aims at: (1) fabricating hyaluronic acid-decorated liposomes via an emulsion-solvent evaporation method and studying their physicochemical characterizations; (2) investigating in vitro release behavior, pharmacokinetics, and biodistribution in vivo; (3) studying the lymphatic targeting efficiency by near-infrared fluorescence imaging and laser confocal scanning microscopy; (4) establishing an atherosclerosis model in LDLR−/− mice and investigating the effects of different RAPA formulations on atherosclerotic plaques. Animals Sprague-Dawley rats (220-250 g) were supplied by the Animal Experimental Center of Guangzhou University of Chinese Medicine. Male LDLR−/− mice at 8 weeks of age were supplied by the Third Affiliated Hospital of Guangzhou Medical University. All animals were acclimated and maintained in animal facilities with temperatures of 21-27°C, relative humidity of 50-60%, and a light/dark cycle of 12/12 h. The animals were fasted for 12 h prior to experiments, with food and water ad libitum. All animal care and experimental protocols were performed according to the guidelines for the Care and Use of Laboratory Animals of Guangzhou Medical University. All animal experiments were approved by the Institutional Animal Care and Use Committee of Guangzhou Medical University (Acceptance number: S2021-099). Fabrication of Hyaluronic Acid-Decorated RAPA Liposomes Efficient solubilization is vital for achieving sufficient doses of hydrophobic RAPA in atherosclerotic lesions. We therefore fabricated liposomes by an emulsion-solvent evaporation and sonication method. Briefly, the oil phase (OP) was composed of 15 mg RAPA and 120 mg phospholipid dissolved in 2 mL dichloromethane (DCM). 20 mg cholesteryl sodium sulfate (CHOS) was ultrasonically dispersed in the aqueous phase (AP, 20 mL Tris-HCl) containing 5% HA-DOPE (phospholipid/HA-DOPE, w/w). Firstly, the OP was mixed with the AP and sonicated at 200 W for 2.5 min to form an emulsion, followed by a 45°C water bath to volatilize the organic solvent. Then probe sonication (100 W, JY92IID ultrasonic processor, Ningbo Xinzhi Biotechnology Co., Ltd, China) was carried out for 3 min in an ice-water bath. The unencapsulated RAPA was removed by filtration through a 0.22 μm cellulose nitrate membrane (Supplementary Material Section S.3. Determination of the trapping rate). The HA-decorated RAPA liposomes (HA-RL) were freeze-dried using a LGJ-10C freeze dryer (Beijing Sihuan Furuike Instrument Technology Development Co., Ltd, China) with 5% (w/v) lactose solution as the freeze-drying protective additive. Moreover, HA quantification was performed by the carbazole assay (Supplementary Material Section S.4. Hyaluronic Acid Quantification).
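As a quick, purely illustrative check of the recipe above, the short sketch below tallies the stated quantities and computes a theoretical drug-to-carrier ratio and drug loading. The numbers come from the recipe, but the calculation (and the reading of "5% HA-DOPE" as 5% of the phospholipid mass) is our assumption, not part of the authors' protocol.

```python
# Illustrative composition check for the HA-RL recipe (not the authors' protocol).
rapa_mg = 15.0           # RAPA in the oil phase
phospholipid_mg = 120.0  # phospholipid in the oil phase
chos_mg = 20.0           # cholesteryl sodium sulfate in the aqueous phase
# Assumption: "5% HA-DOPE (phospholipid/HA-DOPE, w/w)" read as 5% of phospholipid mass.
ha_dope_mg = 0.05 * phospholipid_mg

carrier_mg = phospholipid_mg + chos_mg + ha_dope_mg
drug_to_carrier = rapa_mg / carrier_mg
loading_pct = 100.0 * rapa_mg / (rapa_mg + carrier_mg)

print(f"drug-to-carrier ratio: {drug_to_carrier:.3f} (w/w)")
print(f"theoretical drug loading: {loading_pct:.1f}%")  # roughly 9%
```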
Non-hyaluronic acid-decorated liposomes (RAPA liposomes, RL), without the addition of HA-DOPE, were prepared with the same composition and method as the HA-RL. To trace the cell-uptake behavior of liposomes, the RAPA was replaced by 6-coumarin (HA-C6-LIP and C6-LIP) using the same procedure. DIR-loaded liposomes (HA-DIR-LIP and DIR-LIP) used for the near-infrared fluorescence imaging studies were also produced with a similar procedure (Supplementary Material Section S.5. Fabrication of Coumarin 6/DIR-loaded liposomes). Physicochemical Characterization of RAPA Liposomes Particle Size (nm), Zeta Potential (mV), and Morphologies The mean particle size and ζ-potential of the prepared liposomes were analyzed by the dynamic laser light scattering (DLS) method using a Malvern Zetasizer Nano ZS instrument at 25°C (ZEN3500, Nano ZS Instruments, Worcestershire, UK), repeating the measurement in triplicate. The polydispersity index (PDI) was also acquired by a cumulant analysis of the correlation function. The morphological investigation of the liposomes was performed using transmission electron microscopy (TEM). Fourier Transform Infra-Red Spectroscopy (FT-IR) and X-Ray Diffraction (XRD) Analysis For FT-IR analysis, appropriate amounts of lyophilized RL and HA-RL powders were finely ground with potassium bromide (KBr) and thoroughly mixed. Subsequently, the mixtures were placed carefully in the abrasive tool and pressed into transparent flakes. A similar method was used to generate transparent flakes of RAPA and HA-DOPE samples. Then an FT-IR spectrophotometer was used to record the infrared absorption spectra of the resulting flakes in the spectral region of 400-4000 cm−1, and the characteristic absorption peaks of the samples were analyzed. XRD analysis was also performed to investigate the crystalline or amorphous nature of the prepared liposomes. The diffractograms were recorded in the range of 2θ = 1-50° at a scanning rate of 2°/min. Measurement of Entrapment Efficiency (EE%) of RAPA Liposomes To measure the EE% of the RAPA liposomes, the RL and HA-RL were harvested by microporous membrane filtration (0.22 μm) and centrifugation (15,000 rpm, 15 min, 4°C). 10 μL of the upper liposome solution was added to 900 μL methanol for demulsification and centrifuged at 15,000 rpm for 15 min at 4°C. 10 μL of the supernatant was injected into the High Performance Liquid Chromatography (HPLC) apparatus for analysis. A C18 analytical column (Diamonsil® C18 Column, 4.6 mm×250 mm, 5 μm, Dikma Technologies Inc., China) was used for detection and the chosen wavelength was 278 nm. RAPA was well separated using an optimized mobile phase of methanol/acetonitrile/water (43/40/17, v/v/v) at a flow rate of 1.0 mL/min with a running time of 20 minutes (Supplementary Material Section S.6 HPLC methods for in vitro quantitation of RAPA). In addition, the results of methodological validation met the requirements for RAPA measurement (Figure 1S, Tables 1S-6S). The EE% was calculated using the following equation: EE% = (W_encapsulated / W_total) × 100%, where W_encapsulated is the amount of RAPA measured in the liposomes after removal of the free drug and W_total is the total amount of RAPA added. Stability of Liposomes in vitro The in vitro stability of RL and HA-RL was evaluated by determining the amount of drug or entrapment efficiency (EE%) and the particle size. In particular, RAPA-loaded liposomes were added to Tris-HCl (pH 7.4) or 50% plasma (1:1, v/v) and the samples were incubated at 37°C with gentle shaking.
At predetermined times (0, 1, 2, 4, 8, 12, and 24 h), the particle size and drug amount were measured by the methods mentioned in Section 2.3. Meanwhile, the prepared liposomes were stored at 4°C and examined at 0, 7, 15, and 30 days to evaluate storage stability. Release Studies in vitro A known quantity of RAPA liposome lyophilized powder (RAPA concentration, 0.8 mg/mL) was reconstituted in 2 mL of fresh pH 7.5 PBS. The resulting dispersion was placed into a dialysis tube (MWCO of 12,000-14,000 Da), which was submerged into 100 mL release medium (0.01 M PBS with 0.5% SDS, pH 7.5, 6.5, 5.5). The beaker was vibrated continuously at 100 rotations/min at 37 ± 0.5°C using an orbital shaker bath. At predetermined time intervals (1, 1.5, 2, 3, 4, 6, 8, 10, 12, 24, 36, and 48 h), 1 mL of the dissolving medium was withdrawn and replaced with 1 mL of fresh release medium to maintain sink conditions. 1 mL DCM was added to each sample, and the extraction mixture was vortexed thoroughly; the supernatant (aqueous phase) was then removed after standing and stratification. The residue was evaporated to dryness under a vacuum stream and reconstituted with 0.1 mL methanol prior to analysis. The amount of RAPA released into the dissolving medium was quantified by HPLC. Cell Uptake and in vitro Assessment of the Lymphatic Targeting Property To evaluate the lymphatic targeting of the hyaluronic acid-decorated liposomes, laser confocal scanning microscopy (LSCM) was used to observe the phagocytosis of liposomes. Lymphatic endothelial cells (LECs) were inoculated into 6-well plates (10 × 10^4 cells/well). After incubation for 24 h, the medium was replaced by fresh medium containing C6 solution, C6-LIP, or HA-C6-LIP (6-coumarin [C6] = 5 μg/mL) and incubated for 1, 2, and 4 h. Notably, the LYVE-1 receptor blocking group was cultured in medium containing HA solution ([HA] = 0.4 mg/mL) for 4 hours before adding HA-C6-LIP, to saturate the receptors on the cell surface. Subsequently, the medium was removed and the cells were fixed with 4% paraformaldehyde for 15-20 min, stained with DAPI (2.5 μg/mL) and washed thrice with sterile PBS. LSCM imaging (Nikon, Japan) was performed to observe cell uptake and the fluorescence distribution of DAPI and C6. Bioanalytical LC-MS/MS Methods for RAPA Plasma samples (100 μL) or tissue samples (200 μL, 20% w/w) were mixed with 500 μL of ice-cold methanol containing roxithromycin (100 ng/mL). After vortexing for sufficient extraction, the samples were centrifuged at 16,000 rpm for 15 min at 4°C. The supernatant was transferred into a new tube and evaporated to dryness under vacuum. The residue was reconstituted in 100 μL of methanol and centrifuged at 16,000 rpm for 15 min at 4°C. An aliquot of 10 μL of the supernatant was injected into the liquid chromatography-mass spectrometry (LC-MS/MS) apparatus for analysis (Supplementary Material Section S.7 Bioanalytical LC-MS/MS methods for RAPA). Meanwhile, the methodological evaluation of LC-MS/MS met the requirements for RAPA analysis in biological samples (Tables 7S-10S). In vivo Pharmacokinetics Eighteen SD rats were randomly divided into three groups. HA-RL, RL, and RAPA solution (RSM) were administered to the rats via tail vein injection at 2.0 mg/kg of RAPA.
At predetermined time points (0.083, 0.1667, 0.25, 0.3333, 0.5, 1, 2, 4, 8, 12, and 24 h), blood samples were collected from the 18 animals and stored at −80°C until analyzed by the method described in the "Bioanalytical LC-MS/MS methods for RAPA" section. The pharmacokinetic parameters were calculated with the Drug and Statistics 2.0 (DAS2.0) software using a two-compartment analysis model. In vivo Evaluation of Lymphatic Targeting Efficiency and Biodistribution To evaluate lymph node retention, HA-DIR-LIP, DIR-LIP, and free DIR solution were injected subcutaneously at 20 nmol of DIR (a near-infrared fluorescent dye). Mice were sacrificed and the lymph nodes (inguinal LNs, abdominal LNs, axillary LNs, cervical LNs, popliteal LNs) were removed after 3 h. An in vivo imaging system was used to observe the fluorescence of the lymph nodes, and the radiant efficiency was also semi-quantitatively analyzed. Similar procedures were followed to examine the biodistribution and targeting efficiency of the HA-decorated drug delivery systems. The LDLR−/− mice were administered the formulations via the tail vein at a single dose of 2.0 mg/kg. Mouse lungs, hearts, livers, spleens, kidneys, and lymph nodes were collected at 2, 4, 6, 8, and 12 h after injection. The tissues were accurately weighed and stored at −80°C until analysis. Experimental Anti-Atherosclerosis in LDLR−/− Mice LDLR−/− mice were given a normal diet containing 0.25 wt% cholesterol and 2 wt% lard for 3 months. The atherosclerosis model was then established after 3 months of a hyperlipid diet containing 1.5 wt% cholesterol and 15 wt% lard. Thirty-six mice were randomly divided into six groups (n = 6) and subjected to different treatments every 2 days for an additional 4 months: one model group received saline; one group was treated with RSM via the tail vein at 2.0 mg/kg of RAPA; one group received RL by intravenous injection at 2.0 mg/kg of RAPA; three groups were treated with intravenous injection of HA-RL at 1.0 (low dose, HA-RL/L), 2.0 (medium dose, HA-RL/M), and 4.0 (high dose, HA-RL/H) mg/kg of RAPA, respectively; in addition, one control group was also included. Analysis of Vascular Atherosclerotic Plaques in LDLR−/− AS Model Mice Blood was collected in heparinized anticoagulant tubes from the eyeballs of the mice after treatment for 2 months, and stored at −80°C for biochemical analysis after centrifugation at 3000 rpm for 10 min. After the mice were sacrificed, the aortic vessels were obtained and opened longitudinally, the residual blood was washed out, and the vessels were stained with Sudan Red IV staining solution for 10 min. The atherosclerotic plaques were observed under a stereo microscope and photographed. An image analysis system was used to accurately delineate the contours of the whole vessel and the plaque, calculate the area of the whole vessel (A) and the plaque area (A0), and obtain the ratio of plaque area to whole vessel area (A0/A). The aortic vessels were fixed in 10% formalin, then embedded in paraffin and sectioned. Atherosclerotic plaques were analyzed by hematoxylin-eosin (H&E) staining of 6-μm sections under an inverted fluorescence microscope (Nikon ECLIPSE Ti2). Analysis of the Changes of Perivascular Lymphatic Vessels A "whole mount immunofluorescence staining" method was used to observe the lymphatic distribution around the blood vessels. In brief, vessels were immersed in ice-cold 4% PFA solution and fixed at 4°C. Subsequent tissue clearing, including dehydration and bleaching, was performed.
After dehydration again and washing, the samples were blocked with 6% goat serum for 24 h and incubated with 3% goat serum containing primary antibody (LYVE-1 rabbit antibody) for 6 days, followed by incubation with secondary antibody diluent for 6 days. The samples were gradually dehydrated with methanol at different proportions (20%, 40%, 60%, 80%, 100%) for 1 h each. The samples were then incubated in dibenzyl ether (DBE) until clarified and stored at room temperature for fluorescence imaging by LSCM. Statistical Analysis Data are expressed as the mean ± standard deviation (SD). Statistical analysis was executed using one-way analysis of variance (ANOVA) in GraphPad Prism 9.0 statistical software. P values less than 0.05 were considered significant. Characterization of Liposomes and Stability Evaluation Analysis showed that the RAPA-loaded liposomes formed a translucent liquid with a pale blue opalescence and presented an appropriate particle size (Figure 2A and B). Particle size is the most important factor affecting lymph node uptake, and the pertinent size range is 10-100 nm. 48,49 Herein, we reduced the particle sizes by ultrasonication and continuous extrusion, resulting in a uniform particle size distribution and a smaller PDI, which could reduce differences in uptake efficiency caused by a heterogeneous particle size distribution. The mean particle sizes, PDI, zeta potentials and EE% of the liposomes are listed in Table 1. Particle sizes of liposomes with HA (size: 99.11±0.220 nm, PDI = 0.116±0.010) were slightly larger than those of liposomes without HA (size: 79.85±0.854 nm, PDI = 0.149±0.008), which indicates that HA-DOPE had been embedded into the lipid bilayer. Good drug encapsulation efficiency is required for the fabrication of a successful nano drug delivery system (NDDS); an EE over 90% proved the good loading ability of the liposomes. The findings showed that the EE of RL was 93.11±1.39% and that of HA-RL was 94.88±2.08%. The zeta potential of RL was −41.4±2.73 mV and that of HA-RL was −54.9±4.90 mV, which revealed that HA had successfully embedded into the phospholipid bilayer structure of the liposomes. Besides, the results of HA quantification analysis showed that the binding rate of HA-DOPE to the phospholipid bilayer was as high as 93.37% (Table 11S). The zeta potentials of all liposomes are negative due to the presence of negatively charged CHOS and HA. An absolute ζ-potential higher than 30 mV has been reported to make liposomes repel each other, thereby avoiding particle aggregation and maintaining the long-term stability of liposomes. Observation of the liposomes under transmission electron microscopy revealed morphologically regular spherical nanostructures, as shown in Figure 2C. Plasma stability testing was performed to simulate the in vivo hemocompatibility of the liposomes. The adsorption of proteins on the liposomes could cause aggregation and lead to an increase in particle size. The size and drug content showed no evident change over 24 h, indicating the good stability of the liposomes in plasma (Figure 2D). In addition, no visible size and drug content changes in the two types of liposomes were observed after incubation for 24 h in PBS (pH 7.4). The results also showed no significant changes in size and EE% after 30 days of storage at 4°C, implying excellent storage stability of the prepared liposomes (Figure 2E). These findings indicated the potential application and appropriateness for in vitro/in vivo studies.
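The Statistical Analysis subsection above specifies one-way ANOVA with a P < 0.05 significance threshold in GraphPad Prism. For readers without Prism, a minimal Python equivalent is sketched below; the triplicate group values are invented placeholders, not data from this study.

```python
# Hypothetical one-way ANOVA mirroring the analysis described above;
# the group values are invented placeholders, not measurements from this study.
from scipy import stats

rsm   = [0.92, 1.05, 0.98]   # e.g. triplicate readouts for each formulation
rl    = [1.80, 1.75, 1.91]
ha_rl = [4.40, 4.62, 4.51]

f_stat, p_value = stats.f_oneway(rsm, rl, ha_rl)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("difference between groups is statistically significant")
```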
FT-IR and XRD Analysis The FT-IR spectra of RAPA, HA-DOPE, RL, and HA-RL are shown in Figure 2F. For RL and HA-RL, the characteristic peak of hydrogen bonding (3416.89 cm−1, -OH stretching) showed enhanced absorption intensity, and most of the characteristic peaks shared relatively similar absorption positions compared with the RAPA and HA-DOPE standards. Therefore, these results confirmed that HA had been successfully modified onto the surface of HA-RL. Furthermore, the amorphous nature of the RL and HA-RL complexes was revealed when X-ray diffraction (XRD) was performed (Figure 2G). In vitro Release To understand the release mechanism of RAPA from HA-RL and RL, a dynamic dialysis method was used for the in vitro drug release studies. The in vitro drug release behavior was investigated in different environments at 37°C, simulating endosomal pH (5.5), tumor microenvironment pH (6.5), and physiological conditions (pH 7.4), respectively. As presented in Figure 2H, significant RAPA release from RL and HA-RL was detected at pH 5.5, while relatively less release occurred at pH 6.5 and 7.4. There was a sharp release from the liposomes in the first 2 h at pH 5.5, known as burst release, which resulted from RAPA molecules adsorbed on the outer phospholipid layer of the liposomes. Approximately 40% of the RAPA in the liposomes was released during the first 2 h, implying acid-triggered release of RAPA from the liposomes. The results also demonstrated that RL showed a biphasic release pattern with a slightly faster release rate, followed by a sustained release phase, without an initial burst at pH 6.5 and 7.5. The release profile showed that about 40% of the RAPA was released in the first 12 h, followed by a sustained release of nearly a further 10% of the drug content up to 48 h at pH 7.5, while this amount was around 60% at pH 6.5 in the first 12 h. Additionally, HA-RL also demonstrated a biphasic release pattern, with the subsequent phase exhibiting sustained release. In particular, comparing the release profiles of RL and HA-RL revealed that HA modification prolonged the release of RAPA from the liposomes. HA-RL exhibited slower drug release, and the total amount of RAPA released from HA-RL was less than that from RL after the same time. This is similar to a previous report that HA modification generates an additional barrier to drug diffusion, reducing the fluidity of the bilayer as well as membrane permeability, and thereby decreasing the release rate of hydrophobic RAPA molecules. As for RSM, the release percentage of RAPA increased gradually at a slower rate, with a cumulative release of only about 10% in the first 12 h, followed by even slower release reaching nearly 20% cumulative release at 48 h. Moreover, the release behavior of RAPA from RSM was generally consistent at pH 7.5, 6.5, and 5.5, with no difference, which might be because RAPA is a lipophilic drug with low solubility and was thus difficult to release into the medium. In vivo Pharmacokinetic and Biodistribution Studies The blood concentration-time profiles and pharmacokinetic parameters of RAPA after intravenous administration of the different preparations are summarized in Figure 3A and Table 2. The blood concentration in rats rapidly reached its peak after tail vein administration, then gradually decreased and leveled off. Meanwhile, the plasma concentration-time data were fitted with a weight of 1, and the pharmacokinetics of RSM, RL, and HA-RL in rats conformed to a two-compartment model.
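To make the two-compartment parameters reported below (such as t1/2β and MRT) concrete, a minimal curve-fitting sketch is given here. For an IV bolus two-compartment model, C(t) = A·e^(−αt) + B·e^(−βt), with t1/2β = ln2/β, AUC = A/α + B/β, AUMC = A/α² + B/β², and MRT = AUMC/AUC. The concentration-time data below are hypothetical, and the weighting scheme used by DAS2.0 is not reproduced.

```python
# Minimal two-compartment (biexponential) IV-bolus fit; the concentration-time
# data are hypothetical and the DAS2.0 weighting (weight = 1) is not reproduced.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A, alpha, B, beta):
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

t = np.array([0.083, 0.25, 0.5, 1, 2, 4, 8, 12, 24])     # h
c = np.array([950, 620, 420, 260, 150, 95, 55, 35, 12])  # ng/mL (invented)

(A, alpha, B, beta), _ = curve_fit(biexp, t, c, p0=[800, 2.0, 150, 0.1])
if beta > alpha:                      # make beta the slow (elimination) phase
    A, alpha, B, beta = B, beta, A, alpha

auc  = A / alpha + B / beta           # area under the curve, 0..infinity
aumc = A / alpha**2 + B / beta**2     # area under the first-moment curve
print(f"t1/2beta = {np.log(2) / beta:.1f} h")
print(f"MRT      = {aumc / auc:.1f} h")
```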
As the results show, the clearance of RSM from the blood circulation in rats was 2.1-4.5-fold faster than that of the liposomal preparations. Consequently, all liposomal preparations were effective in enhancing systemic exposure and prolonging blood circulation, and also boosted the efficiency of lymphatic-targeted delivery of RAPA. In addition, the AUC of RL and HA-RL was 1.8-fold and 4.6-fold higher, respectively, than that of RSM (**P<0.01). Notably, the mean retention time (MRT) of HA-RL was significantly extended, being 4.8-fold greater than that of RSM (*P<0.05). Furthermore, the liposomal formulations also enhanced the elimination half-life (t1/2β) of RAPA by 2.1-4.5-fold due to the decreased clearance, which effectively addresses the problem of rapid metabolism of RAPA, thereby improving its bioavailability. On the other hand, a non-invasive NIR fluorescence agent (DIR) was introduced to investigate the biodistribution of the various formulations in LDLR−/− mice. As illustrated in Figure 3B, the retention of RL or HA-RL in LNs was dramatically higher than that of RSM, which may be because the natural lymphatic targeting of liposomes and the overexpression of LYVE-1 on lymphatic endothelial cells resulted in more internalization of HA-decorated liposomes via receptor-mediated endocytosis. However, the liposomal formulations also manifested high fluorescence intensity in the liver and spleen, and the fluorescence intensity and the amount of RAPA in the liver and spleen in the HA-RL group were slightly higher than those in the RL group, which may be due to the inevitable phagocytosis of the nanoparticles by the reticuloendothelial system, as well as the inherently high blood perfusion rate of the liver and the high expression of other HA receptors in the liver. Moreover, the retention of RAPA in the lung was the highest of all tissues, owing to the pulmonary circulation, after administration of the different rapamycin preparations via tail vein injection. In addition, RAPA in the liver and spleen of the HA-RL and RL groups reached its maximum at 6 h, when there was less distribution in tissues other than the lymph nodes, especially in the lung (Figure 3C). Lymphatic Targeting Ability in vitro and in vivo Uptake of HA-LIP by lymphatic endothelial cells was evaluated by LSCM. As illustrated in Figure 4A, the amount of C6-loaded liposomes taken into the cytoplasm of the LECs was higher than that of free C6 after incubation at 37°C for 1, 2 and 4 h. The fluorescence intensity at 2 h and 4 h was stronger than that at 1 h, indicating that the uptake of liposomes by LECs increased with time, showing a time dependence. Notably, it can be clearly seen that most of the HA-C6-LIP was engulfed by the cells and tightly dispersed on the cell membrane, with some even entering the cytoplasm. However, the fluorescence intensity decreased at 4 h, which we hypothesized was due to exocytosis of the liposome nanoparticles as foreign material once phagocytosis by the LECs had reached saturation. Moreover, the fluorescence intensity of HA-C6-LIP was stronger than that of C6-LIP without HA modification. This indicated that HA-C6-LIP acquired a high homologous targeting ability.
Interestingly, the uptake of liposomes by the cells was reduced after the cells were pre-incubated with pure HA solution for 4 h, blocking the LYVE-1 receptor on the surface of the LECs, which proved that the liposomes entered the cells via LYVE-1 receptor-mediated endocytosis. These findings indicated that the HA-based liposomes had been successfully constructed, and that this nanodelivery system has great potential for lymphatic targeting. Liposomes with a particle size between 10 nm and 100 nm drain effectively to lymph nodes. Encouraged by the excellent lymphatic targeting efficacy of the HA-modified liposomes in vitro, their targeting efficacy in vivo was examined after s.c. administration. We assessed LN accumulation of free DIR, DIR-LIP, and HA-DIR-LIP using a whole-animal fluorescence imaging system at the tissue level. As illustrated in Figure 4B and C, potentiated DIR fluorescence intensity could be seen in the LNs (inguinal LNs, axillary LNs, abdominal LNs, cervical LNs, popliteal LNs) after subcutaneous injection of the DIR-loaded liposomes. On the contrary, only a slight fluorescence signal was observed in the LNs after s.c. injection of free DIR, which indicates that drug loaded into a nanocarrier can effectively drain to the lymph nodes (****P<0.0001). Furthermore, HA decoration further enhanced the accumulation of liposomes in the lymph nodes through an active targeting effect. The findings proved that the fluorescence intensity of HA-DIR-LIP was higher than that of the liposomes without HA decoration (**P<0.01). Moreover, the biodistribution results were similar to those of the lymph node fluorescence imaging; when RAPA in the lymph nodes was measured, the amount in the liposome groups, HA-decorated or not, was significantly higher than that in the RSM group (****P<0.0001). In addition, consistent with the increased fluorescence signal of the lymph nodes in the HA-RL group, we also found that the amount of RAPA was notably higher than that in the RL group (*P<0.05). The above findings further confirmed that HA-RL can achieve lymphatic targeting more efficiently. RAPA Effectively Attenuated the Growth of Atherosclerotic Plaques RAPA has the potential to induce the attenuation of atherosclerotic plaques, as well as pleiotropic anti-atherosclerotic protective functions. 13 Thus, to examine whether RAPA inhibits the growth of atherosclerotic plaques, we treated the LDLR−/− AS mouse model with the drug in solution or entrapped in the nanocarrier or HA-nanocarrier and investigated the effects of the different formulations on the atherosclerotic plaques after treatment for 2 months. Representative images of vascular atherosclerotic plaques and H&E-stained sections of the aortic vessels are reported in Figure 5. As shown in Figure 5A, Sudan Red IV staining showed atherosclerotic plaque formation and uneven thickness of the aortic wall. The area of atherosclerotic plaques in LDLR−/− mice given a hyperlipid diet for 3 months was much larger than that of the control group, which indicates the atherosclerosis model was successfully established. However, a significant reduction of atherosclerotic plaques was found after treatment with RAPA (P<0.01) (Figure 5B); these findings are also in agreement with the results of the H&E-stained sections (Figure 5C). The plaque areas in the HA-RL groups (P<0.0001) were markedly smaller than that in the RL group. Similarly, the total degree of vascular lumen stenosis in the HA-RL groups was smaller than that in the RL group and the RSM group, respectively.
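For the plaque quantification above, the ratio A0/A was obtained with an image analysis system after delineating the vessel and plaque contours. The sketch below illustrates the same ratio computed from binary masks, assuming the outlines have already been segmented; the masks here are synthetic stand-ins, not the study's images.

```python
# Illustrative computation of the plaque-to-vessel area ratio A0/A from binary
# masks; the masks are synthetic stand-ins for segmented Sudan-red images.
import numpy as np

h, w = 200, 600
yy, xx = np.mgrid[0:h, 0:w]
vessel_mask = (yy > 40) & (yy < 160)  # whole-vessel region (toy rectangle)
plaque_mask = vessel_mask & ((xx - 150) ** 2 + (yy - 100) ** 2 < 45 ** 2)  # one lesion

A = vessel_mask.sum()    # whole vessel area in pixels
A0 = plaque_mask.sum()   # plaque area in pixels
print(f"plaque/vessel area ratio A0/A = {A0 / A:.3f}")
```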
Taken together, our findings showed that treatment with RAPA resulted in a conspicuous change in the atherosclerotic plaque area. Based on the experimental evidence that the hyaluronic acid-decorated rapamycin liposomes could achieve a more effective reduction in atherosclerotic plaque area, we hypothesized that liposomes modified with HA ligands may play a crucial role in the development of lymphatic-targeted delivery systems. To further verify our hypothesis that lymphatic-targeted delivery of RAPA might be effective in reducing the proliferation of lymphatic vessels, a whole-mount immunofluorescence staining test was used to analyze the perivascular lymphatic vessels (Figure 5D). The immunofluorescence imaging of the lymphatic vessels showed that there were a large number of proliferating lymphatic vessels around the vessels of LDLR−/− AS mice; however, a significant reduction of the lymphatic vessels was found after RAPA treatment. Interestingly, we noticed that administration of the actively targeted RAPA liposomes caused a more conspicuous regression of lymphatic vessels, which further confirmed that the hyaluronic acid-modified liposomes could target the lymphatic vessels more effectively. Effects of RAPA on Lipids, Lipoproteins, and Inflammatory Cytokines Atherosclerosis is a chronic inflammatory disease, accompanied by the generation of a variety of vascular growth factors, cytokines, and chemokines, and also involves the metabolic abnormality of lipoproteins or lipoidosis. 38 To investigate the effect of RAPA on serum lipids and inflammatory cytokines, TCH, TG, LDL, HDL, and major cytokines including MCP-1, IL-1β, TNF-α, MMP-3, and MMP-9 were measured. As shown in Figure 6A, a significant decrease of TCH, LDL, and TG in the LDLR−/− AS model mice was found in both the RL group and the HA-RL groups, while HDL was distinctly increased after administration. Furthermore, the effect of increasing the level of HDL in the HA-RL/M group was obviously stronger than that in the RL group (3.25±0.17 mmol/L vs 5.71±0.21 mmol/L, n = 6, P<0.0001). These results suggested that RAPA can effectively ameliorate lipid metabolism, reducing lipid deposition. The above findings of significant changes in lipid metabolism prompted us to further investigate whether treatment with RAPA induces inflammatory cytokine changes in serum (Figure 6B). Interleukin-1β (IL-1β) is a critical inflammatory factor mainly produced by activated mononuclear macrophages, 50 and is considered to be a major regulator of the atherosclerotic inflammatory response. The results showed that the levels of IL-1β in the RAPA treatment groups (RL, HA-RL) were evidently lower than those in the model group (P<0.0001). As a potent inflammatory factor, tumor necrosis factor α (TNF-α) is a key mediator of various inflammatory reactions as well as an important factor in the formation of AS. Similar to what was observed for IL-1β, the RAPA treatment groups (RL, HA-RL) had a significantly greater decrease in TNF-α levels than the model group (P<0.0001). Foam cell formation is the basis and crux of atherosclerosis. Monocyte chemoattractant protein-1 (MCP-1) is hardly expressed in the normal vascular wall, but its expression is increased in the different stages of atherosclerosis, where it mediates the whole process of monocytes from adhesion and migration to foam cell formation.
51 Therefore, it is important to examine whether the expression of MCP-1 in serum is affected by RAPA treatment. The results revealed that the levels of MCP-1 were significantly decreased after administration, which strongly suggests that RAPA can inhibit the expression of MCP-1 and thus exert an anti-atherosclerosis effect. In addition, the levels of MMP-3 in the HA-RL groups were markedly lower than that in the RL group; among the mice treated for 2 months, the relative decrease in the level of MMP-9 in the HA-RL groups was 0.11 times that of the RL group. MMP-3 and MMP-9 are matrix metalloproteinases secreted by macrophages, neutrophils, and smooth muscle cells, 52 and are highly expressed in the vulnerable areas of atherosclerotic plaques. They can degrade the basement membrane of the vascular wall and the extracellular matrix, resulting in fibrous cap rupture and increased plaque instability. The results proved that RAPA could reduce the expression of MMP-3/9, stabilize atherosclerotic plaques and induce plaque attenuation, thereby ameliorating the progression of atherosclerosis. Conclusion In this study, a novel pharmaceutical delivery system potentially useful for lymphatic-targeted drug delivery in the modulation of atherosclerosis has been developed. In detail, an advanced drug carrier for RAPA delivery was designed to disperse efficiently in an appropriate medium, resulting in colloidally dispersed nanoparticles ready for in vivo delivery. For this purpose, a pertinent phospholipid material was introduced and surface-engineered liposomes were produced by the emulsion-solvent evaporation method. During this process, hyaluronic acid, a site-specific ligand with a high affinity for the LYVE-1 receptor of lymphatic endothelial cells, was also incorporated to obtain a lymphatic-targeted delivery system. In effect, the obtained nanoparticles have a particle size of less than 100 nm, propitious to lymphatic-targeted delivery, and entrapped large amounts of RAPA in amorphous form. Furthermore, the prepared liposomes remained stable at 4°C, and the lyophilized powder could be reconstituted by dispersion in an appropriate medium, with properties practically comparable to those of the initial liposomes, protecting the encapsulated drug from degradation and releasing it in a sustained manner. Cytological studies on lymphatic endothelial cells, in which a phagocytosis assay confirmed liposome internalization, verified the efficiency of LYVE-1 targeting in vitro. Furthermore, the drug entrapped in the actively targeted HA-decorated liposomes showed greater cell uptake than the liposomes without HA, let alone the free drug. In addition, the results of the in vivo studies are momentous, as they demonstrated that drugs embedded in nanocarriers can efficiently drain to LNs, an effect even more pronounced with HA-nanocarriers. Furthermore, it is widely known that activation of lymphatic vessels is the crux of resolving chronic inflammation and limiting acute inflammation, and that the lymphatics not only participate in the initiation and resolution of inflammation, but also play an active role in reverse cholesterol transport. In this work, we provide significant and compelling evidence that the lymph-targeted liposomes could reduce the proliferation of lymphatic vessels, as well as reverse and stabilize the atherosclerotic plaques, as shown when MMP-3/9 in serum was measured. Moreover, except for a slight decrease in TCH, the treatment had little effect on TG while raising HDL.
These results, taken together, demonstrate that RAPA liposomes, HA-decorated or not, had lymphatic targeting characteristics and the potential to induce regression of atherosclerotic plaques, providing some inspiration for the development of an efficient NDDS for the treatment of atherosclerosis.
2023-08-03T15:10:36.507Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "444163ebf5f4042a0e22c9714103228d7a160c94", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=91663", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d78a7bd9292fcb48ad6778af890701f222f292f3", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2838480
pes2o/s2orc
v3-fos-license
A call for global governance of biobanks Abstract The progress in genomic research has led to increased sampling and storage of biological samples in biobanks. Most biobanks are located in high-income countries, but the landscape is rapidly changing as low- and middle-income countries develop their own. When establishing a biobank in any setting, researchers have to consider a series of ethical, legal and social issues beyond those in traditional medical research. In addition, many countries may have inadequate legislative structures and governance frameworks to protect research participants and communities from unfair distribution of risks and benefits. International collaborations are frequently being created to support the establishment and proper running of biobanks in low- and middle-income countries. However, these collaborations cause cross-border issues, such as benefit sharing and data access. It is thus necessary to define and implement a fair, equitable and feasible biobank governance framework to ensure a fair balance of risks and benefits among all stakeholders. Introduction The introduction of genomic technology has led to a biomedical revolution. Whole-genome sequencing and genome-wide association studies have become powerful tools to investigate environmental, genetic, social and behavioural determinants of human diseases. 1,2 Many countries have set up biobanks to collect human biological samples and their associated data for genomic research and public health purposes. To maximize the utilization of biobanking resources, regional and transnational biobank networks, such as BBMRI-ERIC (Biobanking and Biomolecular Resources Research Infrastructure), the International HapMap Project and the International Cancer Genome Consortium, have been established. [3][4][5] Although genetics and genomics have contributed to a better understanding of the causes and mechanisms of human diseases, some researchers are concerned that genetic research conducted to date has mainly focused on the health needs of high-income countries, thus increasing health inequity between people in poor and rich nations. [6][7][8] Low- and middle-income countries are benefiting less than high-income countries from the applications of epidemiological and genetic research. It has been suggested that this disadvantage could partly be attributable to the lack of biobanks and large cohort studies in poorer countries. 9 To find indigenous solutions for health improvement, biobanks have recently been set up in several developing countries (e.g. China, Gambia, India and Mexico). [10][11][12][13][14] The establishment and proper running of a biobank can be perceived as an overwhelming task, since researchers have to consider a series of ethical, legal and social issues, such as informed consent, benefit sharing, confidentiality, ownership, commercialization and public participation. [15][16][17][18] Building transnational biobank networks is even more difficult, as these require sharing of samples and interoperability of data within a mutually applicable ethical and legal framework. However, such frameworks differ between countries. 19 Compared with the situation in high-income countries, where the ethical, legal and social issues of biobanks have been debated, researchers in low- and middle-income countries are less experienced in coping with these issues. 12,13,20,21 The fear of exploitation, i.e.
unfair distribution of risks and benefits, makes many low- to middle-income countries hesitant about foreign researchers accessing and using their human biological samples and associated data. [22][23][24][25] Furthermore, research participants may sometimes not be fully aware of the risks of participation. 22,25 Therefore, the proliferation of biobanks in low- and middle-income countries has led to ethical, cross-border and benefit-sharing issues not witnessed in other areas of human research, owing to local culture, religious beliefs and poor awareness of developed countries' concept of ethics. 26 These issues may have a negative impact on international research collaborations. In this paper, we argue that it is important to develop a governance framework at the global level to guarantee equity, fairness and justice in biobank collaboration between developing and developed countries. Biobanks in developing countries Biobanks currently exist on every continent, including Antarctica, with most located in North America and Europe. 27 However, this landscape is changing rapidly. 4,14 Some countries, including China, Gambia, Jordan, Mexico and South Africa, have put great effort into building their own biobanks and biobanking networks. [10][11][12][13][14] In Table 1, we present the aims of biobanks with publicly available information and how these biobanks are funded. All the selected biobanks have partnered with facilities in high-income countries. The Kadoorie Study of Chronic Disease in China and the Mexico City Prospective Study collaborate with Oxford's Clinical Trial Service Unit and Epidemiological Studies Unit. 28,29 The KHCCBIO project in Jordan, which will collect cancer specimens within the country and from its neighbouring countries, collaborates with Trinity College Dublin, Biostór Ireland and Accelopment AG, Switzerland. 30 The centralized Gambian National DNA Bank was created with help from the Centre d'Etude du Polymorphisme Humain, an international genetic research centre located in Paris, France. 31 Human Heredity and Health in Africa (H3Africa) is based on a partnership among the African Society of Human Genetics, the National Institutes of Health in the United States of America and the Wellcome Trust. 20,32,33 The establishment of biobanks is an important step towards building national genomics research programmes. However, the development of biobanks can face challenges.
Maintaining these biobanks and producing effective scientific outcomes based on the biobanking resources are not easy without a proper framework and the capacity to manage biobanks. In addition, some countries – such as China and South Africa – lack adequate legislative structures and governance frameworks that regulate the use and development of biobanks. 20,29

Cross-border issues Facilities in high-income countries conducting human genetic research may see an advantage in examining human samples from populations with rich genetic diversity in low- and middle-income countries. The samples can either be shipped from the biobank in the low- or middle-income country to the research facility, or researchers can come and collect the samples from the biobank. 21 These cross-border flows of biological samples and data are troublesome for low- and middle-income countries, since many of these countries have poor or absent medical and patent laws and/or regulatory frameworks. The lack of legislative structures makes both countries and their people vulnerable to exploitation. 25 In December 2000, The Washington Post published a six-part series titled The Body Hunters that surveyed research subjects in China, Africa and Latin America. The research subjects claimed they did not receive the expected benefits – such as health care services – when participating in medical research led by high-income countries. 34 There have also been reports of researchers from high-income countries collecting blood samples from Hagahai people in Papua New Guinea, Havasupai people in Arizona, United States of America, and the Karitiana people in Brazil without securing proper informed consent. The participants reported that they were disappointed not to receive the benefits they expected and felt they deserved, such as financial compensation and medicines. 24 In India, although the government issued regulations against biopiracy in 2002, these were poorly implemented and biological samples are still shipped abroad for studies without the proper approval from authorities. 35 A systematic review of all human genetic studies using Cameroonian deoxyribonucleic acid (DNA) samples published between 1989 and 2009 found that only 14% of Cameroonian institutions and 28% of Cameroonian authors were associated with any of the identified 50 articles. Moreover, very few studies were on the most common genetic diseases in African populations. Almost all of the Cameroonian DNA samples are stored outside Africa. 21 In genetic research, benefit-sharing issues are usually central when it comes to possible exploitation cases. In general, benefits can be shared at two levels: (i) at an individual level; and (ii) at a community, tribe or national level. 36 Benefits can also be shared directly and indirectly. Direct benefits include access to medical care for the participating research subjects and/or communities. Indirect benefits include research-capacity building, such as publications, fund-raising, research staff training and development of a stronger scientific culture. Data sharing in genomic research and human biobanks comprises one form of benefit sharing, even though there are issues with data sharing – such as who owns the data, which third parties can benefit and who decides what can be shared. Researchers may also gain financial benefits, personal recognition and reputation through access to and commercialization of biobanking resources, which could potentially violate the interests of research participants.
37 Unfair benefit-sharing with local participants and communities may constitute exploitation, and contribute to a public distrust of biomedical research. In addition, poor consent procedures and inadequate engagement, both at an individual and a community level, complicate the relationship between researchers and participants. 23,24 Benefit-sharing issues in cross-border flows of samples and data have previously been debated. Several studies have discussed and made suggestions for fair benefit-sharing in genetic research collaboration between countries. 23,25,29,36,38,39 However, the conceptual and practical problems of benefit-sharing remain unsolved. Some international organizations have developed ethical and legal policies to promote benefit sharing and data access – such as the Human Genome Organization Ethics Committee's statement on benefit sharing. However, these organizations and policies provide inconsistent and incomplete frameworks, and none of them possess supra-national status, authority or enforceability. 37

Global governance of biobanks Low- and middle-income countries have weaker research capacity and governance mechanisms for biobanks than high-income countries. 12,40 It is important to develop a feasible and equitable governance framework at the global level to guarantee benefit sharing in biobank collaboration. The potential commercial benefits resulting from access to the data of biobanks underscore the urgent need for such a framework. International initiatives – such as the Public Population Project in Genomics and Society, 41 the International Society for Biological and Environmental Repositories 42 and the International Agency for Research on Cancer 43 – have offered governance structures, best practices and guidelines to promote the internationalization and standardization of biobanks. An international research group has created the ELSI 2.0 initiative to accelerate the translation of ethical, legal and social knowledge into policy and practice. ELSI 2.0 invites people working with biobanks, policy-makers, funders, the public and other stakeholders to be engaged in ethical, legal and social research. 44 In addition, international organizations should harmonize the multiple existing standards, best practices and guidelines, and consolidate these into a single global governance framework for biobank operation and collaboration. We propose a provisional global governance framework for biobanks that includes the following six key elements: (i) respecting participants and donors of biological samples, and protecting their privacy and confidentiality; (ii) informing participants and donors of potential risks through initial consultations; (iii) sharing samples, data and benefits in a fair, transparent and equitable manner; (iv) ensuring quality and interoperability of samples and their associated data; (v) improving public awareness, trust and participation in biobanks; and (vi) defining the role of the private sector in the use of knowledge derived from biobank operations. As a step towards global governance, the Global Alliance for Genomics and Health was formed in 2013 to convene global stakeholders from more than 210 leading institutions across different sectors. The alliance aims to enable responsible data-sharing for genomic innovation and discovery. The Global Alliance proposes a provisional Framework for Responsible Sharing of Genomic and Health-Related Data, 45 which includes all of the six elements we propose.
The framework is currently available from http://genomicsandhealth.org/ and is open for comments, providing a setting for further discussions among key stakeholders and interested parties. A key question will be the legitimacy and implementation of any proposed framework or guidelines on data-sharing. We call upon the following organizations to jointly develop a comprehensive global framework to ensure that the benefits of biobanks will be shared by all: the World Health Organization, UNESCO, the World Intellectual Property Organization, the World Trade Organization and the World Medical Association.
Analysis of Heat Flux Distribution during Brush Seal Rubbing Using CFD with Porous Media Approach: This paper discusses the question of heat flux distribution between bristle package and rotor during a rubbing event. A three-dimensional Computational Fluid Dynamics (3D CFD) model of the brush seal test rig installed at the Institute of Thermal Turbomachinery (ITS) was created. The bristle package is modelled as a porous medium with local thermal non-equilibrium. The model is used to numerically recalculate experimentally conducted rub tests on the ITS test rig. The experimentally determined total frictional power loss serves as an input parameter to the numerical calculation. By means of statistical evaluation methods, the main influences on the heat flux distribution and the maximum temperature in the frictional contact are determined. The heat conductivity of the rotor material, the heat transfer coefficients at the bristles and the rubbing surface were identified as the dominant factors.

Introduction Over the years, brush seals have been well established for use in stationary turbomachines and aircraft engines. The design processes are still based, to a large extent, on the manufacturers' experience. The aim of the design process is to obtain a brush seal that is subject to minimal wear during operation, but is still stiff enough to achieve the required pressure reduction in steady-state operation (see Ref. [1]). According to Aksit and Tichy [2], the wear of brush seals, similarly to other sliding contacts, depends on three main factors. Heat transfer: In particular, the temperature level in the friction contact has a direct effect on the mechanical and physical material properties of the sliding partners and the tribochemical reactions, such as the formation of oxide layers. The temperatures in the friction contact are determined by the level of the ambient air temperature and the heat input during the contact with the seal. These changes also indirectly affect the friction coefficient µ. In addition to knowledge of the heat input in the friction contact, the distribution of the heat fluxes between the bristle package and the rotor and, subsequently, the distribution within the bristle package are also important. In brush seals, these distributions are strongly influenced by cooling effects due to leakage. In the following, only the third factor, the heat transfer, will be discussed. As early as 1988, Gorelov et al. [3] recognized that a reduction of the leakage rate results in a significant heating of the brush seal. Accordingly, even small amounts of air are sufficient to cool the bristles adequately. The heat transfer within the package is comparable to that of a heat exchanger (see Ref. [4]). In this respect, the diameter of the bristles and the cavities between the bristles play a decisive role. With the help of a numerical flow simulation with modelling of the bristle package as a porous medium, Dogu and Aksit [5] calculated temperature profiles in the bristle package as a function of directly specified heat inputs in the friction contact. The maximum temperatures always occur at the bristle tips and decrease exponentially in the radial direction to the level of the air inlet temperature. In the axial direction, a very uniform temperature distribution was determined. However, the authors assumed an isotropic heat conduction in the package. In reality, this is strongly anisotropic due to the orientation of the bristles and the bristle spaces.
The highest radial temperature drop was found in the area of the back plate. The convective cooling effect increases due to higher leakage rates at high pressure differences. This, in turn, reduces the temperature level and increases the radial temperature gradient. Demiroglu and Tichy [6] developed a semi-empirical equation in closed form to calculate the contact forces. This was used to calculate the friction power (see Ref. [7]). In order to validate the equation, rub tests were carried out and the temperatures at the rotor and at the downstream bristles were measured by infrared thermography. Because the part of the equation used is only valid without a pressure difference applied, no flow was applied in the validation measurements. The calculated frictional power served as an input variable for a finite element analysis. In doing so, the assumption was made that the heat flux between the rotor and seal is split in a ratio of 50:50. This was justified by similar heat conduction coefficients and an identical contact surface. However, further measurements have shown that this assumption is no longer valid under pressurisation. In this case, the heat flux distribution changes very much in favour of the bristles. On the same test rig that was used by Demiroglu and Tichy [6], Ruggiero et al. [8,9] carried out steady-state, pressureless rub tests with brush seals with non-metallic bristles of aramid and carbon fibres. By means of a subsequent FE analysis, the heat input into the rotor was determined by iteratively adjusting this value until the calculated temperature gradients matched the measured ones. Following Demiroglu's and Tichy's hypothesis that, in the absence of flow through the seal, the heat flux distribution mainly depends on the thermal conductivity of the materials, Ruggiero et al. [8] assumed that, due to the low thermal conductivity of Kevlar, almost all of the heat should flow into the rotor. Qiu and Li [10] used a numerical flow simulation to calculate the pressure forces. They did not resolve the individual bristles, but modelled the entire package as a porous medium. The contact forces were iteratively calculated with an FE model, taking into account the friction and deflection of the bristles. The frictional power was determined according to Equation (1), i.e. from the contact forces together with the friction coefficient and the sliding velocity. A constant friction coefficient of 0.24 was used. The resulting frictional power served as input for the Computational Fluid Dynamics (CFD) simulation. For the heat transfer in the bristle package, they assumed a local thermal equilibrium, i.e. the local air temperature corresponded to the bristle temperature. In a subsequent publication, the assumption of a local thermal equilibrium was replaced by an equation considering the convective heat transfer in the bristle package (see Ref. [11]). In order to determine the heat transfer coefficients at the bristles, they used correlations for banks of staggered tubes, as done by Dogu and Aksit [5]. The calculations showed an increase in the maximum temperatures in the friction contact with increasing differential pressure, but a lower mean temperature in the package. With increasing speed, the maximum temperature and the mean temperature in the package also increased. As the temperature increased, a reduction in leakage was found. This is explained by a decreasing density of the flowing fluid. Pfefferle et al. presented an instrumentation concept for the measurement of rotor temperatures during rubbing [12].
They placed thermocouples below a radially thin-walled rotor at six axial positions. For redundancy, the instrumentation was mirrored at a circumferential position offset by 180°. The thermocouple tips were welded to the rotor and the thermocouple lines were fixed with spot-welded sheets. In order to calculate the heat input into the rotor, an FE model was created and the heat input, as a boundary condition of the simulation, was iteratively varied until the best possible agreement between the experimental and numerical temperature curves was obtained. During the tests, the resulting total frictional power was calculated by subtracting the drive power measured before the rubbing from that measured during the rubbing. The difference between total frictional power and rotor heat input yields the heat input into the bristles and the air. Pfefferle [13] published the results of these tests. The tests were carried out with four seals of the clamped type, which are identical in design. Over all tests, it was calculated that 60-90% of the total frictional power goes into the rotor. Pfefferle [13] tried to model the transient behaviour of the experimental rubbing tests, which had a duration of 30 s, in an additional simplified FE model in order to investigate these heat flux distributions in more detail. He was able to show that the transient test procedure leads to up to 9% more heat being conducted into the rotor than would be the case in steady-state operation. A variation of the heat transfer coefficients showed that only for α_b < 100 W/(m²K) is more heat conducted into the rotor. Pfefferle [13] explains this by an anisotropic arrangement of the bristles in the axial and tangential directions, which results in little or no flow through parts of the bristle package (see also Ref. [14]). Pfefferle [13] believes that the worn bristle package itself provides a further explanation: agglomerated wear material at the bristle tips and severely deformed bristles reduce the leakage flow in the area in direct contact with the rotor. Both reasons lead to a reduction of convective heat transfer due to the reduced flow velocities. This is in contradiction to the correlations for banks of staggered tubes previously used for brush seals to calculate the heat transfer coefficients (see Refs. [11,15,16]). The work by Pfefferle [13] was continued at the ITS in the subsequent years on a modified test rig (see Refs. [17][18][19][20]). The experimental investigations presented in these publications, with subsequent calculation of the rotor heat input using an FE model, demonstrated the influence of the geometry and operating parameters, as well as of the degree of contamination of the bristle package, on the heat flux distribution. Considering all of the experiments, it can be concluded that, in most cases, a large part of the frictional power generated is dissipated into the rotor. The aim of this paper is to identify the main factors influencing the heat flux distribution. The knowledge gained will be used to verify whether, as assumed by Pfefferle [13], the heat transfer coefficients at the bristles are significantly lower than those predicted by the correlations usually used for banks of staggered tubes. For this purpose, selected rub tests with seal 1 at different differential pressures are numerically simulated. The calculations include the seal and the rotor as well as the flow fields upstream and downstream of the seal. Hildebrandt previously published this numerical study (see Ref. [21]).
The experimental results of the rub tests with this seal were published in [18]. Several approaches are available for modelling the flow in the bristle package. Besides the semi-empirical approaches ("bulk flow models") and models with fully resolved bristles, calculation approaches in which the bristle package is modelled as a porous medium have been established. When modelling the real geometry, the complete flow channels between the bristles within the package have to be meshed. Because of the small wire diameters in combination with high packing densities, this means an increased modelling and calculation effort. Furthermore, it is difficult to model the axially and radially variable bristle spacing of a real brush seal. The approach of modelling the bristle packing by means of a porous medium avoids the problems of the dynamic bristle movement and the complex structure of the bristle package. For this reason, the following calculations are based on modelling the bristle package as a porous medium.

Basic Equations for Calculating Flow Losses and Heat Transfer in Porous Media The equations that are used to calculate the leakage flow of a brush seal are described below. Outside the bristle package, the usual Navier-Stokes equations are used. To account for the flow resistance that the bristles exert on the fluid, an additional resistance force F_r per unit volume is defined within the bristle pack, Equation (2). This resistance is composed of a viscosity and an inertia term. Equation (2) is based on the results obtained by Darcy [22] and Forchheimer [23], and it is called the Forchheimer equation in its basic form. Because of the anisotropic composition of the bristle package, the flow losses are also directional. For example, the losses parallel to the longitudinal axis of the bristles are significantly lower than those transverse to the longitudinal axis of the bristles. However, they are largely responsible for the radial pressure gradient in the bristle package. As a result, the tensors A and B each consist of three elements. The coefficients are related to a local coordinate system, which is defined relative to the individual bristles. The main directions e_x, e_s and e_n are defined as follows: • e_x: parallel to the x-axis of the test rig, • e_s: parallel to the bristle longitudinal axis, and • e_n: perpendicular to the x-axis and the bristle longitudinal axis. For the determination of the coefficients a_i and b_i, the findings of Ergun [24], who investigated the flow through porous media in a Reynolds number range relevant to brush seals, can be applied. According to Ergun [24], Equation (2) can be represented for the isotropic case as the pressure loss over the height of the packed bed, Equation (4). The coefficients a_i and b_i are thus proportional to (1 − ε)²/ε³ and (1 − ε)/ε³, respectively. The variable ε represents the porosity of the medium through which the fluid flows. It is defined as the ratio of the void volume to the total volume. The porosity used for brush seals is described in detail in Section 2.2. The diameter d̄ in Equation (4) represents the equivalent sphere diameter. On the basis of measurements with packed beds of spherical particles, Ergun [24] suggests the factors ᾱ = 150 and β̄ = 1.75. With respect to the wire diameter d_b of the brush seal, Equation (6) gives the factors α = ᾱ (d_b/d̄)² = 66.7 and β = β̄ (d_b/d̄) = 1.17.
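For reference, the isotropic Ergun relation underlying Equation (4) and the porosity definition can be sketched in the standard textbook form; this is a hedged reconstruction from the quoted factors, in which the symbols w for the superficial velocity and H for the bed height are assumptions, not the paper's notation:

\[
\frac{\Delta p}{H} \;=\; \bar{\alpha}\,\frac{(1-\varepsilon)^2}{\varepsilon^3}\,\frac{\mu\,w}{\bar{d}^{\,2}} \;+\; \bar{\beta}\,\frac{1-\varepsilon}{\varepsilon^3}\,\frac{\rho\,w^2}{\bar{d}}, \qquad \varepsilon \;=\; \frac{V_{\mathrm{void}}}{V_{\mathrm{tot}}}.
\]

Taking the equivalent sphere diameter of a long cylinder with equal surface-to-volume ratio, d̄ = 1.5 d_b, reproduces the quoted factors: α = 150/1.5² ≈ 66.7 and β = 1.75/1.5 ≈ 1.17.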
In order to obtain a better match with the application under consideration, the brush seal, which resembles a cylinder packing, Chew and Hogg [25] and Proestler [26] suggest the factors α = 80 and β = 1.16. The coefficients a_x and a_n, or b_x and b_n, can be represented according to Equation (4). Based on a comparison of experimental and numerical results, Chew et al. [27] gave the loss coefficient a_s along the bristle axis a ratio of 1:60 relative to the losses transverse to the bristle axis. In contrast, Proestler [26] derived an analytical expression for the factor a_s, assuming the pressure reduction of a laminar flow in non-circular pipe sections. The dissipative losses along the bristle axis are neglected and the factor b_s is set to zero. The numerical calculations performed in this paper use the loss coefficients according to Proestler [26].

Energy Conservation Equations In general, two cases can be distinguished for heat transfer in porous media: the simple case of thermal equilibrium between the solid and the fluid, and the case where there is a significant temperature difference between the two. In the case of a typical flow through the bristle package of a brush seal, strong forced convection occurs between the bristles and the leakage air, so that the second case applies. This is modelled using a double cell approach. Such an approach defines a solid zone that spatially coincides with the porous fluid zone. This solid zone only interacts with the fluid in terms of heat transfer. The conservation equations for energy are solved separately for the fluid and the solid zone; the equation for the fluid zone and that for the solid zone (Equations (11) and (12)) are in accordance with Nield and Bejan [28]. Here, α_fs is the heat transfer coefficient between the fluid and the solid, and a_fs is the specific surface area of the solid, i.e. the bristle surface area per unit total volume of the wire package. With regard to the heat conduction within the bristle package, it should be noted that, similar to the loss coefficients, it is also directional. Perpendicular to the longitudinal axis of the bristle, an effective heat conduction λ_eff can be calculated, which results from a series connection of the heat conduction of the solid and the fluid, Equation (15) (see Ref. [29]). Equations (11) and (12), in combination with Equation (15), then yield the energy equations used for the bristle package.

Porosity of the Brush Seal Referring to a cylinder packing, as in the case of a brush seal, the porosity can be expressed by Equation (18). In Equation (18), it is neglected that the porosity depends on the radius. Because the circumferential area increases with increasing radius, but the number of bristles remains constant, the porosity increases in the outward radial direction. According to Qiu et al. [11], the porosity as a function of the radius can be calculated by Equation (19); for ∆r → 0, Equation (20) follows. In addition to the fixed geometric parameters, such as the wire diameter d_b, the rotor outer diameter in relation to the package centre D_R, and the laying angle λ, the porosity also depends on a quantity that changes with the applied operating pressure, namely the package width B (see Refs. [30,31]). Consequently, this quantity must be given special consideration when calibrating the model (see Section 4). Here, it is neglected that the laying angle λ also changes during rubbing, due to the radial deflection of the bristles. However, these changes are very small.
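As an illustration of the double cell approach described above, the two-equation (local thermal non-equilibrium) model can be sketched in the standard form given by Nield and Bejan; this is a hedged reconstruction in which the fluid and solid temperatures T_f and T_s and the superficial velocity v are symbols introduced here for illustration:

\[
\varepsilon\,(\rho c_p)_f\,\frac{\partial T_f}{\partial t} + (\rho c_p)_f\,\mathbf{v}\cdot\nabla T_f = \nabla\cdot\!\left(\varepsilon\,\lambda_f\,\nabla T_f\right) + \alpha_{fs}\,a_{fs}\,(T_s - T_f),
\]
\[
(1-\varepsilon)\,(\rho c)_s\,\frac{\partial T_s}{\partial t} = \nabla\cdot\!\left((1-\varepsilon)\,\lambda_s\,\nabla T_s\right) - \alpha_{fs}\,a_{fs}\,(T_s - T_f).
\]

For a package of long cylindrical wires, the specific surface area per unit total volume would then be a_fs = 4(1 − ε)/d_b, i.e. the surface-to-volume ratio of a single wire, 4/d_b, weighted by the solid fraction.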
Description of the Model In Figure 1, the computational domain of the numerical calculations is shown. It consists of a 0.5° section of the ITS brush seal test rig (see Ref. [18]). Periodic boundary conditions are set in the circumferential direction because of the rotational symmetry. The computational domain includes the rotor, the test seal, the axial brush seal, a part of the seal holder, and the flow areas upstream and downstream of the test seal. The test seal is modelled as a porous medium, assuming that there is no thermal equilibrium between the bristles and the fluid. The axial brush seal is also modelled as a porous medium, assuming that there is a thermal equilibrium. The computational domain starts at a radius of 50 mm. The recalculation of the experimental tests by means of the FE analyses (see Refs. [17][18][19]) has shown that the areas further inwards are only slightly thermally affected within the rubbing period of 30 s. At the inlet, the total pressure and temperature are set according to the respective measured values from the experiments. At the outlets, the static pressures and temperatures are specified. At the outer surfaces 1-4, appropriate heat transfer boundary conditions are defined. Along the outer contour of the rotor, a circumferential speed corresponding to the tests is set. In the detailed section in Figure 1, it can be seen that the specific total frictional power loss q̇_tot, Equation (21), is applied to the contact area between the seal and the rotor. This quantity is based on the measurement of the total power loss during the experiments. Similar to the temperatures at the inlets and outlets, it is defined as time-dependent and specified as a boundary condition. The heat flux distribution is then self-adjusting according to the prevailing conditions. In the detailed section, it is also evident that the package width is not constant over the height. In the clamping area, the package width is set to the minimum package width (see Section 4). The package width increases towards the inside and reaches its maximum value at the contact point with the rotor. This specification is based on our own observations and on descriptions by other authors (e.g. Ref. [30]). A narrow gap was defined between the bristle package and the back plate, thus reducing the contact height of the bristles on the back plate h_BP. In various publications (e.g. Refs. [32][33][34]), it has been described how the bristle package bends in the axial downstream direction below the back plate when a pressure difference is applied. As a result, there is a bending of the upper area in the upstream direction and, thus, a reduction of the contact surface. This effect is replicated by reducing the contact surface in the model. The contact height at the back plate has an influence on the radial pressure drop and, thus, on the pressures in the pressure relief chamber, as well as on the heat transfer. The axial position of the seal relative to the rotor is adjusted individually for all test cases, analogously to the experiment. In addition, the determined rotor elongation is taken into account to adjust the gap below the back plate as closely as possible to the gap during the rub tests.

Investigated Test Cases The numerical test cases examined are based on rub tests of seal 1 under variation of the differential pressure (see Ref. [18]). The bristle package consists of bristles of Haynes 25 with a bristle diameter d_b of 0.07 mm, a package density ρ_p of 200 bristles per mm (Bpmm) and a seal inner diameter of 300 mm.
The laying angle of the bristles λ is 45°. The rotor is made of Inconel 718 and has a diameter of 299.5 mm at the right front edge. The cone angle of the rubbing surface is 2.95°. The pressure difference was varied in steps of 1.0, 2.5, 4.0, 5.5 and 7.0 bar. These test points were chosen because the largest influences on the heat flux distribution were observed in the experiment when varying the pressure difference, and because the heat transfer coefficients at the bristles α_fs = α_b must change over a wide range. The heat transfer coefficients α_b are determined according to correlations for banks of staggered tubes by Gnielinski [35], Žukauskas [36], and Martin [37]. Appendix A describes the calculation of the coefficients. Figure 2 shows the calculated values. The mean values were used for the numerical simulations. The error bars, which span an interval of ±50%, show that the correlations yield very different results. The heat transfer coefficients were assumed to be constant in the entire bristle package. Besides the heat transfer coefficients at the bristles, the porosity is a very important parameter of the modelling. The porosity was calculated according to Equation (20), i.e., the radial dependence of the porosity was taken into account. Furthermore, it was taken into account that the bristle package tapers with increasing radial height. In Figure 3a,b, it can be seen that the consideration of these effects has a very large influence on the porosity. If it is also taken into account that the available space increases with increasing package height, the decrease in porosity is much less pronounced (see Figure 3b). To further take into account that the bristle package in the area of the back plate is compressed more than the bristles upstream when a pressure difference is applied, the porosity was modified according to Equation (22). Figure 3c shows the porosity when considering the package width and the weighting according to Equation (22). In the last contour plot, all of the effects are combined (see Figure 3d).

Mesh Independence Study A mesh independence study was carried out in order to assess the influence of the created mesh on the quality of the results. The analysis was performed for the medium pressure difference level of 4.0 bar. The mass flow, the heat flux distribution, and the maximum temperature in the friction contact serve as comparative values. A total of four meshes with different resolutions were compared. The k-ω SST model was used to model the turbulence. Within the bristle package, laminar calculations were performed. This assumption is valid due to the low Reynolds numbers (see Refs. [38][39][40]), which is also confirmed in this study. Within the package, the Reynolds numbers based on the bristle diameter were well below 1000. They only reach this order of magnitude in the region near the back plate gap, where the highest velocities are present. At first, a steady-state solution was calculated without the imposition of a heat flux boundary condition. The flow was treated as compressible. Subsequently, a transient calculation was performed for 30 s with time-dependent specification of the total frictional power loss and the temperatures from the experiment. In Table 1, the results are summarized in relation to the mesh with the highest cell number. For the following simulations, the computational mesh with 3.6 million cells was used.
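As a brief note on the laminar assumption above, the Reynolds number in the package is based on the bristle diameter; written out (with w denoting the local velocity in the package, a symbol assumed here for illustration):

\[ \mathrm{Re}_{d_b} = \frac{\rho\, w\, d_b}{\mu}. \]

With d_b = 0.07 mm, the velocities inside the package keep Re well below the quoted threshold of 1000 everywhere except near the back plate gap.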
Calibration of the Numerical Model The adaptation or calibration to the experimental data is inherent in the modelling of the bristle package by means of a porous medium. Because the geometry of the bristle package changes during operation as a function of the differential pressure, the leakage characteristics must first be calibrated. Once the numerical model has been calibrated, it can be validated using further measurement data (see Section 5). In the present study, the leakage mass flow and the pressures in the pressure relief chamber must be adjusted. For the calibration of the leakage mass flow, the porosity of the bristle pack is the decisive parameter. A variation of the porosity is achieved by adjusting the package width B. Because the package width in the clamping area was set to the minimum value B_min, the calibration is done by varying the package width B_max (see Figure 1). The minimum package width B_min is calculated from Equation (23). Because the bristle package is compressed depending on the pressure difference applied, the porosity also changes in the same way. Therefore, the calibration must be carried out separately for each pressure stage. Figure 4a shows the results of the calibration of the leakage mass flows as discharge coefficients. The comparison of the numerically calculated values with the values from the experiments shows very good agreement and, thus, a successful adaptation to the experiments. Only at a pressure difference of 2.5 bar is the calculated flow rate below the experimental value. The experimental value is most likely an outlier; a comparison of the two experimental measurement runs carried out confirms this suspicion. Because the actual course of the leakage mass flow for this pressure is not known, and a linear course between the values at differential pressures of 1.0 and 4.0 bar is not necessarily present, the average of the package widths at differential pressures of 1.0 and 4.0 bar was used. In Figure 4b, the respective underlying package widths B_max are plotted over the pressure difference. It becomes apparent that, above a pressure difference of about 5.0 bar, the package is not compressed any further and the maximum sealing effect is achieved. Furthermore, it is clear that, even at the highest pressure differences, the package width is still significantly larger than the theoretical minimum package width B_min. When adjusting the pressures in the pressure relief chamber, the contact height of the package at the back plate h_BP plays a decisive role in addition to the porosity of the package or the package width. The contact height has a significant influence on the radial pressure drop and, thus, the chamber pressures. In Figure 5a, the calculated and measured chamber pressures are plotted over the absolute pressure upstream of the seal. After adjusting the contact heights according to Figure 5b, the chamber pressures are within the measuring range. This range is spanned by the deviations of the measured values at the circumferential positions of 0 and 180°. The decrease in the contact height with an increase in the differential pressure could be explained by an increase in the axial deflection of the bristles and, therefore, also appears to be physically plausible, as described in Section 3.
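The discharge coefficients plotted in Figure 4a follow the usual convention of normalizing the measured leakage by the mass flow of an ideal (isentropic) nozzle at the same pressure ratio and a common reference area; a hedged sketch of this definition, in which the reference area A_ref is an assumed symbol since the paper does not state its exact choice:

\[ C_D = \frac{\dot{m}_{\mathrm{meas}}}{\dot{m}_{\mathrm{ideal}}\!\left(p_0,\, p_{\mathrm{out}},\, T_0,\, A_{\mathrm{ref}}\right)}. \]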
Validation of the Numerical Model In this section, the model is validated. Several comparison variables are available in order to check whether the model reflects reality accurately. One of these variables is the pressure distribution in the axial direction. In the absence of our own measurement data, the results by Schur et al. [31] and Bayley and Long [41] are used. In particular, the results shown by Schur et al. [31] are suitable for a comparison, since one of the tested bristle packages corresponds exactly to that of seal 1. In Figure 6a-e, the normalized pressures below the bristle packs are plotted over the normalized package width. In general, the measured values of Schur et al. [31] are well matched. The pressure gradient increases in the area of the back plate, an effect that becomes stronger with increasing pressure difference. In Figure 6f, this can be clearly seen when looking at the numerically calculated pressure profiles. The differences between the measured values and the calculations can be explained by different operating conditions. In the experiments conducted by Schur et al. [31], a circumferential speed of 47 m/s was present, while the measurements by Bayley and Long [41] were carried out purely statically. In the simulation, a circumferential speed almost twice as high as the value referred to by Schur et al. [31] is set. However, Schur et al. [31] could show that, with increasing speed, the pressure level also continues to rise. Furthermore, the measured values of the investigations by Schur et al. [31] are only available up to a pressure difference of 5.0 bar. Altogether, the flow and pressure conditions can be simulated very well with the numerical model in accordance with the experimental tests. The assumption of a rigid bristle package, i.e. the omission of the calculation of the bristle bending, does not represent a significant impairment. The measured temperatures within the rotor structure are used for validation, because the flow variables are of less interest than the distribution of the heat input between the rotor and the seal. In addition, a comparison of the heat flux distributions is useful. The heat flux distribution is defined as the ratio of the rotor heat input to the measured total frictional power loss:

heat flux distribution = rotor heat input / total frictional power loss.

The values of the heat flux distributions of the current numerical simulation are compared with those of the corresponding experimental tests (see Ref. [18]). For the experiments, the total frictional powers were measured and the rotor heat inputs were calculated by means of FE analyses. In Figure 7a-e, the experimental rotor temperatures are plotted together with the rotor temperatures determined from the numerical simulations for all five test cases. In addition, the temperature curves of the FE analysis are shown. In each case, the points in time within the contact duration of 30 s at which the highest temperatures occurred are shown. While the temperature differences between the experiment and the FE analysis are, as a matter of principle, very small, there are stronger deviations between the experiment and the numerical simulation, especially at low pressure differences. At higher pressure differences above 5.5 bar, the profiles again agree very well. In order to check whether the hypothesis by Pfefferle [13], namely the overestimation of the heat transfer coefficients at the bristles by the correlations for banks of staggered tubes, is correct, calculations with heat transfer coefficients of α_b = 100 W/(m²K) and α_b = 1000 W/(m²K) have been performed for the first two pressure levels.
The experimental temperature level is reached in the case of a heat transfer coefficient of α_b = 100 W/(m²K) and a pressure difference of 2.5 bar. The calculated temperatures are too low for all other cases. The findings from the comparison of the rotor temperatures are also well reflected by the differences in the heat flux distributions (see Figure 7f). The trend is matched quite well, and the level of the heat flux distributions corresponds approximately to that of the experiment or the FE analysis. Assuming that the slope of the experimental or FE analysis profile is correct, the heat flux distributions of the numerical simulation should follow the dotted line in Figure 7f. For this purpose, the values from the experiment or the FE analysis were shifted upwards until they corresponded to the value from the numerical simulation at the operating point with a differential pressure of 5.5 bar. This operating point was selected because the best agreement between the numerical simulation and the experiment was found there. If the dotted line is used as a comparison level, it becomes clear that too little heat is introduced into the rotor at low pressure differences in the numerical simulation. In the additional calculations with heat transfer coefficients of α_b = 100 W/(m²K) or 1000 W/(m²K), this level is almost reached (not drawn in the diagram). However, other factors must also play a role because, as shown above, the rotor temperatures are still calculated significantly too low.

Results of the Sensitivity Study So far, the comparison of the calculated and experimental results has shown that, at low pressure differences, the amount of heat into the rotor is underestimated by the numerical simulation. This seems to be a first indication that the heat transfer coefficients at the bristles were chosen too high. However, the fact that the temperature differences persist if the heat transfer coefficients are significantly reduced suggests that other effects or factors must also have a significant influence on the heat transfer. A possible explanation would be that the contact areas have been chosen too large and, thus, the specified specific total frictional power loss becomes too small (see Equation (21)). The package width and porosity are coupled with the leakage flow and have been calibrated. However, a closer look at the bristle packages shows that, in reality, not all of the bristles have exactly the same length. Therefore, it is possible that individual bristles do not come into contact with the rotor and consequently do not contribute to the conversion of the total frictional power loss. Photographs of the undersides of the bristle packages confirm this assumption (see Figure 8). This becomes particularly clear in the photograph shown in Figure 8a. In the area marked with the letter A, the bristles show clear signs of rubbing, whereas large parts (B) show no traces. Additionally, in the other pictures, a clear gradation of the bristles can be seen. It seems quite plausible that this effect occurs especially at low pressure differences. A screening test was carried out for the test case at a medium pressure difference of 4.0 bar in order to investigate and quantify the influences. In addition to the influencing factors already identified, namely the heat transfer coefficients at the bristles α_b and the contact surface A, the thermal conductivities of the materials of the bristles, the rotor and the seal housing were varied.
The variation of the contact area was achieved by adjusting the specific total frictional power loss. Furthermore, it was examined whether the adjustment of the contact height of the bristles on the back plate h_BP has an influence. The latter factor was varied in three steps from −10% through 0% to +10%, while the variation of the thermal conductivities covers the range of ±5%, supplemented by an additional step at 0%. This corresponds approximately to the scatter of the available material data of the materials used. The heat transfer coefficients were varied in steps from −50% through 0% to +50%. This is roughly in accordance with the fluctuations in the predictions of the correlations under consideration. The maximum reduction of the contact area was estimated at −30% and the maximum increase of the contact area at +5%. For the central point, the contact area was −12.5%. The underlying test plan corresponds to a definitive screening design, therefore comprising 13 runs. The influences of the factors on the heat flux distribution and the maximum temperature in the friction contact were investigated. The results of the investigation are summarized in a main effects diagram in Figure 9. Regarding the heat flux distribution, it can be concluded that the heat conductivities of the two direct friction partners (λ_Bristle, λ_Rotor) as well as the heat transfer coefficients at the bristles α_b have a clear influence. The influences of the contact surface A and the thermal conductivity of the seal housing λ_Seal play a minor role. Adjusting the contact height of the bristles on the back plate h_BP does not result in a change in the heat flux distribution. This factor also has no influence on the maximum temperature in the friction contact. The contact temperature is mainly determined by the thermal conductivity of the rotor material λ_Rotor and, to a greater extent, by the contact surface A. A Pareto diagram of the standardized effects can be considered in order to check which factors are significant (see Figure 10a,b). In this diagram, the absolute values of the standardized effects are shown in the order of their influence. The standardized effect is the observed effect divided by its standard deviation. The result corresponds to the t-value and is thus a measure of the significance of the effect. The selected significance level is α = 0.05. This results in a t-value of the reference line of 2.45. If the bar of an effect extends beyond the reference line, it is significant. In the case of the heat flux distribution (see Figure 10a), this means that the heat transfer coefficients at the bristles are the decisive factor. The thermal conductivities of the rotor and bristle materials have similarly strong effects and are also statistically significant. From the Pareto diagram for the maximum rubbing temperature in the friction contact as response variable (see Figure 10b), it is clear that the rubbing surface is the decisive factor by a large margin. The thermal conductivity of the rotor material is slightly significant. The results of the screening test, in which the main influencing factors have been identified, are now to be transferred to all five test cases in order to obtain the influence of the pressure difference. The factors heat transfer coefficients at the bristles α_b, thermal conductivity of the rotor material λ_Rotor and contact surface A are to be examined more closely.
The thermal conductivity of the bristle material λ_Bristle was excluded because the number of simulations to be performed had to be reduced. This limitation seems justified, since a similar effect on the heat flux distribution as for the thermal conductivity of the rotor material is expected and the magnitudes of the two effects are nearly the same. The simulations were varied in accordance with a response surface design in the form of a central composite test plan. A central composite design consists of a cube with two levels per factor. In addition, test points in the form of a star are also investigated. The star, starting from the central point, is created by varying the individual factors. Because the experimental points of the star exceed those of the cube, each factor is varied on five levels. With this type of experimental design, first- and second-order terms can be estimated. The range of the experimental design, in relation to the star points, corresponds to that of the screening test. Figure 11 shows the effects of the three factors on the response variables heat flux distribution and maximum temperature in the friction contact for all five test cases. As expected, the amount of heat that is introduced into the rotor increases as the thermal conductivity of the rotor material increases, and decreases accordingly as the convective heat transfer at the bristles increases. The influence of the contact surface is only significant at low differential pressures. As a tendency, the changes of the varied parameters have less effect on the target value with increasing pressure difference. Thus, the effect of the pressure difference outweighs the three factors examined. This is not the case when evaluating the effects on the maximum temperature in the rubbing contact. Still apparent is the disproportionately strong influence of the contact surface on the temperature in the rubbing contact. In Figure 12a,b, the results of the simulation are plotted as standardized effects over the pressure difference. Instead of a Pareto chart, a line chart was chosen. The reference line at t = 2.23 still represents the limit above which an effect is considered significant. Only those factors whose effects are significant for at least one pressure difference are shown in the diagrams. In the case of the heat flux distribution (see Figure 12a), it is clear, as outlined above, that the effects decrease significantly with increasing pressure difference. Except for the contact area A, all applied factors remain significant. The heat flux distribution is, however, increasingly determined by the increased pressure level and the convective heat transfer in the bristle package, which has increased in absolute terms. Interaction effects between the individual factors are not significant. With regard to the maximum temperature in the contact area, the effects are not dependent on the pressure difference (see Figure 12b). If the knowledge gained is transferred to the results discussed in Section 5, this means that a reduction of the heat transfer coefficients at the bristles, as well as a reduction of the contact area, should mainly affect the temperatures and heat flux distributions at low pressure differences. The corresponding values at high pressure differences should show significantly smaller changes when varying the factors.
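A brief note on the significance thresholds quoted above (a hedged reconstruction of the standard procedure, not stated explicitly in the paper): the standardized effect is a t-statistic,

\[ t = \frac{\hat{\beta}}{\mathrm{SE}(\hat{\beta})}, \qquad |t| > t_{1-\alpha/2,\,\mathrm{df}} \;\Rightarrow\; \text{effect significant}, \]

with α = 0.05 and the reference line at the two-sided critical value. The value of 2.45 for the 13-run screening design is consistent with t_{0.975,6} ≈ 2.447 (roughly six residual degrees of freedom), and the value of 2.23 for the larger central composite plan is consistent with t_{0.975,10} ≈ 2.228.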
The parameter values listed in Table 2 were transferred to the five test cases and the numerical calculations were repeated in order to reduce the deviations in the temperature profiles between the experiment and the numerical simulation. In a first step, the thermal conductivity of the rotor was assumed to be constant. In any case, there is no change between the test cases, since the experiments always involved the same seal and the same rotor. However, if the thermal conductivity were varied, the assumed values from Table 2 would probably have to be adjusted again in order to achieve a match of the temperature profiles. In Figure 13a-e, the experimental rotor temperatures are plotted together with the temperatures determined from the numerical simulations for all five test cases over the axial measuring position. The points in time within the rubbing period of 30 s at which the highest temperatures occurred are shown. It is clear that the simulated temperatures at low pressure differences now correspond to the experimental values. At high pressure differences, the numerically calculated temperatures increase as compared to the original calculations, but they are still very close to the measured temperature values. Additionally, the curve of the heat flux distribution over the pressure difference (see Figure 13f) now follows the slope resulting from the evaluation of the experimental data by means of the FE analysis. The values of the heat flux distribution resulting from the numerical simulation are now consistently higher than those of the FE analysis. There can be many reasons for this. Ultimately, it has to be considered that two numerical calculations are being compared with each other, which can never exactly represent reality, despite careful verification and validation of the models. In Figure 13f, error bars are shown for a range of ±10%.

Conclusions The aim of the numerical analysis of the heat flux distribution was to identify the crucial factors for the heat transfer at the seal. The chosen approach was to model the bristle package as a porous medium. For the calculation of the porosity, in addition to the package width and radial height, an axial weighting was used for the first time to take into account that the bristle package is more strongly compressed under pressure in the vicinity of the back plate. Another new feature is the quantitative validation of such a model with experimental data from brush seal rub tests. Among other things, the calculations should serve to verify the applicability of the often-used correlations, based on banks of staggered tubes, for calculating the heat transfer coefficients at the bristles. The evaluation of the results has shown that it is quite possible that the correlations for banks of staggered tubes used for brush seals significantly overestimate the heat transfer coefficients α_b. Besides the reasons already mentioned by Pfefferle [13] (see Section 1), the oblique inflow to the bristles also has an influence here. The correlations always assume a straight inflow. Publications by Moreno and Sparrow [44] and Žukauskas [36] show that the Nusselt numbers decrease with increasing inclination. According to Žukauskas [36], the Nusselt numbers can decrease by approximately 20% with an oblique flow of 45°. If the oblique flow is increased to 80°, the values decrease to just above 40%.
As a result of the swirl caused by the rotor rotation and especially by the laying angle of the bristles, a straight inflow to, or flow through, the bristle pack is largely not ensured. Especially in the area close to the back plate, the flow follows the laying direction of the bristles very strongly (see Ref. [45]). Furthermore, the assumption of isotropic heat transfer coefficients throughout the package is a rough approximation. In addition, the correlations for calculating the heat transfer coefficients assume an average flow velocity in the package. In all literature sources applying these correlations to brush seals, this mean velocity is calculated by relating the total leakage mass flow to the gap area between the back plate and the rotor. This results in average velocities that are too high. Compared to a bank of staggered tubes, the bristle package of a brush seal exhibits locally strong pressure and velocity differences. As a result, although very high convective heat transfer coefficients are conceivable locally, on average the correlations overestimate the values.

The experimental investigations that served as a basis for this scientific publication are the result of a research project which was initiated by the FVV ("Forschungsvereinigung Verbrennungskraftmaschinen e.V.") and carried out at the Institute of Thermal Turbomachinery, Karlsruhe Institute of Technology. The work has been supported financially by the FVV and the participating companies. This support is gratefully acknowledged. Conflicts of Interest: The authors declare no conflict of interest.

Appendix A. Calculation of the Heat Transfer Coefficients at the Bristles The heat transfer coefficients α_b are determined according to correlations for banks of staggered tubes by Gnielinski [35], Žukauskas [36], and Martin [37]. Knowledge of the bristle spacing is required to apply the correlations. The characteristic quantities are shown in Figure A1. The assumptions are made that the bristles are staggered and that the bristle spacing is identical at the same radial height in all directions (A_1 = A_2 = δ). To calculate the bristle spacing δ, four equations with four unknowns are available; the package width B is known in relation to the radial bristle height. The four quantities are: 1. the number of rows in flow direction N_x; 2. the bristle spacing in flow direction S_L; 3. the bristle spacing δ; and 4. the number of bristles in circumferential direction N_Θ.

Appendix A.1. Calculation of Heat Transfer Coefficients According to Gnielinski The average Nusselt number of a plain tube bundle in cross-flow is calculated according to Gnielinski [35] from the average Nusselt number of a single tube in cross-flow multiplied by an arrangement factor f_A: Nu_Bundle = f_A · Nu_1. The characteristic length is L_char = (π/2) d_b. As void fraction Ψ, the average porosity of the bristle pack is used. The velocity w, related to the void fraction, is calculated according to Equation (A10). Here, a denotes the thermal diffusivity of the flow medium: Pr = ν/a, with 0.6 < Pr < 10³ (A11). In the case of a staggered arrangement, a corresponding arrangement factor f_A applies. The material properties of the flow medium are related to the average temperature of the inlet and outlet of the bristle package.

Appendix A.2.
Appendix A.2. Calculation of Heat Transfer Coefficients According to Žukauskas The average Nusselt number for heat transfer in a cross-flow tube bundle is calculated according to Žukauskas [36]. The correlation is valid for 0.7 ≤ Pr ≤ 500, 10 ≤ Re_D,max ≤ 2 × 10⁶, and a number of tube rows in the axial direction of N_x ≥ 20. All material properties of the flow medium, with the exception of Pr_s, are related to the average temperature of the inlet and outlet of the bristle package. For a staggered arrangement, the coefficient for S_T/S_L < 2 and 10³ < Re_D,max < 2 × 10⁵ applies. C_2 = 0.99 is used, since here N_x < 20. The Reynolds number Re_D,max is based on the maximum velocity in the package and the bristle diameter; for w, Equation (A10) applies.

Appendix A.3. Calculation of Heat Transfer Coefficients According to Martin An alternative to the approaches by Gnielinski [35] and Žukauskas [36] is the calculation approach by Martin [37]. The method is based on the so-called Lévêque analogy, which links the pressure loss to the heat transfer coefficients. For staggered tube bundles, the Nusselt number follows from the Lévêque number, evaluated for b = S_L/d_b < 1. The Hagen number is composed of a laminar and a turbulent contribution, with

Hg_lam = 140 Re_D,max [(b^0.5 − 0.6)² + 0.75] / [a^1.6 (4ab/π − 1)]  (A20)

and

Hg_turb = f_t,s Re_D,max^1.75 + f_t,n Re_D,max²,

where f_t,n becomes zero for N_x > 10.
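Pulling the Žukauskas correlation (Appendix A.2) and the Martin Hagen numbers together, here is a minimal Python sketch. The leading coefficient 0.35·(S_T/S_L)^0.2 and exponent 0.60 are Žukauskas's published values for this parameter range rather than values quoted in the paper, C_2 = 0.99 is the row-count correction from the text, and the Hagen functions follow the reconstructed Eq. (A20); f_t,s and f_t,n are left as parameters because their values did not survive extraction. The example numbers at the end are purely hypothetical.

```python
import math

def nu_zukauskas_staggered(re_d_max, pr, pr_s, st_over_sl, c2=0.99):
    """Average Nusselt number for a staggered bank (Appendix A.2).

    Standard Zukauskas form for S_T/S_L < 2 and 1e3 < Re_D,max < 2e5;
    pr_s is evaluated at the surface temperature, all other properties
    at the mean of package inlet and outlet temperature.
    """
    if not 0.7 <= pr <= 500.0:
        raise ValueError("Pr outside the stated validity range")
    if not 1e3 < re_d_max < 2e5:
        raise ValueError("Re_D,max outside the stated validity range")
    c1 = 0.35 * st_over_sl ** 0.2
    return c2 * c1 * re_d_max ** 0.60 * pr ** 0.36 * (pr / pr_s) ** 0.25

def hagen_laminar(re_d_max, a, b):
    """Laminar Hagen number, Eq. (A20); a = S_T/d_b, b = S_L/d_b."""
    return (140.0 * re_d_max * ((b ** 0.5 - 0.6) ** 2 + 0.75)
            / (a ** 1.6 * (4.0 * a * b / math.pi - 1.0)))

def hagen_turbulent(re_d_max, f_ts, f_tn, n_x):
    """Turbulent Hagen number; f_t,n becomes zero for N_x > 10."""
    if n_x > 10:
        f_tn = 0.0
    return f_ts * re_d_max ** 1.75 + f_tn * re_d_max ** 2

# Hypothetical example: alpha_b = Nu * k_fluid / d_b
nu = nu_zukauskas_staggered(re_d_max=5e3, pr=0.71, pr_s=0.70, st_over_sl=1.0)
alpha_b = nu * 0.03 / 7e-5   # k_fluid in W/(m K), bristle diameter in m
print(f"Nu = {nu:.1f}, alpha_b ~ {alpha_b:.0f} W/(m^2 K)")
```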
2021-05-05T00:07:50.492Z
2021-03-29T00:00:00.000
{ "year": 2021, "sha1": "1a3b07f150fbf6fd3e15e250dc156433d3d1af4a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/14/7/1888/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "6edc4ffac03eae57d86264b8ba639f7d1879e121", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
91862147
pes2o/s2orc
v3-fos-license
Aggression and discrimination among closely versus distantly related species of Drosophila Fighting between different species is widespread in the animal kingdom, yet this phenomenon has been relatively understudied in the field of aggression research. Particularly lacking are studies that test the effect of genetic distance, or relatedness, on aggressive behaviour between species. Here we characterized male–male aggression within and between species of fruit flies across the Drosophila phylogeny. We show that male Drosophila discriminate between conspecifics and heterospecifics and show a bias for the target of aggression that depends on the genetic relatedness of opponent males. Specifically, males of closely related species treated conspecifics and heterospecifics equally, whereas males of distantly related species were overwhelmingly aggressive towards conspecifics. To our knowledge, this is the first study to quantify aggression between Drosophila species and to establish a behavioural bias for aggression against conspecifics versus heterospecifics. Our results suggest that future study of heterospecific aggression behaviour in Drosophila is warranted to investigate the degree to which these trends in aggression among species extend to broader behavioural, ecological and evolutionary contexts.

Fruit flies in the genus Drosophila present a unique opportunity to investigate aggressive behaviours, both within and between species and in a broad phylogenetic context. There are approximately 1500 described species of Drosophila, many of which overlap spatially and temporally [9-11] and use similar territories and food resources for feeding, breeding and ovipositing [12]. Yet, while it is well established that Drosophila use aggression within species to establish territories and social dominance and to compete for mates and food resources [13-18], heterospecific aggression is largely uncharacterized, except for limited qualitative observations of heterospecific aggression among the Hawaiian Drosophila [19].
Here we characterized male-male aggression in Drosophila in a multi-species context using a behavioural choice assay, in order to (i) quantify the extent to which male Drosophila discriminate between conspecifics and heterospecifics during aggressive social interactions and (ii) test the effect of phylogenetic distance between opponent species on the distributional bias in aggressive targeting (heterospecific versus conspecific). We report that males showed significant bias in aggression towards either conspecifics or heterospecifics in a majority of species-species interactions. Among species pairs that were more distantly related, the direction of aggression was biased towards conspecifics, whereas closely related species treated conspecifics and heterospecifics equally. To our knowledge, this is the first study to quantify aggression between Drosophila species and to establish a behavioural bias for aggression against conspecific versus heterospecific opponents.

Drosophila species and husbandry Seven species were selected from the ananassae, melanogaster and pseudoobscura subgroups within the subgenus Sophophora (figure 1). Among these seven species, we assayed aggressive interactions between two species at a time, for a total of six species pairs. Three of these species pairs are relatively closely related sibling species: (i) D. ananassae and D. pallidosa, (ii) D. melanogaster and D. simulans, and (iii) D. pseudoobscura and D. persimilis. The other three species pairs are more distantly related: (i) D. ananassae and D. atripex, (ii) D. ananassae and D. melanogaster, and (iii) D. ananassae and D. simulans (figure 1). All seven species have broad geographical distributions, and for each species pair the geographical distributions overlap [20]. To the best of our knowledge, none of these species exhibits lekking behaviour. Male-male aggression has previously been documented in D. melanogaster and D. simulans [15,21], but not in the other five species. We used one isogenic (isofemale) line from each species that was originally established from wild collections.

Aggression assay To quantify agonistic social interactions in a multi-species context, we used a slightly modified version of the standard dyadic aggression assay [22]. For each species pair, aggressive behaviours were quantified by placing two socially naive adult males from each opponent species (a total of four males) in a standard aggression arena (figure 2) and measuring (i) the delay to onset of aggression (latency to aggression) and (ii) the total number of aggressive lunges, a key indicator of aggression [21], by each male towards both conspecific and heterospecific opponents. We examined a total of twelve sets of interactions, as we tracked aggressive behaviours for each focal species across six species pairs. In contrast to dyadic assays typically employed in aggression studies in Drosophila [23,24], our multi-individual, multi-species paradigm allows examination of social behaviour in a context where multiple individuals from different species compete for shared resources or territory, and it also allows us to quantify choice behaviour, i.e. bias in aggression towards heterospecifics versus conspecifics. Male pupae were isolated in 16 × 100 mm borosilicate glass tubes containing 1.5 ml of standard food medium and aged individually for 3-4 days to prevent social conditioning or the formation of social dominance hierarchies prior to testing.

Three-day-old adult males were extracted under CO2 anaesthesia and marked on the thorax with a dab of white or blue acrylic paint (assigned randomly) for species identification during assay set-up and scoring. After painting, males were transferred to new isolation tubes containing 1.5 ml agarose-based, nutritionally deficient media (without cornmeal, yeast or sugar) and allowed to recover from handling and anaesthesia. The following day, two 4-5-day-old, socially naive adult males from each opponent group (a total of four males) were gently aspirated into one of the wells of a 12-well polystyrene plate (Thermo Fisher Scientific #130185) with a small cup in the middle containing food, representing the focal point of the contest (figure 2). All four males were introduced to the chamber at the same time to prevent a potential resident-intruder confound. All behavioural assays were set up and recorded within 0-2 Zeitgeber hours, i.e. the first two hours of the lights-on time in a 12:12 light-dark cycle. Aggression scoring The number of lunges against conspecific and heterospecific males was counted for a period of 30 min after the first lunge, for consistency with the scoring duration of aggression assays reported elsewhere [23,24]. The amount of time between the introduction of males to the aggression chamber and the first aggressive lunge was used as the measurement of delay to the onset of aggression, or latency to lunge. The latency to lunge was scored separately for the two directions of lunges, conspecific or heterospecific. Scoring was terminated after 1 h if no aggressive encounter was recorded during that period. Electronic supplementary material, figure S3 shows the proportion of aggression trials in which lunges were recorded. Aggressive behaviours were scored manually by two independent scorers using iMovie '09, version 8.0.6 (Apple Inc., Cupertino, CA, USA). The number of aggression trials is indicated in electronic supplementary material, figures S1 and S2. Phylogenetic distance We inferred evolutionary relationships (figure 1) and calculated pairwise genetic distances (figure 5) among species using the maximum-likelihood method based on the Tamura-Nei model [25] in MEGA7 [26]. We compared 3196 nucleotide positions from one mitochondrial gene (CoI) and two nuclear genes (Gpdh and kl2), for which sequence data were readily available on NCBI for all species included in the study. Although full genome sequences are available for D. melanogaster, D. simulans, D. pseudoobscura, D. persimilis and D. ananassae, there is relatively little nucleotide sequence data available for D. pallidosa and D. atripex. Nonetheless, the evolutionary relationships that we report herein are consistent with previous studies [27,28]. We downloaded sequences from NCBI of D. pallidosa (CoI, Accession no.: FJ795561; Gpdh, FJ795596; and kl2, FJ795633) and D. atripex (CoI, FJ795575; Gpdh, FJ795601; and kl2, FJ795643) and used these to BLAST the published genome sequences of the other five species included in the study. The phylogenetic tree with the highest log likelihood (−6435.05) is shown in figure 1. Initial trees for the heuristic search were obtained by applying the Neighbor-Join and BioNJ algorithms to a matrix of pairwise distances estimated with the maximum composite likelihood approach. A discrete Gamma distribution was used to model evolutionary rate differences among sites. All positions containing gaps were removed from the analysis. Branch lengths correspond to the number of substitutions per site.
Body-size estimation In many species, aggressiveness correlates with body size both within and between species, and smaller males are less likely to initiate and hold aggressive encounters [15]. Body length (mm) from anterior antennae to posterior abdomen of males from all opponent groups used in aggression assays was measured as a proxy for body size using ImageJ [29]. Statistical analyses We compared the number of lunges directed towards conspecifics versus heterospecifics with a negative binomial generalized linear model, as implemented in the MASS package in R version 3.3.2 (R Core Team 2016 [30]). We normalized the number of heterospecific lunges by dividing by two (rounded to the nearest whole number) because there were twice as many heterospecific opponent males as conspecific opponent males in the aggression arena (figure 2). Goodness-of-fit was assessed by a chi-square test of the residual deviance of the negative binomial model. To examine the direction and effect size of aggression bias, we calculated the ratio of mean lunge counts (RL), which is the ratio of the mean number of heterospecific lunges to the mean number of conspecific lunges. RL was obtained by exponentiating the regression coefficient (b) of the negative binomial model, as this coefficient equals the log-ratio of mean lunge counts. The negative binomial model was also used to estimate the 95% confidence intervals of RL, which are reported in figure 3. Significant differences between the distributions of heterospecific and conspecific lunge counts were determined by a z-test of the estimated regression coefficient and standard error from the negative binomial model. We ran separate regressions for each focal species in each species pair, for a total of 12 regressions (6 species pairs × 2 species in each pair). p-Values were corrected for multiple tests via the Benjamini-Hochberg method [31]. We assessed the relationship between aggression bias (RL) and genetic distance between opponent species via permutation analyses. Given the design of our aggression assay and the phylogenetic relatedness of the focal species, aggression biases among species pairs may not be independent. To account for this lack of independence among data points, we performed 10 000 permutations of the genetic distance versus RL relationships, i.e. the genetic distances and RL values for each species pair were randomly shuffled and resampled, and significance was assessed based on the probability distribution of the Spearman rank coefficient. We examined the relationship between the latency to lunge and the direction of the first lunge (conspecific versus heterospecific) via a two-way ANOVA, with species pair and direction of lunge as fixed effects. We examined the relationships between (i) body length and number of lunges from focal males and (ii) body length difference between opponent species and total number of heterospecific lunges via the Spearman correlation. All analyses were conducted in R version 3.3.2 (R Core Team 2016) or GraphPad Prism version 7 (GraphPad Software, Inc., La Jolla, CA, USA). Results We observed a significant distributional bias in the targets of aggression, i.e. lunges directed towards either conspecific or heterospecific opponent males, in seven out of twelve species-pair interactions (table 1). The behaviour of closely related species pairs contrasted with that of more distantly related species pairs.
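To make the statistical recipe above concrete before turning to the remaining results, the following is a minimal Python sketch of the RL estimate and the permutation test. This is a hypothetical translation of the authors' R workflow, not their code: statsmodels fixes the negative-binomial dispersion parameter alpha rather than estimating it as MASS::glm.nb does, and the toy counts are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# RL from a negative binomial regression on (hypothetical) lunge counts;
# x = 1 for lunges directed at heterospecifics, 0 for conspecifics.
y = np.array([12, 8, 15, 3, 9, 4, 1, 0, 2, 5])
x = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
fit = sm.GLM(y, sm.add_constant(x),
             family=sm.families.NegativeBinomial(alpha=1.0)).fit()
rl = np.exp(fit.params[1])          # exp(b) = ratio of mean lunge counts
z, p = fit.tvalues[1], fit.pvalues[1]
print(f"RL = {rl:.2f} (z = {z:.2f}, p = {p:.3f})")

# Permutation test for the genetic-distance vs RL relationship:
# shuffle the RL values 10 000 times and compare Spearman's rho.
def perm_spearman(distances, rl_values, n_perm=10_000):
    obs = spearmanr(distances, rl_values).correlation
    null = np.array([
        spearmanr(distances, rng.permutation(rl_values)).correlation
        for _ in range(n_perm)
    ])
    return obs, float(np.mean(np.abs(null) >= abs(obs)))
```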
Among closely related species pairs, heterospecifics and conspecifics were treated more or less equally (i.e. there was not a strong bias in the direction of aggression), as can be seen in the largely overlapping distributions of heterospecific and conspecific lunge counts (electronic supplementary material, figure S1). In addition, the ratios of mean lunge counts (RL; heterospecific : conspecific) in closely related species pairs hovered around values of one (figure 3; RL ≈ 1), indicating that heterospecifics and conspecifics were equally likely to be targeted by aggression. The only closely related species-pair interaction that showed a significant aggression bias was D. melanogaster paired with D. simulans (table 1), where D. melanogaster males were three times more likely to target heterospecifics than conspecifics (figure 3 and electronic supplementary material, figure S1; RL = 3.09). In contrast, among more distantly related species pairs, the distributions of heterospecific and conspecific lunges did not overlap (electronic supplementary material, figure S2), and the ratios of mean lunge counts were all less than one (figure 3; RL < 1), indicating strong conspecific aggression biases. These patterns of conspecific aggression bias were also reflected by the number of lunges per aggression trial (figure 4). In addition, species that were included in multiple species-pair interactions (i.e. D. ananassae, D. melanogaster and D. simulans) were not more or less aggressive overall than other species (figure 4). Rather, for species included in multiple species-pair interactions, the level and direction of aggression depended on the opponent species (table 1, figures 3 and 4). Further supporting the contrast between the aggressive behaviours of closely versus distantly related species pairs, there was a significant negative relationship between the genetic distance between competing species and the ratio of mean lunge counts (figure 5; Spearman r = −0.82, N = 10 000 permutations, p = 0.002). In other words, more distantly related species pairs were most aggressive towards conspecifics, whereas closely related species pairs treated conspecifics and heterospecifics with equal levels of aggression. In fact, males in the distantly related D. simulans-D. ananassae species pair displayed a high degree of tolerance for heterospecific opponents sharing the food cup but escalated quickly to high-intensity lunging when confronted by conspecific opponents (electronic supplementary material, video S1). Conversely, the intensity of aggression directed towards heterospecifics was greatest in closely related species pairs, such as D. simulans-D. melanogaster (electronic supplementary material, video S2). There was no significant difference in the latency to initiate aggression towards conspecifics or heterospecifics (electronic supplementary material, figure S4; two-way ANOVA, direction of lunge effect, F1,177 = 0.00057, p = 0.98; direction of lunge × species pair interaction, F5,177 = 1.515, p = 0.19). That is, males from either opponent group were equally likely to be targeted at the initial onset of aggression, when low-intensity encounters first escalate to high-intensity lunging.

These results suggest an opportunistic, non-selective tendency towards initiating an aggression sequence, followed by a species-specific strategy for selectively targeting subsequent aggressive behaviours. Body-size differences among opponents have been shown previously to influence male fly aggressiveness [15], but we found no significant association between average body size and number of aggressive lunges by a given species (electronic supplementary material, figure S5B; Spearman r = 0.015, N = 155, p = 0.85). Furthermore, the relative body-size difference between opponent species in a given fight showed no significant relationship to the number of heterospecific lunges (electronic supplementary material, figure S5C; Spearman r = −0.14, N = 75, p = 0.24). Discussion To our knowledge, this is the first study to demonstrate discriminatory aggression between species of Drosophila, i.e. the differential aggressive response of males towards conspecifics versus heterospecifics in multi-species social interactions, although aggression biases were mostly observed between distantly related species and not closely related species. While males of many species of Drosophila are known to be territorial [15,17,24], particularly in the lekking species that are endemic to Hawaii [32], previous work has only provided limited accounts of heterospecific interactions [19], and heterospecific aggression has never been explicitly quantified. We interpret the differential aggressive responses among closely versus distantly related species pairs as innate responses that are mediated by species recognition cues. Because all interacting individuals in this study were extracted as pupae and socially isolated as adults, with no direct contact with other males from either species, the biases in aggressive targeting are not likely to be learned behaviours. Several potential molecular mechanisms may underlie these behavioural responses to male-male encounters of different species. Recently, it has been reported that epigenetic mechanisms, such as DNA methylation, serve as an interface between the genome and the environment and can facilitate species-specific behavioural plasticity in the context of courtship by modulating aminergic function [33]. Thus, as a proximate mechanistic cause for the bias in aggressive targeting reported in the present study, octopaminergic systems may play a critical role in relaying species-specific chemosensory information [34] and facilitating species recognition and/or discrimination in the context of mixed-species aggressive interactions. Furthermore, the cues that stimulate the neural substrates of species recognition and subsequent aggressive targeting may incorporate pheromone cues, a mechanism that has previously been shown to mediate aggression among D. melanogaster conspecifics [35]. In other words, males of closely related species may treat each other as conspecifics simply because they smell alike, a case of mistaken identity (see below). Based solely on the data presented herein, we cannot evaluate the ultimate evolutionary causes of male aggression biases among Drosophila spp. Nonetheless, it is important to consider the potential ecological and evolutionary mechanisms that influence these patterns in order to provide a framework for future work.
A major outstanding question is whether these behavioural biases for aggression are due to ancestral states, where males treat closely related heterospecifics like conspecifics due to mistaken identity (i.e. falsely identifying a heterospecific opponent as a conspecific), or if aggression bias is influenced by current and ongoing interference competition. Previous studies in other animal species lend support to the mistaken identity hypothesis. In a meta-analysis of birds and fish, Peiman & Robinson [1] found that, among species that do not share resources, heterospecific aggression is greatest among closely related species. Similarly, in a separate meta-analysis of wood warbler birds, Losin et al. [36] found that, even among sympatric species, patterns of heterospecific aggression can largely be explained by shared ancestry. Thus, in many cases, heterospecific aggression may be an evolutionary artefact that originates from natural selection for conspecific aggression, which erodes over time following species divergence. In fact, it may be difficult to parse this non-adaptive cause of heterospecific aggression from the effects of interference competition between species. To overcome this challenge and potentially account for these confounding effects, Peiman & Robinson [1] suggest comparing levels of heterospecific aggression among allopatric species versus aggression among sympatric species. In the allopatric case, heterospecific aggression should be non-adaptive because species do not directly compete for resources. In comparison to allopatry, higher or lower levels of heterospecific aggression in sympatry could be attributed to the evolution of aggressive behaviours in response to interference competition among species. A further consideration in analyses of heterospecific aggression, and an important caveat to the interpretation of the results we report herein, is the degree to which species pairs directly interact in nature. All of the species pairs included in this study have large biogeographic ranges that overlap to varying degrees [20], and each species pair has similar food preferences [9,10,12]. Thus, it is possible that these species pairs compete in nature, and that direct interference competition between species influences the evolution of heterospecific aggression. However, to our knowledge direct interference competition among the six species pairs included in this study has never been documented in nature. Therefore, future work is required to address whether or not these species pairs directly compete for resources in nature. If, in fact, these species directly compete in nature, then it is interesting to note that our results follow the general predictions of the limiting similarity hypothesis [3], which states that the greatest degree of interference competition exists between closely related species. Further, these trends in male-male aggression behaviour predict that more distantly related species would coexist without posing significant costs in energy and time devoted to heterospecific aggression. We would also like to note that direct competition for mates (i.e. reproductive interference competition) among heterospecifics is another type of interference competition that could influence heterospecific aggression. Indeed, the intensity of reproductive competition among species has been shown to be a key factor that influences heterospecific aggression in other species [37]. 
We were not able to assess this effect in the present study because, while hybridization is known to occur in the laboratory among the closely related sibling species pairs [11], there is a lack of consensus as to whether or not hybridization occurs in nature for these species pairs [11,27,38,39]. On a related note, in our aggression trials we intermittently observed what looked like courtship behaviour between males of different species; e.g. D. simulans males exhibited courtship displays towards D. melanogaster males that consisted of single-wing extensions (electronic supplementary material, video S2). We did not score this behaviour because it is difficult to interpret the significance of these interactions in the context of male-male aggression. We did not observe escalation of this courtship-like behaviour to the point of attempted mounting; thus, it could be that the single-wing extensions constituted a form of aggressive interaction and not courtship. If these single-wing extensions were in fact misdirected courtship towards heterospecific males, this behaviour could further complicate heterospecific interactions and even induce reproductive interference if it occurs in nature. To address these questions of potential reproductive interference competition, future work should focus on comparisons among sibling species of Drosophila that are known to hybridize in nature, such as species in the subquinaria complex [40] or the yakuba, teissieri and santomea sister species [41,42], to examine the role of reproductive interference competition in influencing the evolution of heterospecific aggression. Data accessibility. Raw data from the scoring of aggression videos and representative video clips of aggression trials are included as electronic supplementary material.
2019-04-03T13:07:44.387Z
2018-10-08T00:00:00.000
{ "year": 2019, "sha1": "7912f7e2af2fbe60043a18b0b6a88e3809bc35f5", "oa_license": "CCBY", "oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.190069", "oa_status": "GOLD", "pdf_src": "RoyalSociety", "pdf_hash": "718ca13c63e5ee3146e9673abec1aa3f6cd5c956", "s2fieldsofstudy": [ "Biology", "Psychology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
261787526
pes2o/s2orc
v3-fos-license
Research on the Development of Beauty Brand Marketing Based on SWOT Analysis: Taking Florasis as an Example With the rapid development of the domestic economy, China's cosmetics industry has maintained a steady growth momentum, and domestic beauty brands such as Florasis have been embraced by a large consumer base. This paper conducts a SWOT analysis of Florasis's current situation and target consumers, in order to analyze Florasis's current marketing strategy in detail. It can be concluded that, with unique brand positioning and safe product ingredients, Florasis is good at using the advantages of network marketing and promotion to make up for its disadvantages in operation and management. By capturing the untapped market for oriental classical beauty makeup and using a marketing strategy of simultaneous online and offline promotion, including the star effect, IP co-branding, and KOL promotion, Florasis enhances its brand awareness and plans to open up overseas markets in the future.

Introduction In the next few years, China's cosmetics industry will continue to maintain stable growth, and the market size of China's cosmetics industry is expected to keep expanding. According to Binyan Li, by 2024 it is expected to reach $372.37 billion, representing a compound annual growth rate of 11.8 percent [1]. At the same time, benefiting from the rapid improvement of China's socio-economic level and consumers' growing demand for good appearance, more and more domestic beauty brands have emerged. Against the background of this strong development momentum in domestic beauty makeup, Florasis appeared and was favored by mass consumers for its exquisite packaging and generally acceptable prices. In this paper, the author analyzes the current status of Florasis, as well as the advantages and disadvantages of its current products and marketing strategies, so as to help Florasis enhance its brand awareness, establish a good brand image, and gain a foothold in China's beauty market.

Brand Background of Florasis After the United States, China is the world's second-largest consumer of cosmetics. However, 80% of the cosmetics market is foreign-owned, with domestic companies accounting for only 20% [2]. In this context, Florasis appeared. It is a makeup company founded in Hangzhou in 2017. "Florasis", which combines "Flora" and "Sis", means "Flower God". Florasis hopes that Chinese women will be as beautiful as Xizi, one of the four ancient beauties, whether they wear heavy or light makeup. Florasis takes the promotion of oriental culture as its mission and inherits many exquisite techniques such as micro-carving and relief carving [3]. Benefiting from China's economic development and the rapid growth of social platforms, the development momentum of China's beauty industry keeps improving. Florasis also stands out from its competitors with the help of its brand story and the use of Chinese elements, exquisite carvings, and patterns in its product styling [4].

Market Positioning Florasis is positioned as a mid-to-low-end makeup brand, aiming at the vacant 100-200 yuan price band for domestic cosmetics and thereby forming differentiated competition in pricing. This distinguishes it from national-brand makeup with an average price below 100 yuan and from European and American big-name makeup with an average price above 250 yuan.

User Positioning As shown in Figure 1, the target users of Florasis are mainly oriental women aged 19-34 who frequently use Taobao, Little Red Book, Jingdong, Weibo, and other apps in Guangdong, Beijing, Shanghai, Zhejiang, and other regions. Such customers have a certain purchasing power, frequently use social networks, and have a certain sense of identity and pride in their own traditional culture.

Strengths First, Florasis is good at network marketing. Brian Garda Muchardie and other researchers pointed out the importance of social media promotion, claiming that at least 95% of users are using social media [5]. Florasis is very good at attracting Generation Z with popular stars and top live streamers. Through the endorsement of celebrities and live streamers favored by Generation Z, Florasis enters the market with Internet terms familiar to young people, capturing the consumer psychology of Generation Z and stimulating consumption. Through online marketing such as live broadcasts and Key Opinion Leader (KOL) promotion, Florasis attracts fans of different ages, improves fan recognition, and increases online turnover.

Second, Florasis has a unique brand positioning. With the development of the economy and the improvement of overall national strength, people's sense of identity with their own culture has grown greatly, and they hope that more and more products can convey "Chinese beauty". Florasis has positioned the brand in the Chinese style since its establishment. It also takes the promotion of oriental and Chinese culture as its own responsibility, so as to stand out in the fierce competition in the beauty market.

Third, the ingredients of Florasis products are safe. The highlight of the Florasis brand lies in the ingredients and craftsmanship of its products. All products are based on flower and herbal essences, and the formulas are derived from ancient cosmetics. According to the skin characteristics and makeup needs of oriental women, flower essences and Chinese herbal medicine extracts are used as the core ingredients. Moreover, modern makeup research and development (R&D) and manufacturing processes are applied to create healthy, skin-nourishing makeup products suitable for oriental women. Florasis has firmly established the brand concept of "beauty with flowers" and deeply planted the image of healthy, safe, and natural products in the minds of the public. According to official information, Florasis copied skincare regimens of the Tang Palace: for instance, its eyebrow pencil is made with honeysuckle and fleece-flower root essence; its carved lipstick uses fine petals as raw materials based on an ancient lip-nourishing recipe; its loose powder is made of silk powder, peach blossom, camellia, and pearl powder; and its air cushion is based on "Yu Rong San", the secret recipe of Empress Dowager Cixi, as well as white water lily, peony, and camellia essence. Florasis is stringent in quality control. In terms of heavy metal standards, many international brands require less than 30 ppm for nickel, while Florasis requires less than 10 ppm. The good quality of the products earns the trust of consumers and drives sales growth.

Fourth, the packaging design of Florasis is delicate, making its product promotion particularly effective. Modern consumers are willing to pay for appearance. Florasis's product packaging design integrates the brand image with consumers' lifestyles [6]. The design involves Chinese elements such as auspicious clouds, and ethnic elements are also incorporated in limited-edition products. Exquisite product packaging and attractive lipstick reliefs make Florasis the first choice of many consumers.

Weaknesses Florasis relies too heavily on contract manufacturers ("foundries") for product production, and it has been questioned by the market because of problems at these manufacturers. The quality of its products is similar to that of other products on the market, making Florasis unable to create differentiation or meet the diverse needs of customers. Florasis relies on online channels, which makes it more like an online-celebrity makeup brand in terms of sales methods and positioning. It has high popularity online and low popularity offline. There are few offline sales channels and no dedicated counters for systematic sales. Moreover, Florasis, as a newly launched domestic brand, has some deficiencies in management and operation compared with mature international cosmetics brands, and it does not yet have many loyal users. With the same budget, consumers are more willing to choose international cosmetics first.

Opportunities First, the popularity of Chinese domestic cosmetics has increased in the international market, and the recognition of Chinese brands in overseas markets is getting higher and higher. According to Tmall Overseas data, during last year's Double 11, the turnover of domestic beauty products increased more than tenfold, ranking first among all export categories. Domestic beauty products have demonstrated a strong ability to attract investment. Among them, four cutting-edge domestic beauty brands, Florasis, Perfect Diary, Little Odin, and Mao Geping, have become the most eye-catching "dark horses". Florasis even topped the list of domestic products exported to overseas markets and became the biggest winner of Double 11. Chinese domestic cosmetics have thus gained a place in overseas markets.

Second, there are few beauty brands with the same style as Florasis. Compared with Perfect Diary, Florasis has its own unique look. Combined with traditional culture, it can attract a new generation of young people and give them a sense of cultural identity and cultural pride. At present, there is a large demand for beauty makeup, and the attractiveness of well-made domestic beauty products is much higher than that of cheap beauty products. Generation Z is also more willing to choose the "oriental classical beauty" style.

Threats Florasis lacks core product technology and is highly substitutable. It mainly relies on contract manufacturers and cannot produce directly by itself, so it has not formed its own supply chain advantages. As more and more beauty brands enter the beauty market, more and more companies want to expand their market share. Although Florasis has always been in a leading position domestically in terms of marketing strategy, more and more beauty brands have also begun to seize the domestic market and have hired professional teams to carry out comprehensive marketing.

Online Marketing In 2019, China's cosmetics market reached nearly 60 billion yuan, with its highest annual growth rate, and sales through online and offline channels are approaching $90 billion [7]. Online channels have therefore become an important sales channel for cosmetics. If online resources are used properly, more consumers can be attracted.

User Co-Creation In its early days, Florasis first created brand traffic through "user co-creation", and then developed products and built the brand around customers. Florasis looked for cooperative users in the early stage, then followed up on their feedback and stayed close to them to study their consumption preferences and habits. Finally, Florasis updates its product R&D accordingly. When a product is being developed, Florasis will first bring it to 60%-70% completion. It then selects users from the WeChat and Weibo platforms and sends them free samples. Users try the new products for free and post feedback for subsequent improvement and R&D.

Celebrity Endorsement Benefiting from today's "fan economy", Florasis promotes its products by cooperating with celebrities who are in line with the brand's temperament. It runs official accounts on several major online platforms, such as Weibo, Little Red Book, and TikTok, continuously produces customized content, and builds influence among users of different platforms. Florasis attaches great importance to celebrity endorsements, which can quickly increase its brand awareness. Florasis invites Ju Jingyi, Du Juan, Li Jiaqi, and other celebrities who fit the brand's tonality to endorse it. Additionally, Zhou Shen, a singer known for Chinese-style music, recorded the song "Florasis", which shares its name with the brand.

IP Co-Branding By co-branding with different brands, loyal customers of other brands can be attracted and become Florasis's potential users. Florasis focuses on "oriental culture" and applies excellent traditional culture to establish emotional resonance with consumers.

Florasis & Legend of Gaia They joined hands to appear at China International Fashion Week. Inspired by the famous Chinese allusion "Luo Shen Fu" and Miao ethnic silver ornaments, they jointly created the "Luo Shen Fu" high-end co-branded clothing, the Miao Impression high-end co-branded clothing, and a brand-new makeup gift box: the "Luo Shen Fu" gift box, showing the world the spiritual connotation of oriental beauty.

Florasis & Luzhou Laojiao Florasis teamed up with Luzhou Laojiao, one of the four oldest famous wine brands in China, to jointly launch the limited-edition "Florasis & Luzhou Laojiao · Peach Blossom Drunk" gift box.

Network KOL As Rongjuan Chen et al. said: "in the cosmetics industry, celebrities, influencers, and bloggers are gaining popularity on Chinese social media [8]." Therefore, Florasis cooperates with Key Opinion Leaders (KOLs) on multiple platforms, such as Little Red Book, Taobao, TikTok, and Weibo, to "plant grass" (seed purchase interest) among users with high-quality content, and sets up live broadcasts on various platforms. Short videos, a short and concise way of communication, better meet individual needs [9]. Florasis plans different emphases according to the tonality of each online platform. For example, bilibili focuses on resonance across multiple fields; there Florasis carries high-quality content and spreads brand culture in vertical niches such as Hanfu, the two-dimensional (anime) community, singing and dancing, and imitation makeup. The target consumer groups of online KOL marketing for cosmetics trust the Internet and are more likely to accept the opinions of KOLs [10]. The layout of Florasis's social platforms has two parts: head (top-tier) KOLs, and waist- and tail-level (mid-tier and long-tail) KOLs. The head KOLs are responsible for all-around recommendations; their main content directions are creating topics, professional evaluations, beauty tutorials, etc., with the purpose of improving brand recognition and credibility. Florasis mainly cooperates with waist KOLs on platforms such as TikTok and Little Red Book, and with tail KOLs on Weibo. The purpose is to take over the popularity generated by the head KOLs, spread the long-tail effect, and continuously amplify the brand's voice (Figure 2). Figure 2: The KOL positioning distribution map.

User Interaction Florasis holds offline activities to interact deeply with users, co-create the brand and products, and focus on user experience and brand services. The first is Florasis's 10,000-person experience plan. It has held many offline experience meetings in Hangzhou, Shanghai, Beijing, and other cities, inviting consumers to try new products and selecting the products with 90% satisfaction for continued production. Florasis also holds offline user gatherings, such as sculpture-art and lipstick-making workshops, to narrow the distance between consumers and the brand.

Investment in Offline Billboards Florasis placed advertisements on large screens in the business districts of cities such as Hangzhou, Beijing, and Shanghai. It also placed advertisements on large screens in trend-setting overseas cities such as Tokyo and New York to enhance its global popularity and make loyal domestic users proud of the brand. Meanwhile, Florasis also launched a large number of elevator-media TVC advertisements, entering residential communities to enhance brand power and penetrate consumers' everyday life scenes. Finally, Florasis has made some attempts at overseas expansion, such as appearing at International Fashion Week and integrating Miao elements and Chinese mythology into Chinese clothing.

Conclusion In conclusion, the market positioning of Florasis is relatively clear, which is key to increasing its reach and conversion of customers and to reducing the brand's communication costs with customers. Florasis has established a relatively systematic KOL system and is building its own brand community and public-opinion presence by connecting with customers. The vast majority of Florasis's sales come from various live broadcasts, of which Li Jiaqi's contribution accounts for a high proportion. Now that livestream selling and traffic-driven media have become the norm, customers will choose Florasis because of the differentiated value of the brand.
2023-09-14T15:02:01.930Z
2023-09-13T00:00:00.000
{ "year": 2023, "sha1": "6076537ef0b241e35f957ec6ed927c7a35b39a17", "oa_license": "CCBY", "oa_url": "https://aemps.ewapublishing.org/media/205aec7d3c124db0980cf59c41b592c0.marked_gQ0oAlE.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "4ab3b8ba9febc1dc1f174541ba075d18a02fce01", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
12365903
pes2o/s2orc
v3-fos-license
Does Invasion Success Reflect Superior Cognitive Ability? A Case Study of Two Congeneric Lizard Species (Lampropholis, Scincidae) A species' intelligence may reliably predict its invasive potential. If this is true, then we might expect invasive species to be better at learning novel tasks than non-invasive congeners. To test this hypothesis, we exposed two sympatric species of Australian scincid lizards, Lampropholis delicata (invasive) and L. guichenoti (non-invasive) to standardized maze-learning tasks. Both species rapidly decreased the time they needed to find a food reward, but latencies were always higher for L. delicata than L. guichenoti. More detailed analysis showed that neither species actually learned the position of the food reward; they were as likely to turn the wrong way at the end of the study as at the beginning. Instead, their times decreased because they spent less time immobile in later trials; and L. guichenoti arrived at the reward sooner because they exhibited “freezing” (immobility) less than L. delicata. Hence, our data confirm that the species differ in their performance in this standardized test, but neither the decreasing time to find the reward, nor the interspecific disparity in those times, are reflective of cognitive abilities. Behavioural differences may well explain why one species is invasive and one is not, but those differences do not necessarily involve cognitive ability. Introduction Species invasions are one of the largest threats to native species worldwide, but our ability to predict invasion success remains weak. To become a successful invader, a species must pass through several discrete stages of the introduction process [1]. Rather than being just a random subset of taxa, invasive species are thought to have behavioural traits that improve their chances of advancing through each of these stages [2]. Some behavioural traits may enhance a species' invasiveness across all introductory stages, whereas other traits may facilitate one stage of the invasion process but impair success at another stage. For example, ''bolder'' individuals may be more likely to enter a transport vector and be shipped to a new location (transport stage) but might also have a greater risk of being detected at biosecurity checkpoints (introduction stage) [2,3]. Recently, researchers have used a variety of species-level behavioural traits to predict species' invasiveness [2]. One trait that may reliably predict species invasiveness is intelligence. We generally consider an animal to be intelligent if it is able to 1) rapidly solve novel challenges that are ecologically relevant to that species, 2) solve a single relevant challenge using multiple strategies, and 3) solve several different types of relevant challenges [4]. Once an animal arrives in a new location (i.e. after transport and introduction), it must still overcome a variety of challenges. In order to successfully establish a new population, the new arrivals must identify and avoid novel predators, locate potential mates, obtain resources, and react appropriately to unfamiliar climatic regimes. Organisms that quickly modify their behaviours to meet these challenges are more likely to survive and reproduce in their new environment and therefore, we might expect intelligence to correlate positively with invasiveness [5,6]. 
Indeed, across all four classes of terrestrial vertebrates, studies that use relative brain size as a proxy for intelligence have reported that large-brained species are more successful invaders than are small-brained species [5,7,8]. Nonetheless, brain size is only a rough guide to intelligence, and thus these studies do not provide any direct evidence that successful invaders are better at solving novel challenges than are unsuccessful invaders. To test the hypothesis that intelligence predicts the success of species introductions, we need to specifically measure and compare the learning ability of species that have established invasive populations against that of related but non-invasive species. The congeneric scincid lizards Lampropholis delicata and L. guichenoti provide an ideal model system with which to test this hypothesis. These species are both small (~35-55 mm adult snout-vent length [SVL]), oviparous (average clutch size ~3 eggs), ground-dwelling, generalist insectivores that are broadly sympatric in suburban habitats throughout southeastern Australia [3,9,10]. Yet, despite these similarities, only L. delicata has successfully established populations outside of its native range (e.g. Lord Howe Island, the Hawaiian Islands and New Zealand), whereas L. guichenoti has not [3 and references therein]. However, these species co-occur in each of the areas identified as source regions for L. delicata introductions. Likewise, both species have been intercepted during biosecurity checks of goods entering New Zealand [3,11,12], suggesting that both taxa have had introduction opportunities but that only L. delicata has capitalised on these opportunities to become invasive. Could differences in cognitive ability explain the apparent disparity in the ability of the two species to become established after being introduced to a new location? Certainly, behavioural traits differ between these species. For example, Chapple et al. [3] found that L. delicata was more exploratory than L. guichenoti, plausibly allowing L. delicata to locate critical resources and mates in novel habitats (and thus increasing L. delicata's likelihood of establishing invasive populations). However, more exploratory individuals would also encounter dangers (e.g. predators and environmental hazards) more frequently than less exploratory individuals, increasing their chances of injury and death [13]. Thus, exploratory behaviour alone seems unlikely to explain why L. delicata has been more successful than L. guichenoti at establishing populations in new locations. Superior cognitive abilities (e.g. learning and memory) might have helped L. delicata to meet the challenges associated with translocation to a new habitat [5]. For example, L. delicata may remember the location of profitable resource patches and sensory cues associated with predators more rapidly than L. guichenoti, allowing L. delicata to maximize its chances of obtaining critical resources while reducing encounter rates with predators. Here, we test this hypothesis using a simple Y-maze with a food reward to explore whether or not L. delicata is able to solve a novel cognitive challenge more rapidly than L. guichenoti. Because intelligence is thought to be advantageous during species introductions, we predicted a priori that the invasive L. delicata would significantly outperform its non-invasive congener L. guichenoti in the maze task.

Such differences in cognitive ability may explain disparities in the capacity of these two species to establish populations in novel environments. Ethics Statement The University of Sydney Animal Ethics Committee approved all of the procedures described in this manuscript (approval #: L04/8-2010/3/5449). All animals were released upon completion of the study. Collection and Housing We collected 16 adult L. guichenoti (8 adult females and 8 adult males) and 16 adult L. delicata (8 adult females and 8 adult males) in suburban Sydney, New South Wales, Australia. Lizards were housed in individual plastic containers (200 mm × 140 mm × 70 mm) lined with paper towel. Each lizard was provided with a shelter (100 mm long × 230 mm diameter) and ad libitum access to water. We withheld food from all lizards for 48 hr prior to the first maze trial in order to standardize hunger levels. Maze Task As our novel cognitive challenge, we used a simple Y-maze with a food reward to assess learning rates in L. delicata and L. guichenoti. The use of simple T- and Y-mazes to test learning ability is a standard technique in studies of reptilian cognition [14]. Mazes were constructed from opaque U-channelled electrical conduit fitted with clear plastic tops (Tripac Distribution PTY LTD, Sydney, Australia). Two arms of each maze contained a wooden platform with a single plastic feeding well. The remaining arm in each maze was empty and designated as the starting location for all trials. There was also a central decision point used to determine turning errors. As lizards use visual cues during foraging [15], the two reward-containing arms of each maze were painted with different colours (blue and orange) and patterns (striped and solid) to provide local cues. Each colour-pattern combination was replicated and reversed to account for colour bias and side preference (figure 1). (Figure 1. Y-mazes used to assess learning ability in L. delicata and L. guichenoti. Each maze had three arms of equal length. Two maze arms were painted with contrasting colours (orange and blue) and patterns (stripes and solids) to provide visual cues. All colour-pattern combinations were replicated and reversed in our study (four mazes total). Two arms contained feeding wells (A and B) whereas the third arm was empty and designated as the starting position for each trial (C). There was also a central decision point (D) we used to determine turning errors. doi:10.1371/journal.pone.0086271.g001.) Also, the mazes remained in the same location within the room used for experimental testing, providing the lizards with the opportunity to navigate the mazes using positional cues external to the maze environment. Testing occurred over 15 days, with each lizard completing one learning trial per day. A learning trial consisted of a lizard locating a food reward (a cricket, Acheta domestica) placed in one of the two feeding wells. For each lizard, the food reward was randomly assigned to a feeding well (left or right) prior to the start of the trial, and the location of the reward remained constant throughout the testing period. Trials began after lizards were introduced into the empty arm of the maze. We recorded the time it took for each lizard to locate the food reward (to a maximum of 30 minutes) and the direction the lizard turned after it first entered the decision point. Lizards that did not locate the food within 30 minutes were placed next to the correct feeding well and offered a cricket with forceps.
All behavioural trials were run at 27 °C [16] and recorded using overhead surveillance cameras (Aucom Security, Bundoora, Australia). Cues and Navigation Lizards are able to sense prey by using tongue-flicking to sample chemical cues [17]. We ensured that cricket scent was present in both food wells, to eliminate scent as a long-distance direct cue to the location of the food reward (i.e. before the lizard entered one arm of the maze rather than another). We were not interested in which learning mechanisms (e.g. visual discrimination versus spatial memory) lizards used to locate their food rewards. This is because the ability to efficiently learn a novel behavioural task is likely to benefit translocated species, regardless of the mechanism they use to accomplish the task. For example, a lizard that is able to reliably locate a thermally optimal basking site may benefit by increasing the amount of time it spends at its preferred body temperature while reducing the amount of time it spends searching for basking sites in thermally sub-optimal microhabitats [18]; this is true regardless of the mechanism the lizard uses to locate the site. Therefore, we provided lizards with several different types of cues (described above) that they could use to navigate the maze and locate the food reward. Analyses In a maze, we recognise that an animal is capable of learning if it 1) decreases its time to locate a reward, and 2) progressively takes a more direct route to the reward over successive trials. Therefore, we used two criteria for learning: a decrease in latency to the reward across the 15 trials, and an increase in the probability of taking the most direct route to the reward across the 15 trials (described in more detail below). Because we took repeated measurements on the same individuals over time (i.e. the assumption of independence between observations was not met) and we were interested in the average responses of both species in the maze rather than subject-specific responses, we used generalized estimating equations (GEE) to determine whether or not L. delicata and L. guichenoti were capable of maze learning and whether or not the two species differed in learning rate [19]. Model 1: We were interested in whether or not both species decreased the amount of time it took them to locate the food reward across the 15 trials. We used GEE with a Gamma error distribution (log link function) and an autoregressive AR(1) working correlation matrix to assess the relationship between mean latency to the reward (outcome variable) and species and trial number (explanatory variables). We were also interested in whether or not L. delicata and L. guichenoti behaved differently in the maze. When using latency to reach a reward as an outcome variable, consistent interspecific differences in behaviour can influence individual performance scores. For example, neophobia may cause individuals to remain motionless for long periods of time in early trials, when the maze environment is unfamiliar [20]. A more exploratory species (such as L. delicata) may move through a maze more readily and locate the reward faster than a less exploratory species (such as L. guichenoti). Such behavioural differences might lead us to infer interspecific disparities in learning abilities that do not actually exist (a type I error). To control for differences in the species' behaviour in the maze, we calculated the amount of time a lizard spent immobile in each trial, and included this measurement as an explanatory variable in the above GEE model.
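As a sketch of the Model 1 specification, here is a hypothetical Python translation using statsmodels (the authors fitted their models in SPSS); the long-format data frame, variable names, and toy data generator are invented for illustration, and GEE's default robust (sandwich) standard errors match the robust variance estimator described for Model 2 below.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Autoregressive
from statsmodels.genmod.families import Gamma
from statsmodels.genmod.families.links import Log

rng = np.random.default_rng(1)

# Hypothetical long-format data: 8 lizards x 15 trials, one row per trial.
df = pd.DataFrame({
    "lizard": np.repeat(np.arange(8), 15),
    "trial": np.tile(np.arange(1, 16), 8),
    "species": np.repeat([0, 1], 60),        # 0 = guichenoti, 1 = delicata
    "immobile": np.abs(rng.normal(300, 100, 120)),
})
# Toy latency (s): decreases over trials, grows with time spent immobile.
df["latency"] = np.exp(6 - 0.05 * df["trial"]) + df["immobile"]

# Model 1: Gamma GEE with log link and AR(1) working correlation.
model = sm.GEE.from_formula(
    "latency ~ species + trial + immobile",
    groups="lizard", data=df, time=df["trial"].values,
    family=Gamma(link=Log()), cov_struct=Autoregressive(),
)
print(model.fit().summary())
# Model 2 would swap in a Binomial family (logit link) with the 0/1
# first-turn direction as the outcome variable.
```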
Model 2: We were also interested in whether or not L. delicata and L. guichenoti progressively took a more direct route to the reward across the 15 trials. We assessed whether lizards turned towards the reward (i.e. took the most direct route; scored as 1) or away from the reward (i.e. deviated from the most direct route; scored as 0) when they first entered the decision point. If a lizard is learning the location of the food reward, then its probability of turning towards the reward should increase over the 15 trials. We used GEE with a Binomial error distribution (logit link function) and an AR(1) working correlation matrix to assess the relationship between direction of first turn (outcome variable) and species and trial number (explanatory variables). For both models, we chose AR(1) working correlation matrices because in an AR(1) process the outcome variable depends linearly on its own previous values [21]; similarly, we expect that a lizard's performance in the Y-maze is a function of its previous maze experience and that lizard performance will improve as the number of maze trials increases. We used a robust variance estimator, which reduces the risk of confounding effects if the empirical working correlation matrix deviates from the theoretically assumed one. Gender has the potential to influence maze learning in other taxa [22], so we initially included sex as a variable in both models. However, sex was not a significant predictor of maze performance (P > 0.05 in all cases), so we omitted it from both models. For models 1 and 2, we included a species × trial interaction term. This interaction did not significantly predict lizard performance, so we omitted it from both models. Corrected quasilikelihood under the independence model criterion (QICC) verified that the best models included only main effects. Our data did not include any missing cases. For all statistical analyses, we used SPSS v.21 and an alpha level of 0.05.

Results

Model 1: Latency to the Reward
In our initial model, there was a significant effect of trial number (P < 0.01) on latency to the reward, with the amount of time it took to locate the reward decreasing across the 15 trials in both species (figure 2). The lack of a significant species × trial interaction suggests that L. delicata and L. guichenoti decreased their mean latency times at the same rate. A significant species effect on latency to the reward (P < 0.01) reflects the fact that L. guichenoti had lower mean latency times across all 15 trials than did L. delicata (table 1; figure 2). However, the two species also differed in the amount of time that they spent immobile (P < 0.01), with L. delicata spending more time immobile than L. guichenoti across all 15 trials (figure 3a). Inclusion of "time spent immobile" in our model eliminated the significant difference between the two species in terms of latency to the reward (P = 0.17). That is, the reason that L. delicata took longer than L. guichenoti to reach the reward was simply that it spent a longer proportion of the trial immobile (table 2; figure 3b).

Model 2: Direction of First Turn
We did not find a significant effect of trial number (P = 0.99) or species (P = 0.054) on direction of first turn, suggesting that neither L. delicata nor L. guichenoti increased its probability of turning in the "correct" direction (i.e. toward the reward) as the trials progressed (table 3; figure 4).
Discussion
Our two indicators of maze-learning ability were 1) a decrease in the amount of time it takes to locate the food reward, and 2) a progressively more direct route to the food reward over successive trials. Both L. delicata and L. guichenoti decreased their mean latency to the reward across 15 trials in our Y-mazes (figure 2), meeting our first criterion for maze learning. Moreover, we did not find a species × trial interaction, suggesting that both species decreased the time it took them to solve the maze at the same rate. Lampropholis guichenoti had lower mean latency times across all 15 trials (figure 2), suggesting that they consistently outperformed L. delicata; however, once we considered the time both species spent immobile (i.e. compared latency times when the lizards were actually moving), there was no significant interspecific difference in mean latencies (figure 3b). The interspecific differences we recorded in maze behaviour (i.e. time spent immobile) may be due to habitat preferences. Not only are L. delicata more commonly found in vegetated rather than open areas [3], but Chapple et al. [3] reported that they spend more time hiding than L. guichenoti when provided with the opportunity to seek shelter in a laboratory experiment. Thus, L. delicata may be less "comfortable" in the open setting of the maze than are L. guichenoti. Modification of the maze environment might allow a more accurate measure of L. delicata's ability to complete the task. We predict that inclusion of refuges or covered ledges within the maze would reduce the time that L. delicata spends immobile and increase the species' overall performance, perhaps to the point that this species performs as well as its bolder congener, L. guichenoti. If animals that prefer open habitats perform better in mazes, this result could have important implications for the use of mazes in intraspecific as well as interspecific comparisons of cognitive ability. Juvenile reptiles often spend more time in covered habitats than do conspecific adults [23]. Also, pregnant female reptiles often prefer more sheltered habitats than do non-pregnant females or males [24]. This ecological heterogeneity means that variables such as age, sex and reproductive status may influence the performance of individuals in a maze task. In any cognitive test, contextual variables can influence the performance of individuals [25]. Maze designs that take into account a species' preference for sheltered or open habitats may drastically improve the performance of animals in a maze and reduce the magnitude of type I errors in comparative cognition studies. Our next goal was to determine whether or not L. delicata and L. guichenoti decreased their mean latency times by taking a more direct route to the food reward. Neither species increased its probability of turning in the correct direction toward the reward across the 15 trials, and we did not find a species effect on direction of the first turn (figure 4). This result suggests that neither L. delicata nor L. guichenoti progressively took a shorter route to the food reward across the 15 trials, and thus our second criterion for maze learning was not met. If a species steadily decreases its time to locate a reward within a maze without taking a more direct route, it suggests that the species has habituated to the maze environment and simply searches the maze more rapidly over successive trials [20].
In keeping with this interpretation, our data show a rapid decrease in the amount of time spent immobile over successive trials (figure 3a). Based on these data, neither L. delicata nor L. guichenoti was capable of learning the position of a food reward within a Y-maze. Instead, both species appeared to locate the reward by performing increasingly rapid serial searches of the maze environment. However, our results do not preclude the possibility that L. delicata and L. guichenoti are capable of learning the location of a food reward within a Y-maze; given a greater number of trials, they may have learned to reliably locate the reward. Species that can rapidly solve novel ecological challenges are predicted to have an advantage during introduction events [6,8]. Although neither L. delicata nor L. guichenoti learned the location of a food source within 15 trials, this result does not mean that intelligence is irrelevant to establishment success. We tested learning ability in only a single behavioural context, which is unlikely to provide an accurate representation of species' intelligence [4,26]. Further cognitive testing using multiple experimental frameworks may reveal that L. delicata is capable of solving a variety of ecologically relevant challenges more efficiently than L. guichenoti can. Such experiments would give a more comprehensive comparison of cognition between these two species, and a more robust test of the hypothesis that intelligence facilitates invasion success. Another factor that may influence learning ability, and therefore invasive potential, is age. As animals age, cognitive functions such as learning and memory deteriorate [27]. Indeed, using the same Y-mazes and a similar experimental design, we found that hatchling three-lined skinks (Bassiana duperreyi) decreased their time to locate a food reward and took a more direct route to the reward across 15 trials, providing strong evidence that B. duperreyi learned the location of the food reward [14]. It would be interesting to see whether hatchling L. delicata and L. guichenoti display a similar capacity for learning to that of hatchling B. duperreyi. Interspecific disparities in learning ability may also be more apparent in younger age classes. These hypotheses could easily be tested using hatchling L. delicata and L. guichenoti and the same experimental design we used in the present study. If greater learning ability does correlate positively with establishment success, then the age of individuals at the time of introduction may be a strong predictor of invasive potential. Finally, learning ability might not have a significant influence on establishment success at all. Rather, alternative pre-existing behavioural traits (such as aggressiveness, habitat preference and flexibility) may predict establishment success more accurately. For example, Chapple et al. [3] suggest that, relative to L. guichenoti, L. delicata's exploratory nature and propensity to hide increase its propagule pressure during introductions, which in turn increases the probability that L. delicata will successfully establish in new environments. If other behavioural traits can predict establishment success more effectively than intelligence, does this invalidate previous studies suggesting that relatively large brains provide translocated individuals with a selective advantage [6-8]? Not necessarily.
A larger brain may have numerous functional consequences, including non-cognitive advantages in sensory and motor functions, and any such attribute could plausibly enhance invasion success [7]. The putative role of intelligence as a predictor of invasion success therefore remains unclear. To understand this relationship, we will need to explore cognitive disparities among successful and unsuccessful invaders in a variety of different contexts.
Differentiating instruction in the pre-service science education classroom

In today's increasingly diverse classrooms, instructors must be prepared to use a variety of teaching methods in an attempt to reach all students. Students enter the classroom with a vast array of experiences, backgrounds, and other diversity markers that can impact their perceptions and skill level in science courses. This is of particular importance in the field of teacher training, where students need not only to study innovative teaching techniques but also to authentically experience a variety of such techniques. The purpose of this paper is to demystify differentiated instruction in the science methods classroom and provide strategies for assessment, materials access, and activities. Throughout instruction and assessment, students are given voice, the opportunity to provide input regarding what and how they learn, and choice, the opportunity to opt for activities and assessments they find interesting, stimulating, or matched to their learning preferences. Finally, differentiation is a philosophy of education that not only acknowledges but celebrates diversity and differences in students. As we prepare these students to become teachers, it is imperative that they not only discuss instructional strategies but authentically experience them as well. Differentiation gives the professor the ability to be the "guide on the side" and provides students with a wider range of discussion and demonstration of common goals.

Introduction
In today's increasingly diverse classrooms, instructors must be prepared to use a variety of teaching methods in an attempt to reach all students. This is of particular importance in the field of teacher training, where students must witness innovation in their core classes and engage in educational experiences that are varied, stimulating, and hands-on. The differentiation of teaching and assessment methods offers students a variety of options by which they provide evidence of their learning [1]. Differentiated instruction (DI), an approach in which teachers plan lessons strategically to address the needs of individual students, is rooted in the belief that any group of learners is full of diversity and that effective teachers prepare for individual differences [2]. Throughout instruction and assessment, students are given voice, the opportunity to provide input regarding what and how they learn, and choice, the opportunity to opt for activities and assessments they find interesting, stimulating, or matched to their learning preferences. Finally, differentiation is a philosophy of education that not only acknowledges but celebrates diversity and differences in the students we teach [3]; and, "given the changing demographics of United States schools, by ignoring issues of diversity, teachers only serve to perpetuate injustice" [4, p. 173]. This paper looks to simplify the process of differentiation for instructors of the sciences, specifically those whose students may be preservice teachers, by providing simplified instructions on identifying when to differentiate, offering guidelines for how to differentiate easily, and supplying a variety of examples showing how other instructors have used DI in their courses.

Methods
In this paper, we used the method of systematic literature review and case studies of pedagogical practices. This allowed us to provide a comprehensive definition of Differentiated Instruction, describe various DI strategies and identify contexts for their application.
We will discuss DI in relation to rigorous teaching standards, according to Bloom's Revised Taxonomy of Learning (Bloom's), and to student interests and preferred learning styles, as discussed in Gardner's Multiple Intelligences Theory (MI) [1]. In this manuscript, rigor/rigorous will refer to activities and assessments that ask students to perform on the top three tiers of Bloom's: Analyzing, Evaluating, and Creating. In contrast, foundation/foundational will refer to the three lower tiers of Bloom's: Remembering, Understanding, and Applying. Similarly, we will use the term MI to indicate the use of instructional strategies that appeal to a variety of learning styles or preferences, such as Visual/Spatial, Linguistic, and Bodily/Kinesthetic. Both of these dimensions will be discussed in relation to student readiness, skill and concept attainment, materials access and learning activities, and assessment of learning.

Results
Due to the very nature of differentiated instruction, which allows students the opportunity to play an active role in their own education, DI is able to meet the needs of every student, even in widely diverse classrooms [5]. Allowing students choice regularly leads to a sense of ownership in their learning and increased motivation toward academic tasks, as well as engendering a sense of trust between the teacher and the student [6]. One area of instruction that can be particularly challenging is working with English Language Learners (ELLs) in a fully or primarily English-language setting. Arnold [7] asserts that DI methods of teaching are impactful for use with ELL students in their vocabulary acquisition. An additional concern is that "English Learners are not a homogenous population [in terms of] language proficiency(ies), cultural backgrounds, prior schooling, and knowledge and skills" [5, p. 1]. This indicates that while differentiation is appropriate for addressing differences between English Learners and native English speakers, it is also necessary to differentiate educational experiences among EL students. DI can also be vital in a classroom setting with students of mixed academic abilities, when "differentiation occurs both downwards for remediation, but also upwards for the extension of those learners who show academic promise" [8, p. 283]. While some may consider the benefits of ability grouping to facilitate accelerated pacing of instruction [9], doing so may lead to a one-size-fits-all academic experience rather than an environment in which "learners are recognised in relation to their difference, rather than solely as a communalised grade cohort" [8, p. 283]. Additionally, the continued practice of pull-out programs for students with special needs perpetuates deficit thinking and ignores the benefits of mixed-ability classrooms [4]. When teachers also consider the intersectionality of factors that impact students' academic experiences, the argument for DI becomes even stronger. Take, for example, a student who is both academically gifted and lives in poverty: this student may experience a reality of being "twice oppressed" [3, p. 774], just as might a student who is academically gifted and linguistically diverse. By identifying a student's multiple dimensions of diversity, teachers are able to allow students to pursue lines of investigation that spark inquiry and higher-order thinking skills, and promote 21st-century skills such as communication, collaboration, and critical thinking [3,9]. Researchers do, however, caution that DI should be used thoughtfully and appropriately.
One caution concerns balancing the workload placed on instructors [10]. Medveš [11] cautions that differentiation not be used as "tracking in disguise", as was described of educational reform efforts in post-World War II Yugoslavia, which attempted to offer choices to students but ultimately placed learners into predefined work-readiness or college-readiness tracks. Others caution that, when implementing DI, educational leaders should not assume all teachers are prepared for and skilled at writing and facilitating differentiated opportunities for their students. Frankling, Jarvis, and Bell advocate not only training but "appropriate, embedded support and direction" [2, p. 84] while teachers begin to apply their knowledge of differentiation to lesson planning and implementation.

Discussion
The following sections describe a set of DI strategies.

Student readiness
Most teachers understand that students come to class with varying background experiences, including those from their personal lives and those from previous science classes. Using a pre-assessment tool, a professor is able to determine the knowledge and skills with which students enter the classroom. With that information, students can be grouped according to their background knowledge, and each group can then be offered scaffolded tasks. In this way, all students engage with the requisite materials, but those who need more support can find it in a group of peers who also lack the background, while their more knowledgeable peers work on more rigorous tasks [9]. Additionally, student groups can be based on interest surveys, or students can self-select into groups based on content focus, learning needs, academic preferences, or outcome product [1]. One strategy we find useful is the Mini-Library assignment. This strategy requires teachers to collect various informational resources on a single teaching point. The resources should be diverse in format (e.g., academic texts and articles, political cartoons, children's books, fiction and nonfiction narratives, YouTube videos) to allow students to choose the format(s) they are most interested in accessing.

Skill and Concept Attainment
Just as there are a variety of ways to offer students additional background information before beginning a new unit of study, there are a variety of ways to teach students the skills and concepts addressed in the new unit. Given their continuous access to information and entertainment, students find it difficult to accept, digest, and retain content that is delivered without, at the very least, a modicum of variety. Methods for differentiating students' opportunities for learning include relatively simple ideas like using a Flipped Classroom model [12] or referring students to a more extensive Mini-Library on the subject of study. Additionally, students can be placed in heterogeneous or homogeneous groups depending on the purpose of the given task. While some researchers indicate that homogeneous grouping is preferred for product/outcome activities [9], others argue that heterogeneous grouping is preferable because it raises the academic achievement of all students [13].

Materials Access and Learning Activities
Once students have been introduced to the skills and concepts they are learning in the unit of study, instructors offer activities that will help grow student knowledge. One way of doing this is to offer options from which students can choose the level of socialization required for their learning activity.
Another way to offer differentiation during the practice/enrichment phase of the learning process is to Jigsaw the materials, limiting what each person has access to. In this method, each student or small group is given only a portion of the information, materials, or prompts needed to complete a project. This forces collaboration, a 21st-century skill, allows each participant the opportunity to become an expert on the assigned portion of the project, and gives students more ownership of their learning.

Differentiated Assessments
The most critical activity in which teachers engage is the assessment of growth and learning in their students. Because the goal of assessment is to give students the opportunity to showcase their growth and learning, offering students voice and choice in how they showcase their knowledge [6] gives them the opportunity to work from their personal strengths, giving the instructor a clearer picture of actual knowledge and skill gained [14-16]. Offering options for assessments is the most accessible form of differentiated instruction and is often the first step taken by classroom teachers who are interested in updating their pedagogical practices. One of these strategies is offering a menu of choices from which students can choose. "Menus" can come in a variety of formats, for example, setting up options as courses in a meal; students are asked to choose one item from each course, and the number of courses depends on the number of skills that need assessing. Another way to offer options is through an Assignment Matrix, which we ask our students to create for their content area. A simple matrix looks like a Tic-Tac-Toe grid and asks students to complete any three assignments in a straight line: down, across, or diagonal. The assessment our students complete includes the six levels of Bloom's on the x-axis and six to eight learning-style preferences on the y-axis. In each associated cell is an activity that corresponds to that Bloom's level and MI style. Students are then instructed to complete six activities, making sure that no two activities fall on the same x or y line. The benefits of this style of differentiation include that students are able to choose activities that match their interests and strengths, but are also asked to work outside their comfort zone for a portion of the assessment.

Conclusion
Although differentiating classroom instruction can seem an overwhelming undertaking for teaching professors, the benefits of providing students with more voice and choice in their learning can lead to more student ownership of learning. Differentiation gives the professor the ability to be the "guide on the side" and provides the professor, as well as the students, with a wider range of discussion and demonstration of common goals/objectives.
Physiological and Biochemical Evaluation of Fomesafen Toxicity in Female Albino Wistar Rats

Fomesafen is widely used as a herbicide for weed control. It has both foliar and soil activity and mostly controls broadleaf weeds. Fomesafen is labeled for post-emergence applications to soybeans, peanuts, and rice; although bronzing or burning of soybean leaf tissue is evident after application, yield is rarely affected. The present study was designed to evaluate the effect of repeated exposure to Fomesafen (herbicide), administered by oral gavage, on the blood biochemistry of female Wistar rats. The study highlights the various changes in the biochemical parameters of female Wistar rats over repeated oral (gavage) exposure to Fomesafen. Under the conditions of this study, the repeated oral administration of 'Fomesafen technical' to female Wistar rats at the dosage level of 50 mg/kg b.wt. for 90 consecutive days did not induce any observable toxic effects or alterations in blood biochemistry parameters when compared to the corresponding control group of animals.

Introduction
The use of herbicides is increasing in worldwide crop production. The value of the worldwide herbicide market grew by 39% between 2002 and 2011 and is projected to grow by another 11% by 2016 (Philips McDougall, 2013). Herbicides are being rapidly adopted in developing countries that face shortages of hand-weeding labor and the need to raise crop yields (Zhang, 2003). Improved weed control with herbicides has the potential to greatly improve crop yields in many developing countries in the near future (Masthan et al., 1989). Increased herbicide use promotes fertilizer use, which leads to even greater yield increases (Manda, 2011). Research has shown that, if enough hand weeding is done at the optimal times, crop yields are not reduced by weed competition (Prasad et al., 2008). In reality, crop fields are seldom adequately weeded by hand; weeding is tedious and time consuming, and laborers are not always available when needed (De Datta and Barker, 1997). Weeding is often done late, causing drastic losses in yield (Rashid et al., 2012). The use of herbicides has gained impetus from the general rise in farm wages as a consequence of overall economic growth and growth in non-farm employment opportunities, particularly in Asia; adequate non-chemical controls for weeds are not available, and herbicide use is increasing dramatically as a result of rising opportunity costs of labor across the developing world (Pingali and Gerpacio, 1997).
Herbicide use is increasing in many countries where tillage and flooding for weed control are being reduced in order to conserve natural resources: soil, water and energy. Selective herbicides kill certain targets while leaving the desired crop relatively unharmed; some of these act by interfering with the growth of the weed and are often based on plant hormones. Herbicides used to clear waste ground are non-selective and kill all plant material with which they come into contact. Some plants produce natural herbicides, such as the genus Juglans (walnuts). Herbicides are widely used in agriculture and in landscape turf management. They are applied in total vegetation control (TVC) programs for maintenance of highways and railroads, and smaller quantities are used in forestry, pasture systems, and management of areas set aside as wildlife habitat. Herbicides have been alleged to cause a variety of health effects ranging from skin rashes to death. The pathway of attack can arise from improper application resulting in direct contact with field workers, inhalation of aerial sprays, food consumption, and contact with residual soil contamination. Herbicides can also be transported via surface runoff to contaminate distant surface waters, providing another pathway of ingestion when those surface waters are extracted for drinking. Some herbicides decompose rapidly in soils, while other types have more persistent characteristics with longer environmental half-lives. In Asia, particularly in the Philippines, the proportion of rice farmers using herbicides increased from 14% in 1966 to 61% in 1974 (De Datta and Barker, 1997). Today, 96-98% of Philippine rice farmers use herbicides (Marsh, 2009). A recent study determined that, with increased labor cost, herbicide application in rice fields is superior to manual weeding even at the lowest weed density, by $US 25-54 per ha; at the highest weed density and highest labor cost, herbicide application is approximately 80% (about $US 200 per ha) more profitable than hand weeding (Beltran et al., 2012). In Bangladesh, the loss in rice yield in farmers' fields as a result of poor weed control has been determined to be 43-51% (Rashid et al., 2012). The yield gap between herbicide use and hand weeding is as high as 1 metric t/ha, with 30% of farmers losing in excess of 500 kg/ha in the absence of herbicides.

Trends of herbicide consumption in the world and its expenditure
Annual usage of herbicides in the world was about 4000 million pounds in 1953, increasing to nearly 121,000 million pounds at the end of 2013 (WAP, 2014). Since then, a 15-24% increment has occurred at the end of each five-year period (Fig. 3). The herbicide industry is also quite significant in dollar terms: annual expenditures by users of herbicide totaled about $US 33 billion in 1953 and $US 998 at the end of 2013 (Fig. 4).
It is clear from the figure that there is a sharply increasing trend in herbicide consumption, which in turn drives increasing market expenditure on herbicides. By the end of 2025, herbicide consumption is expected to increase by 150,000 million pounds, at a cost of around $US 2000.

The present study was designed by dividing 60 female Wistar rats into six groups (G1-G6). Blood collection was done under light anesthesia (CO2) through the retro-orbital sinuses, and the biochemical parameters Glucose (mg/dl), Serum Glutamate Oxaloacetate Transferase (U/L), Serum Glutamate Pyruvate Transferase (U/L), Blood Urea Nitrogen (mg/dl), Serum Alkaline Phosphatase (U/L), Total Protein (g/dl), Sodium (mEq/L), Potassium (mEq/L), Chloride (mEq/L) and Cholinesterase (U/L) were studied using a Beckman Coulter AU480 clinical chemistry autoanalyser system.

Animal selection
Female Wistar rats aged 5-8 weeks, with body weights in the range 100-140 g, were selected for use in this study because of the availability of comprehensive background data relating to pathological and clinical parameters at this laboratory; the rat is also widely used as a species to predict the toxicity of test items in humans and larger animals.

Animal identification and acclimatization
Animal identification was done with the help of marking ink. Each cage was tagged with an appropriate label describing the study number, study name, dose level, group name, animal number, sex of the animal, date of initiation of the experiment, date of dosing and date of completion of the experiment. The animals were acclimatized before initiation of dosing; during the experimental period they were housed in the animal house, with husbandry carried out under good environmental conditions.

Environmental conditions and maintenance of animals
The experimental room was monitored for temperature, humidity, light intensity and air changes. The room temperature was maintained at 22±3 °C with 50-60% relative humidity. The room was ventilated at a rate of approximately 15 air changes per hour, and lighting was controlled to give 12 hours of artificial light (8 a.m.-8 p.m.) each day. Randomly selected animals were caged in groups of five according to sex in polypropylene rat cages fitted with wire-mesh tops and containing autoclaved clean corncob bedding. A sample of the bedding material was analyzed for microbiological and chemical contaminants on a routine basis (Chauhan et al., 2015); there were no known contaminants in the bedding material. Animals were fed sterilized standard pellet feed (Amrut Feeds Ltd.), available ad libitum. The quality of the feed was regularly monitored at the NABL-accredited laboratory of Shriram Institute for Industrial Research; there were no known contaminants in the feed at levels that would have the potential to influence the outcome of this study.

Drinking water
Filtered drinking water was available ad libitum to the experimental animals through polypropylene bottles fitted with nozzles. The quality of the water was regularly monitored at the NABL-accredited laboratory of Shriram Institute for Industrial Research. There were no known contaminants in the water at levels that would have the potential to influence the outcome of this study.
Animal welfare
All animals were handled with due regard for their welfare and under conditions in accordance with the standard operating procedures, in compliance with the regulations of the Committee for the Purpose of Control and Supervision of Experiments on Animals (CPCSEA), Govt. of India. Room sanitation was done on a routine basis; the floor and work tops of the experimental room were swept and mopped with a disinfectant solution (D-125/D-256).

Acute oral toxicity study
In the assessment and evaluation of the toxic characteristics of a test item, determination of acute oral toxicity in Wistar rats is usually a stepwise procedure. This study was hence performed to assess the acute oral toxicity of 'Fomesafen Technical' in the Wistar rat. A study at the dose of 2000 mg/kg b.wt. was conducted using three female rats (nulliparous and non-pregnant), as per the recommendation of the guideline (OECD No. 423). A single oral gavage dose was administered to the animals with the help of a cannula attached to a syringe. The animals were fasted overnight prior to dosing.

Repeated oral exposure study
A total of 60 females were selected and randomly distributed into six groups of 10 animals/group. At the commencement of the study, the weight variation of the animals used was minimal and did not exceed ±20% of the mean weight of each group. Four groups of 10 female rats were administered the test item 'Fomesafen technical' at dose levels of 0, 50, 100 and 250 mg/kg b.wt., and two additional recovery groups of 10 females each, at dose levels of 0 and 100 mg/kg b.wt., were administered the test item by the oral route, over a period of 90 days. For blood collection, rats were deeply anesthetized by exposure to CO2; the depth of anesthesia was assured by constriction of the pupils as well as by simple sensory tests, such as the absence of eye blinking when the eyelid was touched and the absence of foot withdrawal when the foot was pinched. Blood was collected from the orbital sinuses for interim evaluation of blood biochemistry parameters; for terminal sacrifice, the thoracic cavity was opened and whole blood was collected in EDTA vacutainer tubes via the abdominal aorta. The biochemical parameters Glucose (mg/dl), Serum Glutamate Oxaloacetate Transferase (U/L), Serum Glutamate Pyruvate Transferase (U/L), Blood Urea Nitrogen (mg/dl), Serum Alkaline Phosphatase (U/L), Total Protein (g/dl), Sodium (mEq/L), Potassium (mEq/L), Chloride (mEq/L) and Cholinesterase (U/L) were studied using a Beckman Coulter AU480 clinical chemistry autoanalyser system.

Dose preparation
Different doses were prepared in corn oil in calibrated volumetric flasks at the dose levels of 50 mg/kg b.wt., 100 mg/kg b.wt. and 250 mg/kg b.wt. for the low, intermediate and high dose groups respectively, and 100 mg/kg b.wt. for the recovery intermediate dose group. Doses were prepared freshly prior to dosing. A dose volume of 10 ml/kg body weight was maintained for each rat, and all rats were dosed by gavage using a cannula attached to a syringe.

Statistical analysis
All statistical analyses were done using Minitab 16.0. Standard errors and one-way ANOVA were calculated for the given data.
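As an aside, the dose-preparation arithmetic and the group comparison described above are simple to sanity-check in code. The sketch below is a hypothetical illustration only (the body weight and group values are invented, and scipy is used in place of Minitab); it is not part of the study itself.

```python
# Hypothetical sanity check of the gavage dosing arithmetic and a one-way
# ANOVA across dose groups. All numbers are illustrative placeholders.
from scipy import stats

def dose_mg_and_volume_ml(body_weight_g, dose_mg_per_kg, volume_ml_per_kg=10.0):
    """Delivered dose (mg) and gavage volume (ml) at a fixed ml/kg volume."""
    bw_kg = body_weight_g / 1000.0
    return dose_mg_per_kg * bw_kg, volume_ml_per_kg * bw_kg

dose_mg, vol_ml = dose_mg_and_volume_ml(120, 100)  # a 120 g rat, 100 mg/kg group
print(dose_mg, vol_ml)  # -> 12.0 mg delivered in 1.2 ml of corn-oil preparation

# One-way ANOVA on a biochemical parameter (e.g. SGPT, U/L) across groups;
# the group values here are made up for the example.
g1 = [32, 35, 31, 33, 34]  # control
g2 = [33, 36, 32, 35, 34]  # 50 mg/kg
g3 = [38, 41, 39, 40, 42]  # 100 mg/kg
f, p = stats.f_oneway(g1, g2, g3)
print(f"F = {f:.2f}, p = {p:.4f}")
```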
Results and Discussion
The acute study was performed before the initiation of the main study, i.e. the repeated exposure study with Fomesafen in female Wistar rats. In the acute study, animals exposed via the oral route showed no treatment-related toxic signs, symptoms or mortality at the dose level of 2000 mg/kg b.wt. Under the conditions of this study, no toxic signs, symptoms or mortality were observed in any of the animals at the maximum dose level of 2000 mg/kg b.wt. Hence, the LD50 of 'Fomesafen Technical' lies in the range >2000-5000 mg/kg b.wt., and it is categorized as 2000 mg/kg < LD50 < 5000 mg/kg (Category 5) as per the Globally Harmonized Classification System (GHS). Based on the observations of the acute study, a main study was designed with 60 female Wistar rats grouped into six groups (G1-G6): four groups of 10 female rats were administered the test item 'Fomesafen technical' at dose levels of 0, 50, 100 and 250 mg/kg b.wt., and two additional recovery groups of 10 females each, at dose levels of 0 and 100 mg/kg b.wt., were administered the test item by the oral route, over a period of 90 days. The analysis of biochemical parameters showed that all the parameters of the low dose group (G-2), intermediate dose group (G-3) and high dose group (G-4) were comparable to those of the control group (G-1) when evaluated on day 0 (pretest) and day 45 (interim) of the study, and all the parameters of the low dose group (G-2) and intermediate dose group (G-3) were comparable to their control counterparts when evaluated on day 91 (terminal sacrifice). However, a slight increase in SGOT and SGPT was noticed in the intermediate and high dose group animals at terminal sacrifice (day 91), and a slight increase in SGOT and SGPT was noticed in the recovery intermediate dose group at its terminal sacrifice (day 119). Reversibility of the toxic effects was seen in the recovery intermediate dose animals (G-6), as all their biochemical parameters were comparable to those of their recovery control counterparts (G-5) and fell within the accepted laboratory limits. In conclusion, our study highlights the changes in biochemical parameters of female Wistar rats over repeated oral (gavage) exposure to Fomesafen. Under the conditions of this study, the repeated oral administration of 'Fomesafen technical' to female Wistar rats at the dosage level of 50 mg/kg b.wt. for 90 consecutive days did not induce any observable toxic effects or alterations in blood biochemistry parameters when compared to the corresponding control group of animals.
A Complementary Scale of Biased Agonism for Agonists with Differing Maximal Responses

Compelling data in the literature from recent years leave no doubt about the pluridimensional nature of G protein-coupled receptor function and the fact that some ligands can couple with different efficacies to the multiple pathways through which a receptor can signal, a phenomenon most commonly known as functional selectivity or biased agonism. Nowadays, transduction coefficients (log(τ/KA)), based on the Black and Leff operational model of agonism, are widely used to calculate bias. Nevertheless, combining both affinity and efficacy in a single parameter can result in compounds showing a defined calculated bias of one pathway over another while displaying varying experimental bias preferences. In this paper, we present a novel scale (log(τ)) that attempts to give extra substance to different compound profiles in order to better classify compounds and quantify their bias. The efficacy-driven log(τ) scale is not proposed as an alternative to the affinity-and-efficacy-driven log(τ/KA) scale but as a complement in those situations where partial agonism is present. Both theoretical and practical approaches using μ-opioid receptor agonists are presented.

G protein-independent pathway: β-arrestin-2 recruitment assay
Chinese hamster ovary (CHO)-K1 cells engineered to co-express the ProLink (PK) tagged human μ-opioid receptor and the Enzyme Acceptor (EA) tagged β-arrestin-2, from DiscoverX, were used (93-0213C2). 5000 cells/well were seeded in 20 μl of PathHunter Cell Plating Reagent in 384-well plates. Twenty-four hours later, 5 μl of ligands (dissolved in Hanks' balanced salt solution (HBSS) containing 20 mM Hepes) were added to the plate. Cells were incubated for 90 min at 37 °C. 6 μl of detection reagent (PathHunter Detection Reagent) were then added and the incubation continued at room temperature for 60 min. Luminescence was recorded (integration time of 1 s) in a Tecan Infinite M1000 Pro reader.

G protein-dependent pathway: measurement of cAMP responses by Homogeneous Time Resolved Fluorescence
cAMP measurements on CHO-K1 cells that stably express the human μ-opioid receptor (Perkin Elmer ES-542-C) were performed using a system based on Homogeneous Time Resolved Fluorescence (HTRF). The HTRF cAMP kit from CisBio (62AM4PEJ) was used according to the manufacturer's recommendations. 2500 cells/well were seeded the day before the experiment in 10 μl of Opti-MEM (Gibco, 11058-021). On the following day, β-funaltrexamine (β-FNX; Sigma-Aldrich, O003) was prepared in Opti-MEM and cells were treated for 2 hours with 5 μl of one of several concentrations of β-FNX (0, 1, 3, 10, 30, 100 or 300 nM). After that time, cells were washed twice with 40 μl of Opti-MEM; 10 μl of Opti-MEM were finally added and cells were left for one hour at 37 °C. Opioid agonists were prepared in Opti-MEM with 3-isobutyl-1-methylxanthine (Sigma-Aldrich, I5879-5G) and forskolin (Tocris, 1099) at 0.5 mM and 7.5 μM respectively, and 10 μl were added to the cells. After 45 min at 37 °C the reaction was stopped by lysing the cells with a mixture of 10 μl of each of the HTRF detection reagents. Plates were incubated for an additional hour at room temperature and read at 665 nm/620 nm using a RubyStar plate reader (BMG LabTech). The conditions followed were as described in ref. 16.

Parameter estimation
Curve fitting was performed using nonlinear least squares regression.
The NLIN procedure of the SAS statistical package was applied (SAS/STAT 9.2; SAS Institute, Cary, NC, USA), with the Gauss iterative method employed to solve the nonlinear least squares problem. Equation 1 (this article) of the operational model of agonism [15] was used for affinity and efficacy parameter estimation. It is known that the operational model cannot be applied to fit a single effect/agonist concentration (E/[A]) curve because there is no single solution for the estimated parameters [12]. In this regard, two different fitting procedures, namely the receptor inactivation method and the comparative method, were followed depending on the experimental assay performed. For the G protein-dependent cAMP assay, the receptor inactivation method [17] was used: seven curves for each tested ligand were obtained by varying the concentration (0, 1, 3, 10, 30, 100 and 300 nM) of the irreversible antagonist β-FNX. Common operational Em, n and KA parameters were shared between curves, whereas a τ parameter was defined for each β-FNX concentration-dependent curve [15]. The τ parameter corresponding to the curve yielded in the absence of β-FNX was used for the biased agonism analysis. For the G protein-independent β-arrestin-2 recruitment assay the same compounds as in the cAMP assay were used. However, because of the absence of an appropriate irreversible antagonist for the β-arrestin-2 assay, an alternative method was necessary. The comparative method [18] was considered suitable because the tested compounds behave as partial agonists in the assay. In the comparative method, it is assumed that the maximal response (Emax) and the slope parameter (m) yielded by a full agonist through the Hill equation,

$$E = \frac{E_{\max}[A]^m}{A_{50}^m + [A]^m},$$

match, respectively, the operational parameters for the maximum response of the system (Em) and the slope (n). Once determined, Emax and m can be used as fixed values in Equation 1 (this article) of the operational model for the estimation of the KA and τ parameters of partial agonists. Damgo was used as the full agonist in the present study, and its curve data were fitted through the Hill equation. The Emax and m parameters of the Damgo curve were then used as fixed Em and n values in the fitting of the selected compounds under the operational model (Equation 1, this article). In all fitting procedures KA and τ were estimated as logarithms to better approximate the normality assumption [19]. The two scales compared in the present study are based on log(τ) and log(τ/KA) estimates. Parameter estimates for log(τ) and log(KA) and their corresponding standard errors were obtained from the nonlinear-regression curve fitting described above. Log(τ/KA) was estimated as log(τ) − log(KA). Standard errors for log(τ/KA) were calculated from the standard errors of log(τ) and log(KA) by including the correlation (r) between the two parameters, because they are not independent properties. Thus, representing log(τ) as x and log(KA) as y, the standard error (se) of log(τ/KA) was calculated as

$$se_{x-y} = \sqrt{se_x^2 + se_y^2 - 2\,r\,se_x\,se_y}.$$

The 95% confidence intervals of log(τ/KA) were calculated as

$$(x - y) \pm t_{0.025;\,\nu}\,se_{x-y},$$

where the value of ν for the degrees of freedom of the Student's t-distribution depends on whether the variances of x and y are statistically equal or different (F test).
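To make the fitting procedure concrete, the sketch below reproduces the comparative-method logic in Python with scipy rather than SAS NLIN: the operational model is fitted to a single hypothetical E/[A] curve with Em and n held fixed (here set, for illustration, to the Damgo-derived values 95.14 and 1.48 reported later for the β-arrestin assay), and the standard error of log(τ/KA) is then obtained with the correlated-difference formula above. The data points are simulated; this is an illustration of the method, not the authors' code.

```python
# Sketch of comparative-method fitting of the operational model,
# followed by error propagation to log(tau/KA). Data are simulated.
import numpy as np
from scipy.optimize import curve_fit

EM, N = 95.14, 1.48  # Em and n fixed from the full agonist's Hill fit

def operational(logA, logKA, logTau):
    A, KA, tau = 10.0**logA, 10.0**logKA, 10.0**logTau
    return EM * (tau * A)**N / ((A + KA)**N + (tau * A)**N)

logA = np.linspace(-10.0, -5.0, 9)  # log10 of agonist concentration (M)
rng = np.random.default_rng(1)
E = operational(logA, -7.0, 0.3) + rng.normal(0.0, 2.0, logA.size)  # fake curve

(logKA, logTau), cov = curve_fit(operational, logA, E, p0=(-7.0, 0.0))
se_KA, se_tau = np.sqrt(np.diag(cov))
r = cov[0, 1] / (se_KA * se_tau)  # correlation between the two estimates

log_t_over_KA = logTau - logKA    # log(tau/KA) = log(tau) - log(KA)
# standard error of a difference of correlated estimates, as in the formula above:
se_diff = np.sqrt(se_tau**2 + se_KA**2 - 2.0 * r * se_tau * se_KA)
print(f"log(tau/KA) = {log_t_over_KA:.2f} +/- {se_diff:.2f}")
```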
In the calculation of bias through both the ΔΔlog(τ)(G protein, β-arrestin) and ΔΔlog(τ/KA)(G protein, β-arrestin) scales, we conclude that there is no bias on one scale or the other when the corresponding confidence interval includes zero. However, inasmuch as a collection of compounds is evaluated, the issue of multiple testing appears and a corresponding correction is necessary. The issue of multiple testing was addressed by adjusting the significance level through Holm's method. To do this, we first transformed the 95% CI of each of the compounds into a p-value for a t-test with the null hypothesis μ = 0. The p-values for the selected compounds were then adjusted according to Holm's method. Afterwards, the 95% CIs were recalculated by adjusting the α value according to the relative position of the previously calculated p-value. This resulted in adjusted 95% CIs more prone to include the zero value, which parallels the conventional conservative process of multiple testing involving p-values (a lesser propensity to reject the null hypothesis). Owing to the statistical consistency of the two inference methods, confidence intervals and hypothesis testing produced the same conclusion (biased agonism or not) for each of the compounds.

Results and Discussion

The log(τ/KA) and log(τ) scales
The two scales for biased agonism we discuss herein are based on the operational model of agonism, presented in a seminal work by Black and Leff [15,20]. Under the operational model of agonism [15], τ is defined as τ = [RT]/KE, where [RT] represents the total receptor concentration and KE the concentration of agonist-receptor complex, [AR], yielding half the maximum possible effect, Em; in other words, the inverse of KE reflects the intrinsic efficacy of the AR complex. Thus, τ contains both tissue and ligand-receptor efficacy components. Moreover, KA is not a thermodynamic equilibrium dissociation constant but a conditional or functional constant. This is because KA does not correspond to an individual equilibrium step, the binding of the agonist to the inactive receptor conformation, but incorporates, in addition, the receptor conformational change associated with receptor activation. From a molecular perspective, the concept of receptor activation is present in both the τ and KA parameters. As has been shown [15], τ may reflect the binding of the transducer G protein to the receptor. More precisely, in the presence of GTP and GDP, receptor activation is proportional to the active state of the quaternary complex (AR*G-GDP), with R* indicating the active conformation of the receptor [21]. In addition, KA is a combined expression of the parameter values for the agonist binding to the bare receptor and the receptor conformational change from the inactive (R) to the active (R*) state [22-24]. The asymptotic maximum (Emax in Equation 2) and location (logEC50 in Equation 3) parameters allow for the quantification of E/[A] curve shape [20]. LogEC50 provides information about the potency of the agonist, and Emax reflects agonist efficacy. We see that logEC50, or its commonly used negative value, pEC50, includes the operational KA and τ parameters, whereas KA is not present in the definition of Emax. Kenakin et al. [12], combining the KA and τ parameters of the operational model [15], defined a parameter designated the transduction coefficient, log(τ/KA), which provides a one-parameter scale able to classify agonists acting through one receptor.
They demonstrated that this scale can be transferred between systems with differing receptor densities and, what is more, that a ratio of this parameter relative to a reference ligand provides normalization by taking into account the natural bias of the system, and so is useful for comparing experimental and physiological tissues. Precedents of the transduction coefficient can be found in publications by Ehlert [25-27], who used either the ε/KA ratio, with ε being intrinsic efficacy, or the τ/KA ratio.

Is the Δlog(τ/KA) scale sufficient for biased agonism description? Combining the log(τ/KA) and log(τ) scales: a theoretical example
As explained in the Appendix (Supplementary Material), both Δlog(τ) and Δlog(τ/KA) represent useful scales to classify ligands independently of receptor density. An initial look at both scales reveals that while the former only takes into account ligand operational efficacy, the latter balances this efficacy against ligand affinity; naturally, these two scales classify ligands differently, and the bias calculated from those scales differs as well. Because the Δlog(τ/KA) scale is currently used in a routine way, the proposal of a complementary scale invites justification. At this point it is worth comparing both scales with a theoretical example. Let us suppose a drug screening study involving two pathways that is aimed at identifying ligands with a positive bias toward Pathway 2 with respect to Pathway 1. Figure 1 shows the concentration-response curves for three ligands with agonistic properties acting through a given receptor in the two pathways; the values of τ and KA for each ligand in each pathway are displayed in Table 1. We have assumed that the ligands have the same operational affinities and efficacies in Pathway 1 and different operational affinities and efficacies in Pathway 2. A normalized value of 100 has been assumed for Em in both pathways. For a proper comparison between a collection of ligands within a single pathway and across pathways, a reference ligand must be defined; this allows for the cancellation of system effects. Assuming Ligand 1 as the reference ligand, the calculated parameters for both scales, Δlog(τ) and Δlog(τ/KA), for each pathway, and the bias of Pathway 2 relative to Pathway 1, ΔΔlog(τ) and ΔΔlog(τ/KA), are shown in Table 1. It can be seen that the two scales rank the ligands in a different order: for the Δlog(τ) scale the order is Ligand 2 > Ligand 1 > Ligand 3, while for the Δlog(τ/KA) scale the order is Ligand 3 > Ligand 2 > Ligand 1. Thus, taking Ligand 1 as the reference ligand whose bias is to be optimized, the Δlog(τ/KA) scale would indicate that both Ligand 3 and Ligand 2 are optimized, with Ligand 3 optimized to a larger extent than Ligand 2, while on the Δlog(τ) scale only Ligand 2 is optimized with respect to Ligand 1. Another practical output from the combination of the two scales comes from comparing the results obtained for Ligand 2 and Ligand 3 on both scales. While for Ligand 2 the two parameters ΔΔlog(τ/KA) and ΔΔlog(τ) are both positive, suggesting an improvement of the bias with respect to Ligand 1, a different situation is found for Ligand 3: for this agonist the two parameters show opposing results, due to an improvement in affinity but a worsening in efficacy versus Ligand 1.
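The arithmetic of the two scales in this example can be captured in a few lines of code. The following sketch is hypothetical (the numbers are placeholders, not the Table 1 entries) and simply illustrates the Δlog/ΔΔlog bookkeeping and the four-way classification developed later in the text.

```python
# Hypothetical sketch of the Delta-log / DeltaDelta-log bookkeeping.
import math

def dlog(value, reference):
    """Delta-log of a scale value (tau or tau/KA) relative to the reference ligand."""
    return math.log10(value) - math.log10(reference)

def ddlog(value_p2, ref_p2, value_p1, ref_p1):
    """Bias of Pathway 2 over Pathway 1: DDlog = Dlog(Pathway 2) - Dlog(Pathway 1)."""
    return dlog(value_p2, ref_p2) - dlog(value_p1, ref_p1)

# Placeholder tau and tau/KA values for a test ligand and the reference:
bias_tau = ddlog(value_p2=5.0, ref_p2=2.0, value_p1=1.0, ref_p1=1.0)   # DDlog(tau)
bias_tKA = ddlog(value_p2=5e7, ref_p2=2e8, value_p1=1e7, ref_p1=1e7)   # DDlog(tau/KA)

# The four situations of the decision diagram discussed further below:
if bias_tau > 0 and bias_tKA > 0:
    verdict = "bias improved on both scales"
elif bias_tKA > 0:
    verdict = "affinity-driven improvement only (DDlog(tau/KA) > 0, DDlog(tau) < 0)"
elif bias_tau > 0:
    verdict = "efficacy-driven improvement only (DDlog(tau) > 0, DDlog(tau/KA) < 0)"
else:
    verdict = "no bias improvement on either scale"
print(round(bias_tau, 2), round(bias_tKA, 2), verdict)
```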
Although at first sight these results seem contradictory, we understand that the two scales are complementary, each offering information not found in the other.

[Figure 1. A theoretical example of receptor activation through two different pathways. The left panel shows the concentration-response curves of three agonists for a given receptor acting through Pathway 1; the right panel shows the curves for the same agonists acting through Pathway 2. For the sake of clarity, the three agonists show the same effect in Pathway 1, while different behaviors are exerted through Pathway 2. Two concentrations are marked (C1 and C2). At both concentrations, all three ligands show the same response in Pathway 1. In Pathway 2, differences appear: at concentration C1 the effect observed for agonist 3 is larger than that observed with the other two ligands, whereas at concentration C2 the effect of agonist 3 remains greater than that of agonist 1 but is smaller than that observed for agonist 2.]

Parameter derivations from the operational model (Equation 1), in which it is assumed that Em and n are system-dependent parameters [15], show that the concentration at which the curves of two ligands in a given pathway cross is (Equation 6)

$$[A] = \frac{\tau_2 K_{A1} - \tau_1 K_{A2}}{\tau_1 - \tau_2},$$

where [A] is the ligand concentration, τ1 and τ2 the operational agonist efficacies of Ligand 1 and Ligand 2, respectively, and KA1 and KA2 the operational agonist equilibrium dissociation constants. By substituting the value of τ (τ = [RT]/KE) in Equation 6, we obtain Equation 7:

$$[A] = \frac{\dfrac{[R_T]}{K_{E2}}K_{A1} - \dfrac{[R_T]}{K_{E1}}K_{A2}}{\dfrac{[R_T]}{K_{E1}} - \dfrac{[R_T]}{K_{E2}}} = \frac{K_{E1}K_{A1} - K_{E2}K_{A2}}{K_{E2} - K_{E1}}.$$

The ligand concentration ([A]) at which both concentration-response curves cross therefore does not depend upon the total receptor concentration ([RT]), and it is constant over the whole range of receptor densities. Looking at the E/[A] curves for Pathway 2 in Fig. 1, we see that Ligand 3 shows a greater effect than Ligand 2 at concentrations below that at which the two concentration-response curves cross, in agreement with the Δlog(τ/KA) scale, while at concentrations above the curve crossing, Ligand 2 shows the greater effect, in agreement with the Δlog(τ) scale. In Fig. 1 we have marked two concentrations, C1 and C2, below and above the concentration at which the curves cross in Pathway 2. It can be seen that at C1 the effect of Ligand 3 is greater than that of Ligand 2, so the bias favors Ligand 3 over Ligand 2; the opposite is true at C2. A similar consideration is exemplified in Kenakin and Christopoulos, 2013 (Fig. 5 in the cited article) [22], where the authors describe how an agonist with a defined calculated bias of one pathway over the other, that is, a single value on the Δlog(τ/KA) scale, can show variable effective bias in vivo in tissues with differing receptor density. In their simulations the authors show that the agonist displays a clear bias throughout the full concentration range in the tissue with a high receptor density. However, in the tissue with low receptor density, the agonist exhibits a change in the preference for one pathway over the other at the ligand concentration at which the E/[A] curves for the two pathways cross. Therefore, for this particular ligand under these particular in vivo conditions, the Δlog(τ/KA) scale does not correctly reflect the experimental results.
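Equations 6 and 7 are straightforward to verify numerically. The short script below, with invented parameter values for Ligands 2 and 3, evaluates the operational model at three receptor densities: the crossing concentration stays the same, and at that concentration the two ligands produce equal effects, whatever the value of [RT].

```python
# Numerical check (values hypothetical) that the crossing concentration of
# Equations 6-7 does not depend on receptor density.
import numpy as np

def effect(A, KA, tau, Em=100.0, n=1.0):
    """Operational model effect for agonist concentration A."""
    return Em * (tau * A)**n / ((A + KA)**n + (tau * A)**n)

KA2, KE2 = 1e-7, 2.0  # Ligand 2: lower affinity, higher efficacy (hypothetical)
KA3, KE3 = 1e-8, 8.0  # Ligand 3: higher affinity, lower efficacy (hypothetical)

for RT in (0.5, 5.0, 50.0):  # widely differing receptor densities
    tau2, tau3 = RT / KE2, RT / KE3
    A_cross = (tau3 * KA2 - tau2 * KA3) / (tau2 - tau3)  # Equation 6 for Ligands 2, 3
    # A_cross is the same for every RT, and both ligands match in effect there:
    print(RT, A_cross, effect(A_cross, KA2, tau2), effect(A_cross, KA3, tau3))
```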
The approach presented here, a joint consideration of the Δlog(τ) and Δlog(τ/K_A) scales, aids in identifying experimentally observed differences, provides a finer-grained classification of compounds, and allows the calculation of a concentration value that determines the relationship between the two scales. Figure 2 shows a diagram for the analysis of agonist bias using the Δlog(τ) and Δlog(τ/K_A) scales calculated by fitting functional data to the Black and Leff operational model [15]. If all the agonists studied behave as full agonists in all the pathways analyzed, then the Δlog(τ/K_A) scale alone can be used. But if the agonists are not all full agonists, then both scales should be used, leading to four different situations. In the first situation, both scales show an improvement in the calculated bias (ΔΔlog(τ) and ΔΔlog(τ/K_A) > 0), meaning that bias has been optimized. In the second, where ΔΔlog(τ/K_A) > 0 but ΔΔlog(τ) < 0, only the ΔΔlog(τ/K_A) scale points to an improvement in the bias pursued. As the difference between the two scales resides in the K_A term of the ΔΔlog(τ/K_A) scale, we identify this bias improvement as affinity-driven, due to an increase in affinity of the ligand studied compared with the reference ligand. In the third scenario, where ΔΔlog(τ/K_A) < 0 but ΔΔlog(τ) > 0, only the ΔΔlog(τ) scale points to an improvement in bias; in this case we identify the bias improvement as efficacy-driven, due to an increase in efficacy (τ) of the ligand studied versus the reference. Finally, in the last situation both scales indicate that bias improvement has not been achieved (ΔΔlog(τ) and ΔΔlog(τ/K_A) < 0).

Table 2. Operational parameters (estimates ± standard errors) for the G protein-dependent pathway. Data obtained from concentration-response curves of µ-opioid agonists in the presence of various concentrations of the irreversible antagonist β-FNX, analyzed with the operational model of agonism (Fig. 3). Parameter estimates and standard errors of the operational parameters E_m, n, log(K_A) and log(τ) were produced by global fitting. Common E_m, n and K_A parameters were shared between curves, whereas a τ parameter was defined for each β-FNX concentration-dependent curve. In the table, log(τ) for a β-FNX concentration equal to 0 is shown. For buprenorphine, the fitting did not converge when E_m was included as a free parameter; thus, we set E_m equal to the mean of the values obtained for the other ligands (96.75) and kept it fixed in the fitting process. Log(τ/K_A) values and their standard errors were calculated from the estimated τ and K_A parameters (see Parameter estimation in Methods).

A practical example. Biased signaling has already been analyzed for the μ-opioid receptor [13,28] (see also [29] for a review). What is more, TRV130, a ligand described as a biased μ-opioid agonist favoring the G protein signaling pathway over that of β-arrestin, is already in Phase II clinical trials [30]. Bearing this in mind, we used the μ-opioid receptor to apply our proposal for bias calculation. When µ-opioid receptors couple to Gi/o subtypes they inhibit the production of cAMP, and they can also recruit β-arrestins. An HTRF (Cisbio) cAMP determination assay was used to determine the activity of this receptor on the G protein signaling pathway, while an enzyme complementation assay (DiscoverX) was used to determine its ability to recruit and signal through the β-arrestin pathway.
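Returning to the four sign combinations above, they reduce to a simple decision rule. The sketch below implements that classification; the function name and the sharp cutoff at zero are ours, and in practice the confidence interval of each ΔΔ estimate, not just its sign, should inform the call.

```python
def classify_bias(dd_log_tau_ka: float, dd_log_tau: float) -> str:
    """Classify a (ddlog(tau/KA), ddlog(tau)) pair into the four situations."""
    if dd_log_tau_ka > 0 and dd_log_tau > 0:
        return "bias optimized (both scales improve)"
    if dd_log_tau_ka > 0 and dd_log_tau < 0:
        return "affinity-driven improvement (tau/KA scale only)"
    if dd_log_tau_ka < 0 and dd_log_tau > 0:
        return "efficacy-driven improvement (tau scale only)"
    return "bias not optimized (both scales worsen)"

# The two non-reference ligands of the theoretical example land in
# different cells (values from the illustrative computation above):
print(classify_bias(+0.48, +0.48))   # Ligand 2: both positive -> optimized
print(classify_bias(+1.48, -0.52))   # Ligand 3: affinity-driven
```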
The ligands used in this study were: morphine and fentanyl, two opioids commonly used for pain relief; buprenorphine, classically classified as a partial agonist; and, finally, endomorphine-2 and TRV130, which are biased agonists for the μ-opioid receptor [13,31]. Figure 3 shows the results of the five μ-opioid receptor agonists in the cAMP determination assay. It is known [32] that the operational model cannot be satisfactorily fitted to a single experimental E/[A] curve. A solution to this problem can be reached by using the irreversible inactivation method [17] (see Parameter estimation in Methods), which produces a collection of experimental curves with lower maximal effects by decreasing receptor density. With this procedure, a single solution for each of the curves, with a particular τ and common E_m, n and K_A parameters, is obtained. We followed this approach for parameter determination in the G protein pathway. Results are shown in Table 2.

For the β-arrestin pathway (Fig. 4) we used a different experimental approach: the comparative method [18] (see Parameter estimation in Methods). In this method, it is assumed that the maximal response (E_max) and the slope parameter (m) obtained from a full agonist by fitting curve data with the Hill equation match, respectively, the operational E_m and n parameters and, once determined, can be used as fixed values for the estimation of the efficacy and affinity of partial agonists within the operational model. In this assay, we used Damgo as the full agonist. As all the other opioids in the assay behaved as partial agonists, their K_A and τ values could be directly calculated with the operational model by substituting the E_m and n parameters with the Damgo E_max and m values and keeping them fixed in the fitting process. Results are shown in Table 3. It is worth noting the discrepancies in K_A between the two pathways for each of the agonists (Tables 2 and 3). This is an acceptable result under the operational model of agonism, because K_A is a functional affinity of the agonist that includes the interaction of the activated receptor with the signaling protein, either the G protein or β-arrestin [22]. At this point it is worth mentioning, for the sake of correct data interpretation, that it is convenient to analyze all the studied pathways in the same cell line, to minimize any functional influence of receptor tagging or modification that needs to be performed.

Figure 4. μ-opioid agonist β-arrestin recruitment assay. The concentration-response curves for five different opioid agonists were determined and compared to the concentration-response curve of Damgo as the standard full agonist for this assay. Results were obtained in at least three independent experiments. In each experiment, data points were obtained in quadruplicate.

Table 3. Operational parameters (estimates ± standard errors) for the β-arrestin pathway. Data obtained from concentration-response curves of µ-opioid agonists using the comparative method [18] with Damgo as the full agonist (Fig. 4). The Hill equation was used for fitting the Damgo data. The values obtained for Damgo for maximal response (95.14) and slope parameter (1.48) were used for all ligands in the table as the E_m and n parameters in the operational model and kept fixed in the fitting process. Parameter estimates and standard errors of the operational parameters log(K_A) and log(τ) were produced by global fitting. Log(τ/K_A) values and their standard errors were calculated from the estimated τ and K_A parameters (see Parameter estimation in Methods).
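As an aside, the comparative-method fitting step described above can be sketched as follows: E_m and n are frozen at the full agonist's Hill-fit values and only log(K_A) and log(τ) are estimated for each partial agonist. The data points and starting values below are invented for illustration, and scipy is assumed to be available; the actual analysis used global fits to the experimental curves.

```python
import numpy as np
from scipy.optimize import curve_fit

# Operational model (Black & Leff): E = Em*tau^n*A^n / ((KA + A)^n + tau^n*A^n)
EM, N = 95.14, 1.48  # fixed at the full agonist's Hill-fit values (comparative method)

def operational_fixed(a, log_ka, log_tau):
    ka, tau = 10.0 ** log_ka, 10.0 ** log_tau
    return EM * (tau * a) ** N / ((ka + a) ** N + (tau * a) ** N)

# Invented concentration-response data for a hypothetical partial agonist (M)
conc = np.array([1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5])
effect = np.array([2.0, 9.0, 30.0, 55.0, 64.0, 66.0])

# Fit log(KA) and log(tau); estimating on log scales improves the normality
# of the parameter estimates, as noted in the text.
popt, pcov = curve_fit(operational_fixed, conc, effect, p0=[-7.0, 0.0])
log_ka, log_tau = popt
print(f"log(KA) = {log_ka:.2f}, log(tau) = {log_tau:.2f}, "
      f"log(tau/KA) = {log_tau - log_ka:.2f}")
```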
Unfortunately, analyzing all pathways in the same cell line is not always possible because, depending on the signaling pathway, different receptor or signaling protein constructs must be used [33]. This is particularly evident when more than two pathways are analyzed, as in the study by Thompson and colleagues [28], where bias was calculated for cAMP, GTPγS and pERK1/2 determinations using the wild-type μ-opioid receptor, whereas for other signaling pathways, such as β-arrestin-1, β-arrestin-2 or receptor internalization, an Rluc-tagged receptor was used. It is worth noting that concerns about receptor tagging have been extensively addressed in the literature. Barak and colleagues, back in 1997 [34], described a β2-adrenoceptor variant tagged with eGFP at its C-terminal part which showed ligand binding, second messenger stimulation, receptor phosphorylation and internalization properties closely resembling those of the wild-type receptor. In another example, Scherrer and colleagues compared an eGFP-tagged δ-opioid receptor with its wild-type counterpart, both transfected in HEK293 cells [35]. They showed that the binding of different opioid ligands remained the same between the two receptors and, more importantly, that there was no difference in the capacity of the deltorphin II agonist to stimulate the receptor as measured with a [35S]-GTPγS binding assay. A final example can be found in the orphan GPR17 receptor [36]. In this case, a label-free dynamic mass redistribution assay was used to compare the functional response of the wild-type receptor with that of an N-terminal hemagglutinin-tagged receptor, as well as with either a C-terminal Rluc-tagged or a C-terminal GFP2-tagged receptor. Also, the ability of the wild-type receptor to stimulate the cAMP response was compared with that of the C-terminal GFP2-tagged receptor. In all cases, the introduction of the corresponding tag had no effect on receptor functionality. Finally, the main concern when using different cells or differently tagged receptors is that system bias and observational bias may differ between the cell systems used. To address these issues and cancel out both system and observational bias, which can be expected to affect all agonists to the same extent, all bias factors are related to a common reference agonist [12,22]. In the present work, to calculate bias factors between the two studied pathways, morphine was selected as the reference agonist. The calculation of bias involves parameters (τ and K_A) that were estimated as logarithms for reasons of normality. Thus, bias evaluation implies the calculation of differences in logarithmic values and the estimation of their corresponding confidence intervals (see Parameter estimation in Methods). Bias estimates on the (G protein − β-arrestin) ΔΔlog(τ) and ΔΔlog(τ/K_A) scales for the five opioids used in this study are shown in Table 4 and represented in Fig. 5. As can be seen in the right panel of this figure, all the ligands show a positive bias compared with morphine on the ΔΔlog(τ/K_A) scale, though it is not statistically significant (zero is included in the confidence interval) in the case of TRV130 and endomorphine-2. On the contrary, when analyzing the data on the ΔΔlog(τ) scale (Fig. 5, left panel), a different picture is obtained. In this case, only TRV130 shows a clear positive bias favoring the G protein effect versus that of β-arrestin. Fentanyl, endomorphine-2 and buprenorphine show bias favoring the β-arrestin pathway, though it is not statistically significant in the last case.
DeWire and colleagues (2013) [14] reported a bias favoring the G protein pathway for TRV130 using relative intrinsic activities (RA_i), and because RA_i values can reduce to τ/K_A (when n = 1) [12], their data resemble the transduction coefficient scale (ΔΔlog(τ/K_A)) that we present here. Regarding endomorphine-2, some studies [13,37] have reported a bias favoring the β-arrestin pathway for this ligand. In these studies [13,37], τ values were estimated from the operational model using K_A values obtained from independent binding experiments. These results for endomorphine-2 favoring the β-arrestin pathway are in agreement with our results on the ΔΔlog(τ) scale, though in our fitting procedure τ and K_A are both estimated from the operational model. Quantitative pharmacology of signaling bias may offer a structure-function framework that can be useful for drug discovery purposes. An elegant study on the M2 muscarinic acetylcholine receptor, combining various approaches including mutation and molecular modeling, identified orthosteric and allosteric site mutations that contribute to ligand-selective signaling bias [38]. The authors suggested that the functional selectivity of some of the compounds might arise from a bitopic mechanism [38]. Other examples with similar methodological approaches can be cited, for example, studies focusing on the glucagon-like peptide-1 receptor [39] or the M1 muscarinic acetylcholine receptor [40]. Moreover, a detailed review on the functional analysis of receptor states can be found in ref. [21].

In the theoretical example, we determined the concentration at which the concentration-response curves of two agonists for a given receptor at a given signaling pathway cross each other, and we showed that this concentration does not depend on the total amount of receptor present. In Fig. 6 we have represented the concentration-response curves for morphine and buprenorphine in the cAMP inhibition assay in the presence or absence of β-FNX to illustrate this point. Visual inspection of Fig. 6 shows that the concentration-response curves of morphine and buprenorphine at the various β-FNX concentrations cross at similar concentration values (shown by blue dots), in agreement with the theoretical predictions (Equation 7). Applying Equation 6 to the data generated from these two agonists gives the following concentration values (critical concentrations) at which the curves cross: 80 nM, 102 nM, 124 nM, 90 nM, 82 nM, 71 nM and 88 nM, respectively, for each of the β-FNX pretreatment conditions. We see that the effect elicited by buprenorphine is always greater than that produced by morphine at concentrations lower than ~100 nM, whereas the opposite is true at concentrations above ~100 nM.

Table 4. Calculation of (G protein − β-arrestin) ΔΔlog(τ) and ΔΔlog(τ/K_A) bias. Raw data for log(τ) and log(τ/K_A) were taken from Tables 2 and 3. Morphine was taken as the reference compound. Parameter estimates ± standard errors are shown. The 95% confidence intervals (CI95%) for the ΔΔ estimates are shown in parentheses. Multiple testing was considered in the calculation of confidence intervals through Holm's method (see Parameter estimation in Methods). An asterisk (*) marks those CI95% for ΔΔ estimates that do not include zero and thus show statistically significant biased signaling.

Figure 6. μ-opioid agonist inhibition of forskolin-stimulated cAMP production assay.
The concentration-response curves for morphine and buprenorphine, determined in both the absence and presence (1, 3, 10, 30, 100, 300 nM) of the irreversible antagonist β-funaltrexamine, are represented jointly in the same graph. The concentrations at which the curves for the two agonists cross each other are marked. Results were obtained in at least three independent experiments. In each experiment, data points were obtained in quadruplicate.

Conclusions

Biased agonism is a hot topic in current pharmacological research, with known therapeutic implications. Accurate and standardized measurement of this property is fundamental to drug discovery and development. Currently, the most widely used scale is one based on log(τ/K_A). It has the advantage of combining efficacy and affinity properties in a single parameter, thus providing simplicity. However, in those situations in which different maximal responses are found, the log(τ/K_A) scale appears to be insufficient. In this regard, because the efficacy parameter τ is directly related to the maximal response achieved by an agonist, the log(τ) scale can complement the log(τ/K_A) scale in those cases that include ligands with different maximal responses. Of note, we have shown that the log(τ) scale fulfils the same requirement as the log(τ/K_A) scale, namely that the ratio of τ values for two ligands remains constant across receptor systems with varying receptor density. We have also shown that concentration plays a role in these cases, and that the decision of whether to base a biased agonism analysis on pure efficacy (the ΔΔlog(τ) scale) or on a combination of efficacy and affinity parameters (the ΔΔlog(τ/K_A) scale) depends on the experimental concentration window used. In this regard, the signs of the (ΔΔlog(τ/K_A), ΔΔlog(τ)) pairs indicate whether there is (+, +) or is not (−, −) an optimization of the bias in one pathway relative to the other, and also whether that optimization is mainly affinity- or efficacy-driven, (+, −) and (−, +), respectively. Finally, we have illustrated the application of the proposed methodology to the μ-opioid receptor scenario by considering the G protein and β-arrestin pathways and selected full and partial agonists.
Preface: Special Issue on Sustainable Territorial Management

David Rodríguez-Rodríguez 1,2,* and Javier Martínez-Vega 1

1 Institute of Economy, Geography and Demography, Spanish National Research Council (IEGD-CSIC), Associated Unit GEOLAB, Albasanz, 26-28, 28037 Madrid, Spain; javier.martinez@cchs.csic.es
2 European Topic Centre-University of Malaga, Andalucía Tech, University of Malaga, 29010 Malaga, Spain
* Correspondence: david.rodriguez@csic.es; Tel.: +34-916-022-322 or +34-951-953-102

Introduction

Human development has made remarkable social and economic progress possible for most of us [1,2], but it has also entailed a range of negative consequences for natural resources, local communities, and the economy at multiple scales. Soil sealing [3-5], erosion and land degradation [6], air and water pollution [7,8], forest fires [9], biodiversity homogenisation and loss [10], isolation and fragmentation of habitats [11,12], poverty [13], human migration [14] and health issues [15,16] are among the most common human-made impacts with a clear sustainability and spatial component. They occur almost everywhere in the territory, be it terrestrial [17], aerial [18] or marine [19], where there is human activity. Thus, achieving sustainable territorial management that combines healthy and prosperous societies with the long-term maintenance of biodiversity and productive ecosystem services [20] remains the biggest challenge of our modern world [21]. Simulation models of future land uses, under different scenarios of change, can help territorial managers in the decision-making process [22].

This Special Issue seeks to collect a coherent set of studies on techniques and experiences (case studies) aimed at increasing the environmental, social, economic and/or institutional sustainability of landscapes and seascapes from a range of geographic and socioeconomic contexts.

Highlights

Land use-land cover (LULC) changes towards intensive uses, especially towards artificial uses, are one of the main global threats to biodiversity conservation. Thus, LULC changes are considered a cornerstone of sustainable territorial management (Figure 1), and a number of articles in this Special Issue deal with this topic directly or indirectly. Geographically, ten case studies are presented, representing urban areas, rural areas (chiefly protected areas; PAs) and coastal areas from four countries in Europe and Asia. Izakovičová et al., 2018 [23] use an integrated approach to attain sustainable LULC management in an agricultural area of western Slovakia. They analyse drivers of LULC change and their impacts on the environment. They propose optimal land uses that account for interactions among the available natural capital, environmental conditions and long-term human needs in order to achieve socioeconomic development. They use multi-criteria analysis to guide managers' decision-making.
Forest fires also hamper sustainable territorial management and cause substantial environmental and socioeconomic losses. Viana-Soto et al., 2017 [24] assess the regeneration of vegetation in burned areas of the Mediterranean region (Iberian Peninsula). They seek to understand species' recovery dynamics in order to implement suitable restoration actions. Regeneration modelling was performed through multiple regressions, using ordinary least squares and geographically weighted regression. They measure fire severity through the composite burn index and a set of environmental variables, and they estimate the dynamics of regeneration through the Normalized Difference Vegetation Index (NDVI) obtained from Landsat images.

Furthermore, flooding is a persistent problem in coastal areas. Under scenarios of climate change, flooding events are expected to become more frequent and potentially more intense. This risk represents a potential threat to coastal communities, which depend to a large extent on coastal resources. Toubes et al., 2017 [25] develop a methodology for coastal flooding risk assessment based on an index comprising 16 hydro-geo-morphological, biophysical, human exposure and resilience indicators, with a specific focus on tourism. They assess the vulnerability to floods of 724 beaches in Galicia (northern Spain). Their results are useful for coastal adaptation and management.
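The regeneration study above tracks vegetation recovery through NDVI, which is computed per pixel from the red and near-infrared reflectance bands of each Landsat scene. The sketch below shows the standard formula with NumPy; the toy arrays and the epsilon guard against division by zero are ours, not details from Viana-Soto et al.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Values near +1 indicate dense green vegetation; values near 0 or below
    indicate bare soil, burned surfaces or water.
    """
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Toy 2x2 reflectance arrays standing in for Landsat NIR and red bands
nir = np.array([[0.45, 0.40], [0.10, 0.30]])
red = np.array([[0.08, 0.10], [0.09, 0.25]])
print(ndvi(nir, red).round(2))
```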
Knowing the value of the services provided by different ecosystems is essential for sustainable territorial management. Nevertheless, the standard Economic Accounts for Agriculture and Forestry do not measure the ecosystem services and intermediate products embedded in the final products, and they ignore private non-commercial intermediate products and the self-consumption of private amenities. Campos et al., 2017 [26] apply the Agroforestry Accounting System to simulate sustainable forestry of holm oak and cork oak in Dehesa de la Luz, a Mediterranean tree-grass ecosystem. The net value added is more than 2.3 times greater than the net value estimated using the standard accounts.

Hewitt & Macleod, 2017 [27] use an Environmental Decision Support System (EDSS) to support the management of land and freshwater resources in Scotland, UK, with multiple applications to environmental management. They design a structured participatory process to determine stakeholder requirements, establish principles to meet these requirements and test the prototypes. The specification resulting from this bottom-up process is a free EDSS that is spatially explicit and compatible with portable devices. This application, still under development, does not resemble most existing EDSSs. Its focus on adaptive, stakeholder-centred environmental management strategies based on outcomes offers an opportunity to make better use of these new technologies to aid decision-making processes.

The rapid growth of urban areas close to large metropolises causes negative impacts on natural resources. This Special Issue includes two case studies focused on LULC changes in urban areas. In the first one, Ishtiaque et al., 2017 [28] analyse the increasing urbanization of the Kathmandu Valley (Nepal), in the foothills of the Himalayas. They use four Landsat images from the years 1989, 1999, 2009 and 2016 to compare changes, and they relate LULC changes to a set of immediate causes and driving factors of those changes. They employ a pixel-based hybrid classification approach and analyse the LULC trajectories. The results show that the urban area expanded by 412% over the last three decades.

Cantergiani & Gómez Delgado, 2018 [29] develop AMEBA, a prototype of an exploratory, spatial, agent-based model that considers the main stakeholders involved in the urban development process (urban planners, developers and the population). It consists of three sub-models, one for each agent. The first two are based on a land use allocation technique; the last one, as well as their integration, follows an agent-based modelling approach. The authors describe the conceptualisation and performance of the sub-models that represent urban planners and developers, the agents responsible for officially expanding urban land and defining its spatial allocation. The prototype is tested in Corredor del Henares (an urban-industrial area in the Region of Madrid, central Spain), but it is flexible enough to be adapted to other study areas under different urban growth contexts.
PAs are also affected by LULC changes and other pressures from global change. Some processes, such as intensive recreational use, forest fires or the expansion of artificial areas inside and around them, jeopardise their environmental sustainability and effectiveness. Martínez-Vega et al., 2017 [30] analyse the LULC changes that took place between 1990 and 2006 in two Spanish national parks (NPs). They also simulate LULC changes between 2006 and 2030 through artificial neural networks, taking into account a business-as-usual scenario and a green scenario. Simulating the LULC changes expected in the coming decades under different scenarios is a strategic issue for preventive protected area planning and management. Finally, they perform a multi-temporal analysis of natural habitat fragmentation in each NP.

López & Pardo, 2018 [31] design an indicator system to monitor and assess the socioeconomic impacts of climate change on Sierra de Guadarrama NP (Spain) that could be used in other PAs in Spain and elsewhere. The indicators assess natural resource use, population change, economic activities and socio-political interactions. They use statistical sources and surveys according to the Driving forces-Pressure-State-Impact-Response framework.

As global biodiversity trends worsen, evaluating the environmental effectiveness of PAs becomes an urgent need in order to identify strengths and areas for improvement. Through a participatory process including PA managers and scientists, Rodríguez-Rodríguez et al., 2017 [32] refine the System for the Integrated Assessment of Protected Areas (SIAPA) in order to increase its legitimacy, credibility and salience to end users in Spain. They then test the optimised version of the SIAPA on two emblematic Spanish NPs: Ordesa y Monte Perdido NP and Sierra de Guadarrama NP. The results show that potential environmental effectiveness is moderate for Ordesa NP and low for Guadarrama NP, according to the indicators that could be evaluated. PA managers and scientists largely coincided in their ratings of the SIAPA's indicators and indices.

We hope that the methods developed, the results obtained and the discussions included in the above-mentioned papers will be useful for understanding the potential of data modelling techniques, supporting future research, raising awareness about the complex problems of the territory and providing robust knowledge upon which to base sustainable territorial management.
Figure 1. Simplified conceptual framework of the ten papers compiled in this Special Issue (picture from Google Earth). MCA: Multi-Criteria Analysis; DPSIR: Driving force-Pressure-State-Impact-Response; GWR: Geographically Weighted Regression; CBI: Composite Burn Index; NDVI: Normalized Difference Vegetation Index; AAS: Agroforestry Accounting System; LCM: Land Change Modeller; ANNs: Artificial Neural Networks; SIAPA: System for the Integrated Assessment of Protected Areas; EDSS: Environmental Decision Support System; ABMs-AMEBA: Agent-Based Models; CORINE Lcover: CORINE Land Cover; SIOSE: Geographic Information System on Land Use-Land Cover of Spain.
Do people with rheumatoid arthritis maintain their physical activity level at treatment onset over the first year of methotrexate therapy?

Abstract

Objectives: To describe how many people with RA reduce their baseline physical activity level over the first year of MTX treatment, and which factors predict this.

Methods: Data came from the Rheumatoid Arthritis Medication Study (RAMS), a prospective cohort of people with early RA starting MTX. Participants reported demographics and completed questionnaires at baseline and at 6 and 12 months, including reporting the number of days per week they performed ≥20 min of physical activity, coded as none, low (1-3 days) or high (4-7 days). The physical activity levels of participants over 12 months are described. Predictors of stopping physical activity were assessed using multivariable logistic regression.

Results: In total, 1468 participants were included [median (interquartile range) age 60 (50, 69) years; 957 (65.2%) women]. At baseline, the physical activity levels of the people with RA were: none = 408 (27.8%), low = 518 (35.3%) and high = 542 (36.9%). Eighty percent of participants maintained some physical activity or began physical activity between assessments (baseline to 6 months = 79.3%, 6 months to 12 months = 80.7%). In total, 24.1% of participants reduced their physical activity and 11.3% stopped performing physical activity between baseline and 6 months (6 months to 12 months: 22.6% and 10.2%, respectively). Baseline smoking, higher disability and greater socioeconomic deprivation were associated with stopping physical activity.

Conclusion: Many people with early RA were not performing physical activity when starting MTX, or stopped performing physical activity over the first year of treatment. These people may require interventions to stay active. These interventions need to be mindful of socioeconomic barriers to physical activity participation.

Rheumatology key messages
- Twenty-eight percent of people with RA performed no exercise when starting MTX.
- Ten percent of those exercising when starting MTX stopped over the first year.
- Socioeconomic deprivation predicted stopping exercise; interventions should be designed to mitigate socioeconomic barriers to participation.

Introduction

Physical activity (including exercise [1]) provides benefits for people with RA in terms of stamina, muscle strength, pain and function [2-5]. This has led EULAR to recommend physical activity for all people with inflammatory arthritis [6]. While evidence from the Netherlands suggests that physical activity is increasing in RA [7], many people with RA do not meet physical activity guidelines [e.g. those of EULAR [6] or the World Health Organization (WHO) [8]]. A pan-European cross-sectional study of 5235 people with RA from 21 countries reported that only 13.8% of participants performed physical activity three or more times per week, and that the majority of participants performed no regular physical activity each week (>80% in 7 countries, 60-80% in 12 countries, and 45% and 29% in the final two countries) [9]. A cross-sectional study from the UK reported that women with RA performed 40% less moderate-to-vigorous physical activity than healthy controls; only half of the RA group met WHO guidelines, compared with 82% of controls [10].
A cross-sectional study of the general population of the UK (UK-Biobank) showed that low levels of physical activity were more prevalent in those with self-reported RA compared with controls [N (%): RA = 1010/4396 (23.0%), controls = 67 394/433 680 (15.5%)] [11]. Furthermore, another study reported that people with RA spent more time sedentary than matched healthy controls (71% vs 62% of the day) [12]. These cross-sectional studies show that many people with RA do not perform sufficient physical activity, but it is unclear whether these people have always performed less physical activity, or whether people reduce their physical activity in the first few years following symptom onset. A study of 617 Swedish people with RA showed that only 8% of participants reported being physically inactive 5 years prior to RA onset [13]. Therefore, it is likely that many people with RA are reducing their physical activity levels in response to the symptoms of RA. This has implications for interventions; it may be easier to intervene early and maintain existing physical activity levels rather than trying to promote physical activity once individuals have stopped. However, at present we do not know how many people with RA stop performing physical activity in the early stages following the onset of symptoms. Furthermore, a greater understanding of the factors driving reductions in physical activity in RA is important for determining, first, the content of interventions aiming to maintain physical activity levels, and second, the group at greatest risk of stopping exercising in order to target such interventions towards them. Studies have demonstrated that several factors are associated with lower physical activity levels in people with RA. The UK-Biobank study reported that the number of comorbidities participants reported was associated with lower physical activity in those with RA, although reverse causality cannot be excluded [11]. A study of 41 people with RA from the USA reported that exercise time was related to exercise self-efficacy and inversely related to disease activity and disability [14]. The association between function, self-efficacy and exercise level has been shown in other US [15], South Korean [16], and Swedish [17] studies, as well as a systematic review [18]. However, it is unclear whether these factors are also associated with reductions in physical activity following symptom onset, as well as absolute levels of physical activity. Therefore, the objectives of this study were (i) to describe how many people with early RA reduce their baseline physical activity level over the first year of treatment with MTX, and (ii) to assess factors associated with reducing physical activity level and stopping physical activity over the first year of treatment.

Methods

Data for this analysis came from the Rheumatoid Arthritis Medication Study (RAMS) [19], a UK-based, multicentre prospective observational study of people with early RA recruited as they started MTX treatment for the first time. For the purpose of the current study, RAMS participants were included if they reported data on their physical activity level (see below) at baseline. Participants with established RA were excluded (established RA defined as having >24 months symptom duration at baseline). RAMS ethical approval was obtained from the National Research Ethics Service Central Manchester Research Ethics Committee (ref: 08/H1008/25) and all participants gave their written informed consent.
Assessments

RAMS participants were assessed at baseline by research nurses working in participating rheumatology clinics (i.e. when they started MTX) and at 6 and 12 months follow-up, reporting demographics [age, gender, smoking status, ethnicity (coded as either White or non-White due to low numbers for each of the non-White ethnicities), height and weight], undergoing 28-joint swollen and tender joint assessments, and completing questionnaires. Each participant's BMI was calculated from their height and weight and categorized using WHO cut-offs [20]. Each participant's socioeconomic status was defined based on their postcode using the Index of Multiple Deprivation 2010 (IMD) [21], coded as quintiles of the total population with the lowest quintile as the most deprived. Participants also reported on comorbidities from a set list, which were categorized into no comorbidities, one comorbidity, or two or more comorbidities. Blood samples were taken at each assessment and stored in freezers at −80°C. RF status (Beckman Coulter BLOSR6x105 and ELISA Genie HUFI03136) was determined from baseline samples and CRP (Beckman Coulter BLOSR6X99 and ELISA Genie HUFI00088, UK; mg/l) was measured from samples at each time-point.

Physical activity

Participants completed three physical activity-related Likert Scale questions at each assessment: (i) 'During the past month, on average, on how many days per week have you taken exercise that has lasted at least 20 minutes?' (scale: none, 1 day, 2-3 days, 4-6 days, everyday); (ii) 'During the past month, on average, on how many days per week have you taken exercise that has made you sweat?' (same scale; used to capture data on high intensity physical activity); (iii) 'In comparison to others of your own age, do you think your physical activity is:' (scale: much less, less, the same, more, much more). The participants were stratified into three exercise groups at each assessment based on their answer to question one: no physical activity, low physical activity (1 day and 2-3 days) and high physical activity (4-6 days and everyday).

Statistical analysis

Baseline demographics, physical activity and disease-related variables were summarized using descriptive statistics, for the whole cohort and stratified based on the three exercise groups. The levels of physical activity at 6 and 12 months are also reported using descriptive statistics, and the number of people who changed physical activity group between each assessment is described. To assess predictors of decreasing physical activity level, participants were categorized into those that decreased their physical activity level between two consecutive time-points (i.e. baseline and 6 months, or 6 months and 12 months) and those who maintained some physical activity or improved their physical activity level. Changes from high to low, high to no physical activity, or low to no physical activity categories were counted as decreases in physical activity. People who maintained some physical activity (low physical activity or high physical activity at two consecutive assessments) or those who improved their physical activity (changed from low to high, no physical activity to low or no physical activity to high physical activity categories) were combined and acted as the reference. People who consistently performed no physical activity were excluded from this analysis. A multivariable random effects logistic regression model was used to identify baseline predictors of decreasing physical activity.
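Before the candidate predictors are listed, a brief illustration of the derived-variable coding described above. The WHO BMI cut-offs and the three-level activity grouping follow the text; the column names, the pandas framing and the toy rows are our own.

```python
import pandas as pd

# Map the Likert answer ("none", "1 day", "2-3 days", "4-6 days", "everyday")
# to the three analysis groups used in the study.
ACTIVITY_GROUP = {
    "none": "none",
    "1 day": "low", "2-3 days": "low",
    "4-6 days": "high", "everyday": "high",
}

def bmi_category(height_m: float, weight_kg: float) -> str:
    """WHO BMI categories from measured height and weight."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    return "obese"

df = pd.DataFrame({
    "days_exercise": ["none", "2-3 days", "everyday"],
    "height_m": [1.70, 1.60, 1.82],
    "weight_kg": [85.0, 58.0, 74.0],
})
df["activity_group"] = df["days_exercise"].map(ACTIVITY_GROUP)
df["bmi_cat"] = [bmi_category(h, w) for h, w in zip(df["height_m"], df["weight_kg"])]
print(df)
```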
The candidate baseline predictors in this model were: age, gender, symptom duration, ethnicity, IMD quintile, smoking status, BMI, DAS28, HAQ, pain visual analogue scale (VAS), fatigue VAS, anxiety and depression (Hospital Anxiety and Depression Scale; HADS-A and HADS-D), RF status, number of comorbidities and illness perception. Participants were classified into two latent classes of illness perceptions using latent profile analysis, one class representing positive illness perceptions and the other negative [27]. To assess predictors of stopping exercise completely, the same analysis was performed, with the outcome being a change from either the high or the low physical activity category to no physical activity. The comparison group was those who maintained some physical activity (including those who changed from high to low physical activity) and those who improved. Multiple imputation was used to impute missing data for covariates included in the regression analyses. Analyses were performed using R version 3.6.0 (packages: foreign, grid, gridExtra [28], htmlwidgets [29], networkD3 [30], reshape2 [31], tidyLPA [32], tidyverse [33], wesanderson [34]) and Stata version 14 (Stata Corp., College Station, TX, USA).

Results

At baseline, 408 (27.8%) participants reported conducting no physical activity on average, 518 (35.3%) reported low physical activity levels (1-3 days per week) and 542 (36.9%) reported high physical activity levels (4-7 days per week). The physical activity was likely of predominantly moderate intensity, as just under half (47.9%) of those in the low physical activity group and 33% of those in the high physical activity group reported performing no exercise that caused sweating (Fig. 1A). The majority (69.3%) of those in the high physical activity group reported performing the same, more or much more physical activity compared with healthy people of a similar age, whereas the majority (53.6%) of people in the low physical activity group reported performing less or much less than healthy people of a similar age (Fig. 1B). A large proportion (77.2%) of those in the no physical activity group perceived themselves as performing less or much less physical activity than healthy people of a similar age. The group who performed no physical activity at baseline had more women, more people reporting non-White ethnicity, higher BMI, lower socioeconomic status, more severe disease activity, more comorbidities and higher scores on the patient-reported outcomes than the other physical activity groups (Table 1).

Changes in physical activity level over the first year of treatment with MTX

The majority of participants who were seen at 6 months stayed in the same physical activity category as at baseline [565/994 (56.8%)]. Four-fifths of the participants [788/994 (79.3%)] either maintained some physical activity (maintained high, maintained low or moved from high to low; N = 534) or improved their physical activity level (moved from none to low, none to high, or low to high; N = 254) over the first 6 months of treatment. The most common change from baseline to 6 months was a change from low physical activity to high physical activity [109/994 (10.9%)]. Of those performing physical activity at baseline, 24.1% (175/725) had reduced their physical activity by 6 months, with 11.3% (82/725) stopping physical activity completely (Fig. 2). Again, the majority of participants seen at 12 months stayed in the same physical activity category as at 6 months [480/748 (64.2%)].
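As an aside, the modelling step described under Statistical analysis was run in R and Stata; the sketch below reproduces its shape in Python on invented data, purely for illustration. The random-effects term and the multiple imputation of the actual analysis are omitted, and all variable names and values are stand-ins.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300

# Toy stand-in for the analysis dataset (one row per interval per participant)
df = pd.DataFrame({
    "reduced": rng.integers(0, 2, n),            # 1 = reduced activity level
    "age": rng.normal(60, 10, n),
    "female": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
    "haq": rng.uniform(0, 3, n),                 # disability score
    "imd_quintile": rng.integers(1, 6, n),       # 1 = most deprived
})

model = smf.logit("reduced ~ age + female + smoker + haq + C(imd_quintile)", data=df)
fit = model.fit(disp=False)
print(np.exp(fit.params).round(2))   # exponentiated coefficients = odds ratios
```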
Over the 6- to 12-month interval, four-fifths of participants [604/748 (80.7%)] either maintained some physical activity (maintained high, maintained low or moved from high to low) or improved their physical activity level; 22.6% of those performing physical activity at 6 months reduced their level by 12 months and 10.2% stopped completely. Current smoking and a higher baseline HAQ score were associated with increased odds of reducing physical activity over the first year of treatment (Table 2). Lower levels of deprivation were numerically associated with lower odds of reducing physical activity over 1 year, but the associations were not statistically significant (Table 2). Baseline predictors of stopping physical activity completely were similar, but the effect sizes were stronger. Current smokers had >5-fold increased odds of stopping physical activity over the first year of MTX therapy compared with never-smokers [OR 5.83 (95% CI 1.98, 17.20)], and each unit increase in HAQ was associated with a >2-fold increase in the odds of stopping physical activity [OR 2.43 (95% CI 1.20, 4.91)]. Socioeconomic deprivation was also strongly associated with stopping physical activity altogether over follow-up (Table 2). Lastly, men were less likely to stop physical activity than women.

Discussion

This large cohort study of people with early RA has shown that the majority of participants reported performing some physical activity when starting MTX, although 28% of participants reported no physical activity. During the first year of treatment with MTX, 80% of participants were able to start or maintain some physical activity, even if some reduced their activity from high to low levels. This physical activity was likely to be of low-to-moderate intensity, given the low average number of days per week on which participants reported performing exercise that caused them to sweat. However, between a fifth and a quarter of participants who performed physical activity reduced their physical activity between each assessment, with around 10% of participants stopping physical activity altogether. Key socioeconomic indicators (smoking, socioeconomic deprivation) predicted stopping physical activity, as did increased disability. A distribution of physical activity levels similar to that of the current study was reported in a large cross-sectional study in the UK [11], with around a third of participants in each category. The high proportion of people with RA who do no physical activity when starting MTX treatment is concerning, given the known benefits of exercise with regard to general health and disease-related outcomes [2-5]. Potentially, interventions aiming to encourage people with RA to start exercising need to be delivered at or close to the start of treatment [35], as it may become progressively harder to start physical activity as disease progresses. Despite 28% of participants reporting no exercise at baseline, a high proportion of participants maintained at least low physical activity levels over time. This has been reported in other studies, such as a study of 2752 Swedish people with prevalent RA, which reported that the majority of participants (80%) had stable levels of physical activity over 2 years of follow-up [17]. Furthermore, 20-25% of participants increased their physical activity levels between assessments, potentially in reaction to improving symptoms due to successful treatment. However, a significant proportion of participants reduced their physical activity or stopped physical activity altogether over follow-up. Perhaps unsurprisingly, those with higher disability were more likely to reduce and stop physical activity over follow-up, an observation demonstrated in previous studies.
For instance, participants in one study with a high baseline HAQ score (from 1.1 to 3, out of 3) had lower odds of being in the high physical activity group than the low physical activity group over 2 years of follow-up [OR 0.58 (95% CI 0.34, 0.96)] [17]. On the other hand, our study found no association between baseline multimorbidity and the odds of reducing physical activity, despite studies reporting a correlation between the number of comorbidities and physical activity level [11]. Potentially, people with multimorbidity at baseline in this study had already reduced their physical activity level in response to the development of other health conditions (as seen in the higher number of comorbidities in the no physical activity group), and therefore did not reduce their physical activity level further in the early phases of their RA. Our study also illustrated the large role socioeconomic deprivation likely plays in physical activity participation, with both smoking and IMD quintile strongly predicting reducing and stopping physical activity over follow-up. People in the general population with lower socioeconomic status are more likely to perform less physical activity [36], people with RA who had lower education were less likely to use physiotherapy services [37], and people with RA who were employed were more likely to meet physical activity recommendations [15]. This contrasts with a 2014 systematic review, which reported that many studies found no correlation between education, job status and physical activity [38]. People with RA may struggle to start or continue performing the recommended level of physical activity because of the significant barriers they face [5], and these barriers are likely to be higher for those with lower socioeconomic status. Some of these are similar to barriers faced by members of the general public, such as lack of time, lack of motivation and the cost of exercise [16,39]. People with RA also face disease-specific barriers, such as lower functional ability, as highlighted in the current study, as well as a lack of knowledge and advice on whether exercise is safe and which exercise to perform [39]. Furthermore, these barriers are likely to increase as people move from early to established RA [40,41]. Therefore, from a public health perspective, interventions aiming to help people with RA maintain or begin physical activity should be delivered early in the disease course and may best be targeted towards those with lower socioeconomic status, given that these people are most likely to stop physical activity. Furthermore, these interventions should be designed to mitigate socioeconomic barriers to participation, such as high cost, lack of childcare, lack of time and lack of awareness [42]. In addition, qualitative studies of people with RA show that physical activity maintenance strategies should focus on providing support and monitoring to help people make positive changes in their lives with appropriate incentives, on developing communities for mutual support, and on increasing the feelings of autonomy and independence of people with RA [43,44]. Our study has a number of strengths. It is a large cohort study of people with early RA who are all at the same point in their disease history, namely starting MTX treatment for the first time. Therefore, the population to which the findings of this study are applicable is readily identifiable.
Note (Table 2). Reducing physical activity includes both reductions from high to low physical activity and stopping physical activity completely. HADS: Hospital Anxiety (HADS-A) and Depression (HADS-D) Scale; IMD: Index of Multiple Deprivation; OR: odds ratio; VAS: visual analogue scale.

Limitations include the fact that physical activity was self-reported, meaning that there may be variation in the way people reported their physical activity level. The strong correlation between the three physical activity variables suggests that people's ranking of their physical activity level was relatively reliable, even if the absolute level of physical activity may be inaccurate. However, some people may have reported pre-RA exercise rather than their current exercise at treatment onset; the correlation between disease activity and symptoms at baseline suggests this may not be the case. Participants were asked to recall their physical activity level over the previous month, a relatively long interval, particularly during the early phases of RA. This interval was chosen to capture participants' recent physical activity levels while avoiding the influence of weeks in which, by chance, participants experienced abnormally high or low physical activity just before assessments. The lack of a non-RA comparison group means it is difficult to assess whether the people with RA in this cohort were performing less physical activity than otherwise healthy people of a similar age, although previous research has shown this to be the case in general [10,11]. Furthermore, there was no measure of self-efficacy, which has been shown to be associated with physical activity level in the past [14,18,45], and therefore self-efficacy could not be included in the analyses. Lastly, the physical activity categories (none, low, high) were quite wide, and therefore smaller changes in physical activity levels are not captured in the analyses. The decision to group the participants into three physical activity categories was made for the sake of power, to avoid having many small groups of physical activity change.

In conclusion, this study demonstrates that the majority of people with RA perform some physical activity as they start MTX therapy, and that many people are able to start or maintain some physical activity over the first year of treatment. However, a significant proportion of people with RA performed no physical activity, and some people stopped performing physical activity completely over follow-up. These groups may need interventions to keep them physically active. Higher disability and greater socioeconomic deprivation were associated with reducing and stopping physical activity. This illustrates the societal barriers impeding people with RA from continuing to perform physical activity after starting treatment; public health strategies aiming to maintain or promote physical activity in RA need to take socioeconomic barriers into consideration when designing and delivering interventions.
Determinants of COVID-19 testing among late middle-aged and older adults: Applying the health belief model

Abstract

Objectives: The purpose of this study was to examine correlates of taking a COVID-19 test among late middle-aged and older adults using nationally representative data.

Methods: Data were obtained from the 2020 Health and Retirement Study midway release COVID-19 module. Our sample was representative of community-residing adults aged 51 and over in the United States (n = 2,870).

Measurements: We regressed taking a COVID-19 test on demographic characteristics, medical comorbidities, and measures related to the health belief model (i.e., perceived severity, perceived susceptibility, cues to action, and perceived barriers) using logistic regression, stratifying the model by 10-year age categories.

Results: Concern about the pandemic was associated with an increase in the likelihood of taking a test among late middle-aged adults. Knowing someone who was diagnosed with COVID-19 was associated with taking a test in most age categories. Financial barriers and knowing someone who died of COVID-19 were not associated with taking a test.

Conclusions: How late middle-aged and older adults perceive the COVID-19 pandemic may significantly influence their likelihood of taking a COVID-19 test.

Introduction

As of March 10th, 2022, there had been 79,248,406 reported cases of COVID-19 and 961,620 deaths in the United States [1]. Late middle-aged and older adults have the highest hospitalization rates among all age groups and are more likely to have risk factors such as hypertension and cardiovascular disease [2]. Higher rates of testing are associated with reduced transmission and lower COVID-19 mortality [3-5]. However, the success of testing largely depends on public acceptability and accessibility [6]. Limited research has examined determinants of late middle-aged and older adults' decisions to obtain a COVID-19 test; previous research on determinants of COVID-19 testing has not focused exclusively on these age groups. Among the general population of adults in the United States, Black and Hispanic individuals are more likely to be tested [7,8], which likely relates to their disproportionate risk of exposure [7]. A high comorbidity burden is also associated with an increased likelihood of getting tested [7,9]. One state-specific study found that individuals who were worried about COVID-19 were more likely to seek a COVID-19 test; conversely, financial strain was a barrier in the decision to obtain a test [10].

Methods

Data came from the COVID-19 module of the Health and Retirement Study (HRS), which is sponsored by the National Institute on Aging (grant number NIA U01AG009740) and conducted by the University of Michigan [22]. The midway release data was collected beginning in June 2020 and released in November 2020. The HRS is a nationally representative longitudinal survey of adults over age 50 and their spouses. Every two years, a random subsample is administered a psychosocial questionnaire, with the other half receiving it in alternating waves. Of this subsample, 50% of participants were selected randomly to participate in the midway release version of the COVID-19 module in 2020. This module asked about respondents' experiences with COVID-19, such as family and friends' diagnoses, COVID-19 tests, health care use, and financial issues. The 2018 RAND HRS dataset was used to provide cleaned and imputed demographic and health characteristics for the participants in the COVID-19 module, as these measures were not yet available in the 2020 midway release COVID-19 data.
The midway release sample of the HRS COVID-19 module had 3266 participants. We excluded age-ineligible individuals and individuals of other races than non-Hispanic White, non-Hispanic Black, and Hispanic due to small sample size, creating an analytic sample of 2870 participants. The outcome measure came from the following survey question: "Have you been tested for the coronavirus?" We created a binary variable indicating whether the respondent took a test for COVID-19. We chose survey items using an abbreviated HBM model, which we display in Fig. 1. We created a binary variable designating whether someone the respondent knew had been diagnosed with COVID-19, which we believed to be related to perceived susceptibility. A possibility of exposure to COVID-19 was the most common reason cited for higher perceived susceptibility in one study [23]. Additionally, we created a binary variable indicating whether a participant was highly concerned about COVID-19 as a measure of perceived severity. Participants were asked to rank on a scale of 1 to 10 how concerned they were by COVID-19, with 1 being the least concerned and 10 being the most concerned. We classified scores at or above the median (8) as highly concerned. We also used a measure of financial hardship, which was related to the perceived barrier component of the HBM. We created an indicator that denoted whether the respondent experienced at least one of 7 hardships such as trouble buying food and missed payments. Previous work that incorporated financial difficulties as a perceived barrier found that it was related to obtaining a colorectal cancer screening test [24,25]. We also created an indicator for whether a participant knew someone who died of COVID-19. Cues to action represent internal or external factors that can trigger decision-making. Previous work used a measure of knowing someone who died of COVID-19 as a variable representing a cue to action to receive a COVID-19 vaccine [26]. We did not include a measure of perceived benefit or self-efficacy due to a lack of comparable HRS questions. Please see Supplemental Table 1 for more information regarding how the HBM measures were coded. To control for greater risk of exposure to COVID-19, we included an indicator of whether the participant or someone in the house participated in essential work. Essential work, broadly defined, is a range of services critical to infrastructure operations (e.g., energy, childcare, critical retail) [27]. Additionally, we included an indicator of whether the participant lived with a child or within ten miles of a child. We also included sociodemographic factors as covariates. We incorporated self-reported gender (female, with male as the reference group), age in years, indicators for race (non-Hispanic Black and Hispanic, with non-Hispanic White as the reference group), marital status (married/partnered, with not married/partnered as the reference group), education level (less than a high school education, with at least a high school education as the reference group), and a count of chronic conditions (0-8). These conditions included hypertension, diabetes, coronary heart disease, stroke, lung disease (COPD, emphysema, or chronic bronchitis), psychiatric conditions, cancer (a malignancy of any kind), and arthritis (rheumatoid arthritis, gout, lupus, or fibromyalgia).
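As a concrete illustration of the coding scheme described above, the sketch below reconstructs the four HBM indicators in pandas. It is only a sketch: the column names are hypothetical placeholders, not actual HRS variable names, and the hardship items are simplified.

```python
import pandas as pd

def code_hbm_measures(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Perceived susceptibility: knows someone diagnosed with COVID-19.
    out["knows_diagnosed"] = (out["n_known_cases"] > 0).astype(int)
    # Perceived severity: concern rated at or above the sample median of 8.
    out["high_concern"] = (out["concern_1_to_10"] >= 8).astype(int)
    # Perceived barrier: experienced at least one of 7 financial hardships.
    hardship_cols = [f"hardship_{i}" for i in range(1, 8)]
    out["any_hardship"] = (out[hardship_cols].sum(axis=1) >= 1).astype(int)
    # Cue to action: knows someone who died of COVID-19.
    out["knows_death"] = (out["n_known_deaths"] > 0).astype(int)
    return out
```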
Statistical analyses We first examined the population weighted descriptive statistics of the sample, using preliminary weights from the HRS COVID-19 module that accounted for attrition and selection. To account for the complex sample design, we used the stratification weight and the cluster weight variables from the HRS. Weighted logistic regression was performed to examine the relationship between components of the HBM model and whether the participant took a COVID-19 test after adjusting for demographic characteristics and health conditions. For covariates with missing values, we used multiple imputation by chained equations. The percentage of observations with missing values ranged from 0% to 34.3%. A total of 25 datasets were used to conduct statistical analysis. All data preparation and analyses were conducted via SAS version 9.4 [28]. We stratified the models by 10-year age brackets because the risk and severity of COVID-19 varies by age and may differentially impact testing decisions. Results Descriptive characteristics of the sample are shown in Table 1. Approximately 21% of the sample took a COVID-19 test. Additionally, 39% of the participants knew someone, including anyone in their household, diagnosed with COVID-19. Approximately 63% of the sample rated their concern about the pandemic at or above 8 on a scale of 10. About 27% of the sample had been through some kind of financial hardship. Approximately 16% of the sample knew someone who died of COVID-19. The majority of the variables had less than 5% of values missing, except for the one that asks if the respondent or someone in the household participated in essential work (35% missing). We addressed this issue with multiple imputation. Please see Supplemental Table 2 for the characteristics of participants who had missing values on the essential work measure. The results from the logistic regression models stratified by age are shown in Table 2. Individuals in the late middle-aged group who were highly concerned about the pandemic were 64% more likely to take a test, holding demographic and health characteristics constant (adjusted odds ratio [OR] = 1.64, 95% CI = 1.05-2.55). Being concerned about the pandemic was not associated with test uptake among older age groups. Perceived susceptibility was a significant predictor of test uptake among most age groups. Specifically, individuals aged 51-64, 75-84, and 85 and over who knew someone diagnosed with COVID-19 were 105% (OR = 2.05, 95% CI = 1.36-3.09), 119% (OR = 2.19, 95% CI = 1.33-3.06) and 187% (OR = 2.87, 95% CI = 1.25-6.61) more likely to take a COVID-19 test, respectively. Experiencing financial difficulties and knowing someone who died of COVID-19 were not associated with taking a test for any age group when holding other measures constant. For the oldest old age group, living with or within close proximity to a child was associated with taking a test (OR = 2.90, 95% CI = 0.98-8.60). Individuals aged 75-84 with a greater number of chronic conditions were more likely to take a COVID-19 test (OR = 1.19, 95% CI = 1.03-1.36). Demographic characteristics were related to COVID-19 testing for the comparatively younger age groups. Individuals in the late middle-aged category who were married were less likely to obtain a test holding other measures constant (OR = 0.61, 95% CI = 0.40-0.92).
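A hedged sketch of the age-stratified weighted logistic regression is given below. The original analysis was run in SAS 9.4 with multiple imputation by chained equations and full survey-design adjustments; the Python/statsmodels version here is a simplified stand-in, and all variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One model per 10-year age bracket; survey weights applied as frequency
# weights (a simplification of the HRS stratification/cluster design).
model = smf.glm(
    "took_test ~ knows_diagnosed + high_concern + any_hardship + knows_death"
    " + essential_work + near_child + female + age + race + married"
    " + less_than_hs + n_conditions",
    data=df_age_stratum,
    family=sm.families.Binomial(),
    freq_weights=np.asarray(df_age_stratum["hrs_weight"]),
).fit()

print(np.exp(model.params))  # exponentiated coefficients = adjusted odds ratios
```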
Among individuals aged 65-74, Black individuals were 106% more likely to test (OR = 2.06, 95% CI = 1.16-3.66). Participating, or living with someone who participated, in essential work was not associated with taking a test. Discussion Very little is known about what influences individuals' decisions to take a COVID-19 test, especially among older adults, who are at greatest risk of complications from the virus. Understanding what influences decisions to obtain a test may help inform public health strategies to increase testing rates. Using a nationally representative sample of older adults, we examined correlates of older adults' decisions to obtain a COVID-19 test. We used the HBM to guide our analysis. Our study found that concern about the pandemic, our measure of perceived severity, was associated with an increased likelihood of taking a test among late middle-aged adults. This conforms with others' findings that middle-aged people show greater concern about COVID-19 than other age groups [29]. Perceived severity has also been associated with health behaviors during COVID-19 such as social distancing and wearing a face mask in some previous studies [17], but not in others [18]. Effective public health messaging may help improve testing rates. Information about COVID-19 has proliferated on social media since the beginning of the pandemic, yet not all messages are based on accurate information [30]. Compared to people who rely on traditional news sources for information about COVID-19, people who use social media for information are less likely to perceive that COVID-19 is severe [31]. Because perceptions of severity are associated with taking a COVID-19 test among individuals aged 51-64, credible and accurate information about COVID-19's severity for this age group should be created that can be easily disseminated through social media. Knowing someone who was diagnosed with COVID-19, our measure of perceived susceptibility, was also associated with an increased likelihood of obtaining a test for most age groups, except those aged 65-74. A study in the early stages of the pandemic found that older adults' perceived susceptibility to COVID-19 was low and they preferred to stay at home [32]. Potentially, adults in this age bracket are retired from work and do not have occupational exposures. Additionally, they may not yet need a caregiver to assist with activities of daily living. Interestingly, the association between cues to action (knowing someone who died of COVID-19) and obtaining a COVID-19 test was not statistically significant. Deaths due to COVID-19 in the early pandemic may not have been numerous enough to make an impact on testing decisions. Financial difficulty was also not associated with taking a COVID-19 test. A potential reason for this finding is that COVID-19 testing is covered by Medicare Part B; thus, the cost may be less of a concern for the older adult population [33]. Other types of barriers, such as a lack of correct information and timely healthcare use, may play a more important role. This study is not without limitations. First, we did not have a direct measure of exposure to COVID-19 such as whether the participant lived in a state with a high COVID-19 case rate. We were also not able to control for the specific date of the survey, which may be related to how participants perceived the severity of the pandemic given changes in death rates and public health guidance. We used indirect measures of the HBM theoretical model.
For example, the HRS did not directly ask participants if they perceived that they were susceptible to COVID-19. Instead, we used a measure of whether the participant knew someone who had COVID-19. The reason that individuals take a COVID-19 test after knowing someone with the virus could be because a healthcare provider required them to take a test. Moreover, these survey questions do not capture every construct of the HBM. There were also likely other factors that acted as perceived barriers, such as a lack of time to obtain a test. However, we were limited by the measures available in the HRS survey. Additionally, when the HRS COVID-19 survey was conducted, the pandemic had not reached its peak mortality rates. Thus, perceived levels of concern and rates of testing were likely lower than they would be at other times during the pandemic. Testing supplies were also very limited during this time period in the United States. Many people may have wanted a test but were unable to obtain one because they did not meet a local eligibility criterion such as displaying symptoms. Future research can examine whether perceived levels of concern change over time due to COVID-19 peaks and "pandemic fatigue", as well as how fluctuating levels of concern influence rates of testing. Finally, we used the midway release version of the HRS COVID-19 data, which may not be completely free from errors. How older people perceive their pandemic-related experiences may significantly influence their likelihood of taking a COVID-19 test, and this is salient to policymakers in terms of disease control and prevention. First, for the ongoing pandemic, public surveillance is critical to disease control and treatment. Since testing is voluntary, it is important to understand the determinants of the public's taking a test, so as to promote willingness to participate in testing, especially among the older population, who face a higher risk from the disease. Second, for future pandemics or epidemics, timely and mass testing could assist health systems and governments in the early detection of diseases in order to respond quickly in constructing prevention and treatment strategies. Understanding what is associated with taking a test among older adults can assist with early detection. To that point, decision-makers should consider enhancing informational resources for older adults to encourage them to participate in necessary testing. Funding This research was funded in part by the AHRQ T32 training grant (T32HS000011). Declaration of interests The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2022-03-26T13:04:54.146Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "84a6b1a6544175167caec5bc0592347104b50b60", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ahr.2022.100066", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0218c57a798033eb8fe9f28c62031a02e37e17d6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
202539622
pes2o/s2orc
v3-fos-license
Empirical Analysis of Knowledge Distillation Technique for Optimization of Quantized Deep Neural Networks Knowledge distillation (KD) is a very popular method for model size reduction. Recently, the technique is exploited for quantized deep neural networks (QDNNs) training as a way to restore the performance sacrificed by word-length reduction. KD, however, employs additional hyper-parameters, such as temperature, coefficient, and the size of teacher network for QDNN training. We analyze the effect of these hyper-parameters for QDNN optimization with KD. We find that these hyper-parameters are inter-related, and also introduce a simple and effective technique that reduces \textit{coefficient} during training. With KD employing the proposed hyper-parameters, we achieve the test accuracy of 92.7% and 67.0% on Resnet20 with 2-bit ternary weights for CIFAR-10 and CIFAR-100 data sets, respectively. Introduction Deep neural networks (DNNs) are extremely important in various applications including computer vision (He et al. 2016a), speech recognition (Graves, Mohamed, and Hinton 2013), and natural language processing (Vaswani et al. 2017). However, since DNNs require a large number of parameters, it is necessary to reduce the model size, for example by quantization, to operate the network in an embedded system rather than a server environment. Quantized deep neural networks (QDNNs) do not degrade performance when quantized to a high bit-width, such as 8 bits, but the memory compression rate is then low. Quantizing to 1 or 2 bits can increase the memory compression rate, but such severe quantization causes huge performance degradation. To solve this problem, many QDNN papers suggest various types of quantizers or complex training algorithms (Hubara et al. 2017; Xu et al. 2018; Zhou et al. 2016; Zhou et al. 2017). Knowledge distillation (KD) is widely used as a teacher-student training method that trains small networks using larger networks with better performance (Hinton, Vinyals, and Dean 2015). Leveraging the knowledge contained in previously trained networks has attracted a lot of attention in a variety of applications for model compression (Tang, Wang, and Zhang 2016; Song et al. 2018; Asami et al. 2017; Wang et al. 2019). Recently, a combination of quantization and knowledge distillation has emerged as a popular solution to restore the performance of QDNNs, which drops due to quantization errors (Mishra and Marr 2018; Polino, Pascanu, and Alistarh 2018). These papers mainly studied macroscopic methods for training QDNNs with KD. In Polino, Pascanu, and Alistarh (2018), a quantized student network is trained from scratch with the soft loss produced by a good teacher network. Mishra and Marr (2018) focused on the initialization of the teacher and student models when training QDNNs with KD. More specifically, they analyzed whether the teacher network and student network should be trained simultaneously or independently. Their results showed that the best performance was achieved when the teacher network was trained independently and the student network was fine-tuned with the soft labels produced by the teacher network. Most of the previous studies have thus been conducted from a macro perspective on combining QDNNs and KD. However, research has not been conducted from a micro perspective on how the important hyper-parameters of knowledge distillation, such as the temperature, the coefficient, and the size of the teacher network, affect the performance of QDNNs. In this paper, we analyze how KD's hyper-parameters can be chosen to achieve good performance while using the established macroscopic training mechanism (i.e., pretraining the teacher and student networks separately and fine-tuning only the student network using KD). Based on the analysis results, we propose the coefficient reducing (CR) technique. CR is a simple training technique for KD that improves the performance of QDNNs dramatically. The contributions of this paper are as follows:

• We analyze how KD's hyper-parameters influence each other and how a QDNN's final performance is affected.

• We show how well-chosen hyper-parameters can improve QDNN performance dramatically.

• Based on the analysis results, we propose a simple training technique, coefficient reducing, for KD and obtain higher performance than the previous studies.

This paper is organized as follows. We first introduce related works in Section 2. Section 3 describes how a QDNN can be trained with KD and explains why the hyper-parameters of KD are important in QDNN training. Section 4 shows the experimental results and we conclude the paper in Section 5.
In this paper, we analyze how KD's hyper-parameters can be operated to achieve good performance while using macroscopic training mechanism (e.g. pretraining teacher and student networks respectively and fine-tuning student network only using KD). Based on the analysis results, we propose the coefficient reducing (CR) technique. CR is a simple training technique for KD training to improve the performance of QDNNs dramatically. The contributions of this paper are as follows: • We analyze how KD's hyper-parameters influence each other and how QDNN's final performance is affected. • We show how well-used hyper-parameters can improve QDNN performance dramatically. • Based on the analysis results, we proposed a simple training technique, coefficient reducing, for KD and obtained higher performance than the previous studies. This paper is organized as follows. We firstly introduce related works in Section 2. Section 3 describes how the QDNN can be trained with KD and simultaneously explain why the arXiv:1909.01688v1 [cs.LG] 4 Sep 2019 hyper-parameters of KD are important in QDNN training. Section 4 show the experimental results and we conclude the paper in Section 5. Related Works In this section, we discuss related literature in knowledge distillation and quantized neural network. Quantization of Deep Neural Networks QDNN has been researched for a long time. Hwang andSung (2014) andCourbariaux, Bengio, andDavid (2015) are suggested training methods in quantization domain to restore the performance that reduced by the quantization error. The gradient is usually smaller than the quantization scale factor, they maintain full-precision weights to accumulate the gradients while the quantized weights are used for computing forward and backward propagation Courbariaux, Bengio, and David 2015;Zhou et al. 2016;Zhou et al. 2017;Hubara et al. 2017;Xu et al. 2018). Knowledge Distillation Knowledge distillation is popular model compression method that transfers the knowledge from accurate large teacher models to small student model (Hinton, Vinyals, and Dean 2015;Bucilu, Caruana, and Niculescu-Mizil 2006). The promising performance improvement of the knowledge distillation technique, it utilizes in variety of applications (Chebotar and Waters 2016; Dai and Van Gool 2018;Chen et al. 2017;Oord et al. 2017) and learning algorithms (Romero et al. 2014;Kulkarni, Patil, and Karande 2017;Park et al. 2019;Yim et al. 2017). Knowledge Distillation with QDNN Recently, several papers have begun to employ knowledge distillation to restore the performance loss of quantized deep neural networks Zhuang et al.;Polino, Pascanu, and Alistarh;Mishra and Marr (2018;. Zhuang et al. (2018) trained jointly the same size of the full-precision teacher network and quantized student network simultaneously. They defined guidance loss that minimizes the l2 norm between the same located hidden layer in teacher and student model. (Mishra and Marr 2018) proposed three methods to find out how to train QDNN effectively with KD. The first scheme is training the teacher and student network jointly. Second scheme trains only the quantized student network with pretrained teacher networks using KD. The last scheme is both networks are pretrained independently, and only fine-tunes the student network using KD. Polino, Pascanu, and Alistarh (2018) also suggests two methods to combine QDNN and KD which include quantized distillation and differentiable quantization. 
The previous literature mainly concentrates on ways to combine QDNN training and knowledge distillation. Unlike the previous studies, however, we focus on improving QDNN performance using KD from a microscopic point of view. We concentrate on analyzing the hyper-parameters of KD that affect the accuracy of QDNNs significantly. Quantized Deep Neural Network Training Using Knowledge Distillation In this section, we first briefly describe the conventional neural network quantization method and explain how QDNN training can be combined with KD. We also present the hyper-parameters of KD and state why they are very important in QDNN training. Quantization of Deep Neural Network & Knowledge Distillation The deep neural network parameter vector, w, can be expressed with 2^b levels when quantized to b bits. This can be written, for the cases b = 1 (Equation (1)) and b > 1 (Equation (2)), through the symmetric uniform quantization function Q_b(·) as follows:

Q_b(w) = \Delta \cdot \operatorname{sign}(w), \quad b = 1, \qquad (1)

Q_b(w) = \Delta \cdot \operatorname{sign}(w) \cdot \min\!\left( \left\lfloor \frac{|w|}{\Delta} + 0.5 \right\rfloor,\ M \right), \quad b > 1, \qquad (2)

where M is the quantization level 2^{b-1} - 1 and Δ represents the quantization step size. Δ can be computed by L2-error minimization between the floating-point and fixed-point weights or from the standard deviation of the weight vector (Hwang and Sung 2014; Rastegari et al. 2016; Zhou et al. 2016). Severe quantization such as 1- or 2-bit causes huge performance degradation. Retraining in the quantization domain is very important to recover from this degradation (Sung, Shin, and Hwang 2015). When retraining the student network in the quantization domain, the forward, backward, and gradient computations should be performed using the quantized weights, but the computed gradients must be added to the full-precision weights (Hubara et al. 2017; Xu et al. 2018; Zhou et al. 2016; Zhou et al. 2017). In many cases, deep neural networks generate probabilities with the softmax layer. The logit, z, is fed into the softmax layer, which generates the probability of each class, p, using

p_i = \frac{\exp(z_i/\tau)}{\sum_j \exp(z_j/\tau)}.

Here τ is a hyper-parameter of KD known as the 'temperature'. A high value of τ generates a softened probability distribution. KD employs the probability generated by the teacher network as a soft label to train the student network so that the following loss function is minimized during training:

L(w_S) = (1 - \lambda)\, H(y, p_S) + \lambda\, H(p_T, p_S), \qquad (3)

where H(·) denotes a loss function, y is the ground-truth hard label, w_S is the weight vector of the student network, p_T and p_S are the probabilities of the teacher and student networks, and λ is a 'coefficient' for adjusting the ratio of the soft and hard targets.

Algorithm 1: QDNN training with KD
Initialization: w_T: pretrained teacher model, w_S: pretrained student model, λ: coefficient, τ: temperature
Output: w_S^q: quantized student model
while not converged do
    w_S^q = Quant(w_S)
    Run forward passes of the teacher (w_T) and student (w_S^q) models
    Compute the distillation loss L(w_S^q)
    Run backward and compute gradients
    Add the gradients to the full-precision weights w_S
end while

A recent paper, Mirzadeh et al. (2019), reports that the performance gradually decreases when the size difference between the teacher and student networks becomes too large. This phenomenon arises because the model capacity of the student network is too small to assimilate the incoming information (i.e., the soft targets from the teacher network). Since a QDNN limits the representation levels of the weight parameters, the capacity of a quantized network is much smaller, even with the same number of parameters as a full-precision network. Therefore, a QDNN is even more sensitive to the size of the teacher network.
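To make the interaction of Δ, τ, and λ concrete, here is a minimal NumPy sketch of Q_b, the temperature-scaled softmax, and the loss of Equation (3). It is a paraphrase for illustration, not the authors' code; in particular, applying τ to both the teacher and student probabilities in the soft term is our assumption.

```python
import numpy as np

def quantize(w, delta, b):
    """Symmetric uniform quantizer Q_b of Eqs. (1)-(2)."""
    if b == 1:
        return delta * np.sign(w)
    M = 2 ** (b - 1) - 1  # quantization level, e.g. M = 1 for 2-bit ternary
    return delta * np.sign(w) * np.minimum(np.floor(np.abs(w) / delta + 0.5), M)

def softmax_t(z, tau):
    """Temperature-scaled softmax: p_i = exp(z_i/tau) / sum_j exp(z_j/tau)."""
    e = np.exp((z - z.max()) / tau)  # max-shift for numerical stability
    return e / e.sum()

def kd_loss(z_student, z_teacher, y_onehot, lam, tau):
    """Eq. (3): (1 - lam) * H(y, p_S) + lam * H(p_T, p_S)."""
    p_s = softmax_t(z_student, tau)
    p_t = softmax_t(z_teacher, tau)
    hard = -np.sum(y_onehot * np.log(softmax_t(z_student, 1.0)))
    soft = -np.sum(p_t * np.log(p_s))
    return (1.0 - lam) * hard + lam * soft
```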
We consider the three hyper-parameters described above (temperature, coefficient, and size of the teacher network) as the hyper-parameters that have a significant impact on the performance of QDNN training with KD. Algorithm 1 shows how these three hyper-parameters play a role in QDNN training using KD. Discussion on Hyper-parameters of KD Previous papers that trained QDNNs using KD mainly focused on finding a macroscopic method for applying KD to QDNNs (Mishra and Marr 2018; Polino, Pascanu, and Alistarh 2018). At present, the best known method for QDNN training with KD is to first train a full-precision teacher and a full-precision student network independently, and then apply KD when fine-tuning the student network in the quantization domain. We agree that the above method is the best way to train a QDNN with KD, so we also proceed with QDNN training in this way for all experiments. However, it is still not fully studied how the hyper-parameters of KD should be applied to QDNN training. As we mentioned in Section 3.1, the hyper-parameters temperature (τ), coefficient (λ), and size of the teacher network can significantly impact QDNN performance. Existing papers usually fixed these hyper-parameters when training QDNNs with KD. For example, Mishra and Marr (2018) always fix τ to 1, and Polino, Pascanu, and Alistarh (2018) hold it at 1 or 5 depending on the dataset. However, these three parameters are closely inter-related. For example, Mirzadeh et al. (2019) point out that when the teacher model is very large compared to the student model, the soft labels produced by the teacher network become sharper, making it difficult to transfer the knowledge of the teacher network to the student. However, even in this case, fine control of the temperature may be able to transfer the knowledge. Therefore, when the value of one hyper-parameter is changed, the others also need to be fine-tuned carefully. We can also employ KD to obtain a better pretrained full-precision student network before retraining it in the quantization domain with KD. In general, if the pretrained full-precision model has high accuracy, the quantized model obtained from it also tends to show high accuracy. Considering the pretraining method of the student model and the hyper-parameters of KD during retraining, we can build up the following experimental setups.

• How to pretrain the student model: employ KD or the conventional training method.

• How to retrain the student model in the quantization domain: employ KD or the conventional training method.

• Investigation of the hyper-parameters: temperature, coefficient, and size of the teacher network.

We empirically analyze how the above list affects QDNN training with KD in Section 4. In addition, as a result of the analysis, we introduce the coefficient reducing (CR) technique, which helps to improve the performance of QDNNs dramatically. CR is a training method that gradually increases the reflection rate of the hard target while training a QDNN with KD. A detailed explanation appears in Section 4. Experimental Setup Dataset: To analyze QDNN training with KD we employ the CIFAR-10 and CIFAR-100 datasets. CIFAR-10 and CIFAR-100 consist of 10 and 100 classes, respectively. Both datasets contain 50K training images and 10K testing images. Thus CIFAR-10 has 5000 images per class and CIFAR-100 includes 500 images per class. The size of each image is 32x32 with RGB channels.
Model Configuration & Training Hyper-parameter: To analyze the impact of the hyper-parameters of KD on QDNN training, we train WideResnet20xN (Zagoruyko and Komodakis 2016) as the teacher networks. We set N to 1, 1.2, 1.5, 1.7, 2, 3, 4, 5, and 10. It should be noted that when N is 1, the network structure is the same as ResNet20 (He et al. 2016a). All the train and test accuracies of the teacher networks on the CIFAR-10 and CIFAR-100 datasets are reported in Table 2 and Table 3, respectively. We employ ResNet20 as the student network for both the CIFAR-10 and CIFAR-100 datasets. If the network size is large enough relative to the amount of data, the accuracy drop due to the quantization error is reduced, so any quantization method seems to work well. Therefore, to evaluate the performance of a quantization algorithm, the best choice might be employing a small network which is located in the under-parameterized region (Sung, Shin, and Hwang 2015; Boo, Shin, and Sung 2019). The full-precision ResNet20 model is located in the over-parameterized region on the CIFAR-10 dataset, and the model moves down to the under-parameterized region when severe quantization is employed. Likewise, on the CIFAR-100 dataset, both the full-precision and the quantized models are located in the under-parameterized region, so this is a good network configuration to evaluate the effect of KD on QDNN training. It should be noted that one way to determine whether a network is under- or over-parameterized is to check the train accuracy. If the train accuracy does not reach almost 100%, the model may be located in the under-parameterized region. We report the train and test accuracies of ResNet20 on CIFAR-10 and CIFAR-100 in Table 1. Results We first show the importance of the hyper-parameters of KD when training QDNNs. Model Size & Temperature We report the results of 2-bit ResNet20 trained using KD on the CIFAR-10 dataset in Figure 1 (a). To demonstrate the effect of the temperature on QDNN training, we train the 2-bit ResNet20 student model while varying the size of the teacher network from WideResNet20x1 to WideResNet20x5. Each experiment is repeated for three τ values (small, medium, and large). It should be noted that WideResNet20xN means that the number of channel maps is increased by N times. When τ is small (blue line in the figure), the performance increases and decreases rapidly along the x-axis. This steep slope becomes softer as the value of τ increases to medium (orange line) or large. The reason for this is related to the accuracy (red line) of the teacher model. The larger the teacher model, the higher its confidence in the correct answer. In other words, for the same input image, the small teacher model may assign 80% confidence to the answer, while the large teacher model may assign 99.9% confidence. That is, the shape of the soft label may become similar to the ground truth hard label. In this case, even though KD is employed, the results are not very different from training with hard labels. Therefore, if τ is 1, the performance decreases to 91.9% when the teacher network becomes larger than WideResNet20x2. This is similar to the 91.48% performance of a 2-bit ResNet20 trained on hard labels. When τ is increased to a larger value such as 5 or 10, the sharp soft labels produced by the large teacher network become softened, which helps to improve the QDNN performance very much.
This phenomenon can also occur when training a full-precision model with KD, but it is more important when training a QDNN, considering that the model capacity of the student network is lowered due to quantization. Therefore, when training a QDNN with KD, the relationship between the size of the teacher model and the temperature should be considered carefully. Figure 2: Results of 2-bit ResNet20 models trained with various sizes of teacher networks and temperatures (τ) on CIFAR-100, for (a) HT-KD, (b) KD-KD, and (c) KD-KD+CR. "HT-KD" means the student is pretrained using the hard target and retrained using KD, "KD-KD" means both stages employ KD, and "CR" means the coefficient reducing technique. In (c), the black horizontal line represents the test accuracy when quantizing the network without KD. Figure 1 (b) shows the results of training 2-bit ResNet20 with KD on the CIFAR-100 dataset. Since CIFAR-100 has 100 classes, the appropriate value of τ is usually lower than for CIFAR-10. More specifically, if τ is larger than 5 (purple line), the test accuracies are lower than 65.49% (green dotted line), which is the accuracy of the 2-bit ResNet20 trained using the hard label. This indicates that the number of classes greatly affects the distribution of the soft label; when the soft label produced by the teacher network becomes flat due to a high τ, the teacher's knowledge does not transfer well to the student network. When τ is small enough (e.g., less than 5), the tendency is similar to the experiment on CIFAR-10. When τ is 1 (blue line), the best performance is observed with ResNet20 as the teacher. As τ increases to 2 (yellow line) and 4 (grey line), the size of the best performing teacher model also increases gradually, to WideResnet20x1.5 and WideResnet20x1.7, respectively. As the size of the network grows and the floating-point performance increases, the probability from the softmax layer becomes similar to the hard label. This implies that a proper value of the temperature can improve the performance, but it should not be set too high, since the knowledge from the teacher network can disappear. Network Pretraining Methods We applied KD to QDNNs as suggested by Mishra and Marr (2018): first train the teacher and student networks in full precision independently, and then fine-tune the student network at low bit-width using the soft labels generated by the teacher network. Thus, there are two options for pretraining the full-precision student network. The first is training with the hard target, which is the conventional way to train deep neural networks, and the second is employing KD for the full-precision student network training as well. The results are reported in Figure 2 (a) and (b). Figure 2 (a) represents the case in which the full-precision student network is pretrained using the hard target and the quantized student network is retrained using KD. Figure 2 (b) shows the case in which both the full-precision and quantized student networks are trained using KD. We run both experiments varying the temperature (τ) from 1 to 50 and the size of the teacher network (N) from ResNet20 to WideResNet20x10. The results clearly show that employing KD for both the full-precision and quantized student networks can increase the performance very much. Coefficient Reducing Throughout the paper, we have discussed the effects of the temperature and the size of the teacher network on QDNN training with KD. Since the two hyper-parameters are inter-correlated with each other, careful fine-tuning is required, which can be challenging when training a QDNN with KD. Therefore, a way to alleviate this exhaustive search is required.
In general, when a well-trained teacher network gives hints (soft labels) to the student network, the performance of the student network can increase. Strictly speaking, however, soft labels are the answers expected by the teacher network, and they might not be the absolute answer. Therefore, the following method can be considered: at the beginning of the training, where the gradient changes a lot, use the soft label and the hard label half and half, and gradually reduce the amount of hints provided by the teacher network as the training progresses. We name this simple method the coefficient reducing (CR) technique and use it for QDNN training with KD. To evaluate the effectiveness of CR in QDNN training with KD, we ran the same experiment as shown in Figure 2 (b) with CR and report the results in Figure 2 (c). The results clearly show that CR greatly helps to improve the performance of QDNN training with KD. In almost all of the hyper-parameter setups, the results with CR improved significantly. The important point is that in the cases where KD-KD does not work well, owing to a too large teacher network size or temperature, CR still improves the performance, or at least ties with the results trained without KD (black horizontal line in Figure 2 (c)). This is because the coefficient (λ) is a hyper-parameter that can control the degree to which the size of the teacher network and the temperature are applied, as shown in Equation (3). Therefore, CR is a promising technique that can prevent performance degradation (keeping at least results similar to those trained without KD) even if inappropriate hyper-parameters are selected.
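A sketch of one possible CR schedule is given below. The paper does not spell out the exact decay rule, so the linear decay from λ = 0.5 to 0 is an assumption for illustration.

```python
def cr_coefficient(step, total_steps, lam_init=0.5):
    """Coefficient reducing: start with soft and hard targets weighted
    half and half (lam = 0.5) and decay the soft-target weight to 0,
    so the loss in Eq. (3) ends as the plain hard-target loss.
    The linear schedule is an assumption, not from the paper."""
    return lam_init * max(0.0, 1.0 - step / total_steps)

# e.g., lam = cr_coefficient(epoch, num_epochs) inside the training loop
```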
Ensemble of Multiple Teacher Networks Papers related to knowledge distillation often train a student network with soft labels averaged over multiple cumbersome models. We can also consider an ensemble of multiple teachers in QDNN training with KD. The performance of the teacher networks used in the ensemble experiment is reported in Table 2. As the model is gradually widened, the performance continually increases and saturates at 95.24% for WideResNet20x5. The results of quantizing the student network using an ensemble of multiple teacher networks are shown in Table 5. In the case of training with one teacher model, the highest performance is 92.67% with a temperature of 10, and the ensemble of five teachers shows 92.69% with a temperature of 5. Even with an ensemble of multiple teacher models, the best performance is similar to the result trained with a single teacher model. This means that if τ is adjusted properly, the number of teacher models may not be that important. We also employed multiple teacher models of different sizes, but the student performance was not very different. Therefore, considering computational efficiency, it is better to use only one teacher network to train the quantized student network, with careful temperature selection. Concluding Remarks In this study, we investigated the impact of the hyper-parameters in quantized deep neural network training with knowledge distillation. We found that the three hyper-parameters (temperature, coefficient, and size of the teacher network) are closely inter-related. When the size of the teacher network grows, increasing the temperature helps to boost performance. However, if the temperature is increased too much, the knowledge from the teacher network can disappear. We also introduced a simple training technique, coefficient reducing (CR), for quantized deep neural network training with KD. At the beginning of the training, CR keeps the rates of the hard target and the soft target equal, but gradually reduces the rate of the soft target so that the KD loss becomes the conventional loss function employing only the hard target by the end of the training. With careful hyper-parameter selection and the coefficient reducing technique, we achieve performance that far exceeds previous studies for 2-bit quantized deep neural networks on CIFAR-10 and CIFAR-100.
2019-09-04T10:47:03.000Z
2019-09-04T00:00:00.000
{ "year": 2019, "sha1": "69e232a29aaf30bce388fdd30b4dadfcd627a9cd", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "69e232a29aaf30bce388fdd30b4dadfcd627a9cd", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
4082715
pes2o/s2orc
v3-fos-license
The use of the forceps biopsy as an auxiliary technique for the visualization of the major duodenal papilla using the forward-viewing upper endoscopy Background – Conventional esophagogastroduodenoscopy is the best method for evaluation of the upper gastrointestinal tract, but it has limitations for the identification of the major duodenal papilla, even after the use of the straightening maneuver. The side-viewing duodenoscope is recommended for optimal examination of the major duodenal papilla in patients at high risk for lesions in this region. Objective – To evaluate the use of the biopsy forceps during conventional esophagogastroduodenoscopy as an additional tool to the straightening maneuver in the evaluation of the major duodenal papilla. Methods – A total of 671 patients were studied between 2013 and 2015, with active major duodenal papilla search in three endoscope steps: not straightened, straightened, and use of the biopsy forceps after straightening. In all of them it was recorded whether the major duodenal papilla was fully visualized (position A), partially visualized (position B) or not visualized (position C). If the major duodenal papilla was not fully visualized, patients continued to the next step. Results – A total of 341 patients were female (50.8%), with a mean age of 49 years. In 324 of the 671 patients (48.3%) the major duodenal papilla was identified in position A, in 112 (16.7%) in position B and in 235 (35%) in position C. In the 347 patients who underwent the straightening maneuver, position A was found in 186 (53.6%), position B in 51 (14.7%) and position C in 110 (31.7%). In the 161 remaining patients, after biopsy forceps use, position A was seen in 94 (58.4%), position B in 14 (8.7%) and position C in 53 (32.9%). The overall rate of complete visualization of the major duodenal papilla was 90%. Conclusion – The use of the biopsy forceps significantly increased the total major duodenal papilla visualization rate by 14%, reaching 604/671 (90%) of the patients (P<0.01), and it can be easily incorporated into the routine endoscopic examination of the upper gastrointestinal tract. HEADINGS – Ampulla of Vater, physiopathology. Adenocarcinoma, diagnosis. Digestive system endoscopy, utilization. Declared conflict of interest of all authors: none. Disclosure of funding: no funding received. 1 Serviço de Endoscopia do Hospital Universitário de Juiz de Fora, MG, Brasil; 2 Universidade Federal de Juiz de Fora, MG, Brasil. Correspondence: Nathalia Saber de Andrade. Rua Bom Jesus do Itabapoana, 1001, bloco 2, ap. 206. Bairro Recreio – CEP: 28895-389 – Rio das Ostras, RJ, Brasil. E-mail: nathaliasaber@hotmail.com INTRODUCTION Conventional esophagogastroduodenoscopy (EGD) is currently the best method for evaluation of the upper gastrointestinal tract, which is usually visualized from the esophagus down to the second duodenal part. However, it is not always possible to completely identify the major duodenal papilla (MDP) with forward-viewing gastroscopy(1). The full examination of the MDP is essential for the early detection of ampullary and periampullary lesions during screening and follow-up of patients at high risk for adenocarcinoma(2). While the American Society for Gastrointestinal Endoscopy recommends the use of side-viewing duodenoscopes for optimal examination of the MDP, this type of endoscope is not available in most private ambulatory endoscopy units(3). Previous studies demonstrate limitations of forward-viewing endoscopes for complete examination of the MDP; such studies achieved full visualization of the MDP in 24% to 80.8% of cases. Both the straightening maneuver, commonly carried out during endoscopic retrograde cholangiopancreatography (ERCP), and the use of transparent caps fitted to the tip of forward-viewing gastroscopes have improved the full visualization of the MDP. However, the straightening maneuver has only enabled the full identification of the MDP in 54.7% of the patients studied, and the use of cap-fitted gastroscopes, while more efficient, is restricted to EGDs with the specific purpose of evaluating the MDP(1,4,5). The ideal solution would be an easy and inexpensive method that could be used in all routine EGDs, with high rates of MDP complete visualization. Based on such prerogatives, this study has evaluated the use of the biopsy forceps during conventional EGD as an additional tool to the straightening maneuver in the identification of the MDP.
The objective of the study is to evaluate the use of the biopsy forceps during conventional EGD as an additional tool to the straightening maneuver, for complete examination of the MDP. METHODS A transversal study with patients from the Gastroenterology outpatient clinic of the University Hospital of the Universidade Federal de Juiz de Fora (UFJF) or from the Unified Health System (SUS, for the acronym in Portuguese), referred for esophagogastroduodenoscopy for the investigation of signs or symptoms related to upper GI disorders. Only ASA 1 and 2 patients were invited to participate in the study. Patients underwent EGD at the Digestive Endoscopy Unit of UFJF's University Hospital, 1 day per week, from September 2013 to November 2015. All procedures were carried out by an advanced fellow, trained to perform the straightening maneuver, with thorough knowledge of MDP morphology, together with an experienced endoscopist, fully trained to perform ERCP, with the use of 44000 series processors and 530 and 590 Fujinon gastroscopes. Patients were sedated with midazolam (1-2 mg) and fentanyl (25-50 μg)(6). When necessary, propofol (10-50 mg) was additionally administered in individual cases, in order to facilitate patients' collaboration with the study. During the procedures, all patients received supplemental O2 (3 L/min) and had their SaO2 and blood pressure monitored. The total time of the procedure was not recorded and observations exclusively considered data referring to the visualization of the MDP. The study excluded patients with obstructions in the antropyloric region; bulbar stricture or any other lesion that could limit access to the second duodenal part; patients with a previous case of upper gastrointestinal tract surgical intervention; patients in full-dose anticoagulant therapy; and those who refused to participate in the protocol after reading the Free and Informed Consent Form. Examinations were carried out under the following procedures:

1. Full examination of esophagus, stomach and duodenum with active search for the MDP in the second duodenal part with a conventional non-straightened endoscope.

2. Identification of the MDP with the non-straightened device: fully visualized (position A), partially visualized (position B) or not visualized (position C). Patients whose papilla was fully visualized were not submitted to auxiliary maneuvers and were included in Group 1. When the MDP was partially identified or not visualized, patients continued to the next stage.

3. Straightening maneuver with a new active search for the MDP.

4. New identification of the MDP after straightening in the second duodenal part: fully visualized (position A), partially visualized (position B) or not visualized (position C). Patients whose papilla was fully visualized were included in Group 2. Patients whose MDP was partially visualized or not identified continued to the third stage.

5. Use of the biopsy forceps to push back or laterally displace the duodenal folds, for better MDP visualization.

6. Last identification of the MDP with the biopsy forceps: fully visualized (position A), partially visualized (position B) or not visualized (position C). Patients with fully visualized MDP were included in Group 3.
The FIGURES 1 and 2 show the different positions of the MDP. Data collected were recorded in specific forms for each patient and evaluated with the use of the GraphPad Prism 5.0 software. A hypothesis test was used to verify whether the MDP full visualization rates with the use of the biopsy forceps were higher than the results obtained without the biopsy forceps. P<0.05 values were considered statistically significant. The study was approved by the Ethics Committee of UFJF's University Hospital and registered at Plataforma Brasil under the number 01796512.5.0000.5147. RESULTS Of the 695 patients invited to participate in the study, four refused to join the research and two were fully anticoagulated. Another 17 were excluded due to the presence of lesions that would make access to the second duodenal part impossible and/or due to previous surgical intervention with anatomical changes. Of the remaining 671 patients, 341 were female (50.8%) and 330 male (49.2%). The age range varied from 18 to 80 years old (mean age of 50 years old). The sequence followed by the research and the results obtained are in FIGURE 2. Group 1 (n=671) shows the following results: position A in 324 (48.3%) patients, position B in 112 (16.7%) and position C in 235 (35%). In Group 2, 347 patients were submitted to the straightening maneuver. Among them, position A was identified in 186 (53.6%) patients, position B in 51 (14.7%) and position C in 110 (31.7%). In Group 3, the biopsy forceps was used for the active search for the MDP in the remaining 161 patients. Within this group, position A was identified in 94 (58.4%), position B in 14 (8.7%) and position C in 53 (32.9%) patients. Considering only MDP full visualization in sequence, position A was observed in 324 (48.3%) patients of Group 1, 186 (27.7%) of Group 2 and 94 (14%) of Group 3, with a total of 604 of the 671 patients examined (90%). Comparing the number of fully visualized MDP without the biopsy forceps (510/671, 76%) versus the number of fully visualized MDP with the use of the biopsy forceps (604/671, 90%), the result is P<0.01. During the study, two papilla lesions were identified: one adenoma and one lymphoma. DISCUSSION The MDP can be the site of several benign and malignant lesions(7-10). However, due to its anatomic position in the posteromedial wall of the descending duodenum, the forward-viewing device has some limitations, both in the identification and in the examination of the MDP(1,11). The side-viewing duodenoscope is recommended for optimal evaluation of the MDP. Nonetheless, such a kind of endoscope is not available in most outpatient digestive endoscopy units and is almost exclusively found in hospital units performing ERCP(3). The rates of full MDP visualization with conventional EGD have shown great variation in the few studies published, as demonstrated below (TABLE 1).
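As a rough illustration of the headline comparison, the snippet below reruns the 510/671 versus 604/671 contrast as a two-proportion z-test. This is only an approximation of the paper's unnamed hypothesis test; since the same 671 patients are compared before and after the forceps step, a paired test such as McNemar's would arguably be more appropriate.

```python
from statsmodels.stats.proportion import proportions_ztest

# Fully visualized MDP: 510/671 without forceps vs 604/671 with forceps
stat, pval = proportions_ztest(count=[604, 510], nobs=[671, 671])
print(f"z = {stat:.2f}, p = {pval:.2e}")  # consistent with the reported P<0.01
```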
Our MDP full visualization rate with forward-viewing gastroscopy was 76%, versus 80.8% in study 2, 54.7% in study 1 and only 23.8% in study 3. Nevertheless, study 1 did not perform the straightening maneuver in 81 patients in whom the MDP was partially visualized, and only performed it in 144 patients whose MDP was not identified. In those 144 patients, the straightening maneuver improved the full identification of the MDP in 77 (50%) patients. If the same rate were applied to the 81 patients in whom the MDP was partially visualized and the straightening maneuver was not performed, there would be at least 40 new cases of fully visualized MDP which, if added to the 185 described in the study, would reach 225 patients, or 74.5%(1). The MDP full visualization rate would then be really close to the values found in our study (76%) and in study 2 (80.8%). Only study 3 presented a very low MDP full visualization rate with conventional EGD (23.8%)(4). The reasons for such a difference in MDP visualization rates could lie in the professionals performing those procedures, considering that only 44% of the endoscopic examinations in study 3 were performed by ERCP-trained endoscopists. In that study, ERCP-experienced endoscopists were able to locate the MDP during conventional EGD in a significantly higher number of cases than non-ERCP-trained endoscopists (80% vs 60%, P=0.033). In studies 1 and 2, all examinations were carried out by ERCP-trained endoscopists. In our study, all procedures were performed by advanced fellows, together with ERCP-experienced staff. Another reason for the low success rate in study 3 could be the time established for the MDP search. That study defined a maximum of 2 minutes after passing the pylorus to locate the MDP, whereas in other studies, including ours, no time limit was defined for the MDP search. The straightening maneuver with forward-viewing gastroscopy described in study 1 by Hew WY et al.(1) improves the MDP detection rate, which was also observed in our study. Similarly to study 1, in which the straightening maneuver increased the MDP full visualization rate by 50%, in our study the maneuver increased the MDP full visualization rate by 53.6% (FIGURE 2). Even though the cap-assisted endoscopy technique, which requires the use of a transparent cap fitted to the tip of the endoscope for manipulation of duodenal folds and better exposure of the MDP, may be highly effective for the full detection of the MDP, it requires an additional examination. Unless patients have a previous indication for MDP study and cap-assisted endoscopy with forward-viewing EGD, it will be necessary to remove the endoscope in order to insert the cap and reintroduce it for MDP evaluation. Studies 2 and 3 used the cap for MDP identification. In study 2, the MDP full visualization rate with a 4 mm cap was 98% (118 out of 120 patients) and reached 100% with the use of an 11 mm cap(5). In study 3, the MDP full visualization rate was 97% (98 out of 101 patients)(4). In our study, the use of the biopsy forceps increased the MDP full visualization rate by 14%, reaching 604 of the 671 patients examined (90%).
The advantage demonstrated in our study is exactly the fact that our technique for MDP examination can be used in all patients submitted to conventional EGD, with no previous indication for MDP evaluation. Although the cap can be easily fitted to the tip of the endoscope and is a low-cost alternative, it is usually associated with specific therapeutic or diagnostic purposes, and its use is commonly limited to cases of MDP evaluation in patients with a suspected MDP lesion or at high risk for MDP lesions. The disadvantage of our technique is the use of the biopsy forceps, which, in spite of its low cost, as occurs with the caps, represents an extra expense in the search for a lesion that is not considered highly prevalent. The forward-viewing gastroscope and the biopsy forceps are the two most basic tools for the endoscopic study of the upper GIT, and are available in all endoscopy services in the world. Therefore, the active search for the MDP with this technique can be carried out in virtually all routine procedures, and not exclusively in those with an indication for the study of the MDP. Considering that in 2010, in the US alone, there were 2,895,999 conventional EGDs carried out(12), the use of the biopsy forceps as an auxiliary technique in the examination of the MDP would enable a full evaluation of the MDP in 90% of the upper digestive endoscopies carried out in the world. Such a practice would definitely increase the detection of small MDP lesions, which would enable not only early diagnosis, but also their endoscopic treatment, resulting in less morbidity and mortality for patients(13). CONCLUSION The use of the biopsy forceps as an auxiliary technique to the straightening maneuver in the active search for the MDP with forward-viewing conventional gastroscopy significantly increased the full MDP visualization rate (P<0.01) and can be easily incorporated into routine GI endoscopic examinations. FIGURE 2. Endoscopic results found following the research design. MDP: major duodenal papilla.
Search for sub-mm, mm and radio continuum emission from Extremely Red Objects

We present the results of sub-mm, mm (850 um, 450 um and 1250 um) and radio (1.4 and 4.8 GHz) continuum observations of a sample of 27 K-selected Extremely Red Objects, or EROs, (14 of which form a complete sample with K<20 and I-K>5) aimed at detecting dusty starbursts, deriving the fraction of UltraLuminous Infrared Galaxies (ULIGs) in ERO samples, and constraining their redshifts using the radio-FIR correlation. One ERO was tentatively detected at 1250 um and two were detected at 1.4 GHz, one of which has a less secure identification as an ERO counterpart. Limits on their redshifts and their star forming properties are derived and discussed. We stacked the observations of the undetected objects at 850 um, 1250 um and 4.8 GHz in order to search for possible statistical emission from the ERO population as a whole, but no significant detections were derived either for the whole sample or as a function of the average NIR colours. These results strongly suggest that the dominant population of EROs with K<20 is not comprised of ULIGs like HR 10, but is probably made of radio-quiet ellipticals and weaker starburst galaxies with L<10^{12} L_sun and SFR<100 M_sun/yr.

Introduction

The existence of a population of extragalactic objects with extremely red infrared-optical colours has been known for a number of years (for a recent review, see Cimatti 2000). These objects were initially discovered mainly in near-infrared surveys of blind fields and quasar fields and are very faint or invisible in the optical bands (Elston et al. 1988; McCarthy et al. 1992; Hu & Ridgway 1994). These extremely red objects (EROs) are defined as those which have R−K colours ≥ 5, and these tend to have K magnitudes fainter than ∼18. Since the time of their discovery, the nature of this population has remained a puzzle.

Based on their NIR and optical photometric data, two broad classes of models are consistent with the observed red colours: (a) high redshift starbursts, red because of severe dust extinction; from the required extinction and simple dust models, these galaxies could be normal starburst galaxies or even high-z counterparts of local Ultra Luminous Infrared Galaxies (ULIGs); and (b) old passively evolving ellipticals at redshifts greater than about one. The red colours of the EROs would then be explainable by a large K-correction and an absence of ongoing star formation. If the EROs belong to the former class, then they would be dominant sites of star formation and would be important in determining the star formation history of the universe (Cimatti et al. 1998a). On the other hand, if they belong to the latter class, then the volume density of these objects as a function of redshift would pose strong constraints on the models for the formation of elliptical galaxies, which range from monolithic collapse to dark matter dominated hierarchical structure formation scenarios (Daddi et al. 2000b and references therein).

Over the last few years, it has been possible to study in detail a handful of EROs which are bright enough to yield reliable spectra and have multi-wavelength continuum data. These studies were able to determine the nature of these EROs and there are now examples known for both starbursts (Cimatti et al. 1998a; Cimatti et al. 1999; Smail et al. 1999a; Smail et al. 1999b; Gear et al. 2001) as well as for old ellipticals (Spinrad et al.
1997; Cimatti et al. 1999; Soifer et al. 1999; Liu et al. 2000) among the ERO population. Studies indicate, though, that not more than ∼30% of EROs are starbursts (see Sect. 5 for details). Recently, independent wide-field surveys for EROs have been conducted (Daddi et al. 2000a; McCarthy et al. 2000; Thompson et al. 1999) which have shown that these objects are strongly clustered in the sky (see also Chapman et al. 2000 and Yan et al. 2000). The surface density of these EROs after correcting for clustering, assuming that these are passively evolving ellipticals, has been shown to be consistent with pure luminosity evolution with a formation redshift greater than 2.5 (Daddi et al. 2000b).

HR 10 is one of the reddest EROs and is quite bright in the NIR (I−K = 6, K = 18.42, Graham and Dey 1996). Cimatti et al. (1998a) detected 850 µm and 1250 µm emission from this object (see also Dey et al. 1999). They derived a star formation rate of several hundred M_⊙ yr^-1 and an FIR luminosity in excess of 10^12 L_⊙, thus showing that HR 10 is an ULIG at its redshift of 1.44 (Graham and Dey 1996). At the time of the observations presented in this paper, HR 10 was the only ERO with detected sub-mm emission and also the only ERO with a known redshift. Hence it was thought possible that a majority of EROs would be similar to HR 10 and would have observable sub-mm and mm emission (Cimatti et al. 1998b, Andreani et al. 1999). Therefore we began a search for mm and sub-mm continuum emission from other EROs with an aim to detect extreme starburst galaxies. Further, in order to constrain the redshifts of these objects and also to understand their nature, we also decided to search for radio continuum emission from these EROs.

The radio and the FIR luminosities of nearby galaxies (z < 0.4) which are dominated by star formation are known to be highly correlated (see Condon 1992 and references therein), whereas E and S0 galaxies are known to be more radio bright than a star-forming galaxy of similar FIR luminosity (Walsh et al. 1989). The radio (mainly synchrotron and some amount of free-free) emission and the FIR (due to dust) emission have different spectral indices. Hence, assuming the local radio-FIR correlation holds at high redshift, the observed ratio of radio to FIR emission strengths can be used to determine the redshift of a star-forming galaxy (the redshift determination method and the associated error estimation are developed in Carilli & Yun 1999, 2000 and Blain 1999). So, if on the one hand the ERO population consists primarily of starbursts, this method could be used to determine the nature of EROs and also to constrain the redshifts of these objects. If, on the other hand, the EROs are ellipticals, their IR colours would be used to constrain their properties. Since most EROs are too faint to obtain redshifts even with 10m-class telescopes, or even to obtain accurate photometry over the entire optical-IR range, such complementary diagnostics become important in understanding these objects.

We first describe the ERO sample, the observations and their results in Sects. 2 and 3. In Sect. 4, we derive statistical properties of the sample, and we estimate the fraction of starbursts and ellipticals with radio emission in Sect. 5. Those EROs with radio or mm detections are discussed in detail and their properties are derived in Sect. 6. The cosmology adopted in this paper is a flat universe with H_0 = 70 km s^-1 Mpc^-1 and all results are calculated for both Ω_Λ = 0.7 and Ω_Λ = 0.0.
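The redshift indicator just described boils down to an observed spectral index between a sub-mm and a radio band, which rises with redshift for a star-forming SED. A minimal sketch of the index computation follows (the calibrated z-α curves of Carilli & Yun 2000 are tabulated, not analytic, and are not reproduced here; the flux densities in the example are hypothetical):

```python
import math

def alpha_850_1p4(s_850_mjy, s_1p4_mjy):
    """Observed spectral index between 850 um (~353 GHz) and 1.4 GHz:
    alpha = log(S_850 / S_1.4) / log(nu_850 / nu_1.4).
    For a dusty starburst this ratio, and hence alpha, increases with
    redshift as the 850 um point climbs the dust SED while the radio dims."""
    nu_850_ghz = 2.998e8 / 850e-6 / 1e9  # ~352.7 GHz
    return math.log10(s_850_mjy / s_1p4_mjy) / math.log10(nu_850_ghz / 1.4)

# An 850 um upper limit paired with a radio detection yields an upper
# limit on alpha, and thus (via the z-alpha curves) on the redshift.
print(alpha_850_1p4(3.5, 0.26))  # hypothetical fluxes -> alpha ~ +0.47
```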
The spectral index is assumed to be −0.7 throughout this paper for calculating the K-correction for the radio emission.

Sample and Observational details

A sample of 27 EROs was selected from the literature and also from other samples being studied by the authors. These objects were identified from a variety of observations: deep NIR surveys of random fields, galaxy cluster fields and quasar fields (see references in Table 1). An R−K or I−K lower cut-off of 5 was used to select these EROs. Given the heterogeneous nature of the fields from which these objects were taken, the sample as a whole is not complete or uniform. However, a subset of the objects observed, the EES sub-sample (Elston et al. 2001), does form a complete sample of 14 EROs, selected such that K < 20 and I−K > 5, and these are listed separately in Table 1. Hereafter, the term ERO will imply all 27 objects listed in the table and the EES sample will include only the 14 objects forming a complete sample. The average R−K and I−K colours of the EES sample and the rest of the EROs are statistically indistinguishable.

A sub-sample of the EROs selected were observed at 1250 µm using the IRAM 30 m telescope (15 objects) and at 850 µm (21 objects) and 450 µm (5 objects) using SCUBA at the JCMT. Some of these objects have also been observed either at 4.8 GHz or at 1.4 GHz (11 and 7 objects respectively) using the VLA. Due to various telescope scheduling constraints, the same objects could not be observed in all wavebands.

Elston et al. (2001) have completed a BRIzJK field survey (hereafter, the EES survey) covering ∼100 arcmin^2 over four areas of the sky at high galactic latitudes, down to K ∼ 21.5. These observations were carried out at the 4 m telescope of the Kitt Peak National Observatory. Optical imaging was obtained with the PFCCD/T2KB, which gives 0.48″ pixels over a 16′ field. The IR imaging was obtained with IRIM, in which a NICMOS3 HgCdTe array provides 0.6″ pixels over a ∼2.5′ field. Details of the observations and data analysis are given in Elston et al. (2001). The BRIzJK survey images were co-aligned and convolved to the same effective PSF of FWHM ∼ 1.5″. Calibrations of the optical and IR images onto the Landolt and CIT systems were obtained using observations of Landolt and UKIRT standard stars, respectively.

NIR observations of the EES complete sub-sample

A catalogue of objects within each of the four fields of the EES survey was obtained from the K images using SExtractor (Bertin & Arnouts 1996). Object detection was performed down to a level corresponding to 1.5σ above the sky level, with a minimum object size of 1.2 square arcseconds. Photometry was obtained through 3.3″ diameter apertures using this catalogue on the BRIzJK images; the 4σ limit in the K band is 21.4. Details of these catalogues are available in Elston et al. (2001).

SCUBA observations at the JCMT

A sub-sample was observed with the Submillimeter Common-User Bolometer Array (SCUBA) instrument on the James Clerk Maxwell Telescope (JCMT) at 450 µm and 850 µm in the standard point-source photometry mode during different observing runs from 1998 to 2000. The typical opacity τ was in the range 0.2-0.5 and 2-4 at 850 µm and 450 µm respectively. The on-source integration times varied between 900 and 3600 seconds. The data reduction was performed using the Starlink SURF software (Jenness & Lightfoot 1998). For each integration, the measurements in the reference beam were subtracted from those in the signal beam, rejecting obvious spikes.
Flat-field corrections were applied to each observation, which was subsequently corrected for atmospheric opacity. The residual sky background emission was then removed using the median of the outputs of the different rings of the bolometer as a background estimate. The flux calibration of the data was performed mainly using a primary calibrator (Uranus or Mars), thus yielding a 10% accuracy for the flux density scale. A poorer calibration accuracy (20%) was obtained for EES Lynx 2 and SA57 2-3, for which the secondary calibrators CRL618 and HL Tau were used respectively. The individual reduced and calibrated observations were concatenated for each source, thus obtaining a final co-added data set.

IRAM 30m telescope observations

The 1.25 mm data reported here were taken with the MPIfR 37-channel and 19-channel bolometers (Kreysa et al. 1998) at the focus of the IRAM 30m antenna (Pico Veleta, Spain) during observing runs in March 1998 and December 1998 respectively. The filter set used, combined with atmospheric transmission, produces an effective wavelength of about 1.25 mm; the beam size is 11″ (FWHM) and the chop throw was set at 50″ and 30″ during the first and second run, respectively. The expected average sensitivity for each channel, limited principally by atmospheric noise, was 60 mJy/√BW, where BW is the bandwidth used, in Hz. However, the 37-channel bolometer observations were noisier than expected and therefore the 19-channel bolometer was used for the second observing run. The effect of sky noise was reduced substantially by exploiting the correlation between signals from the different channels using the standard three beam (beam-switching + nodding) technique, resulting in an average rms 1σ value, after 6000 seconds of integration, of 0.57 mJy and 0.4 mJy for the 37-channel and 19-channel data respectively. The typical on-source integration time was 6000 seconds, distributed over two to three nights. The atmospheric transmission was monitored by making frequent sky-dips. The average zenith opacity was 0.13 during the observations in March, with a maximum of 0.2, and the average opacity was ∼0.3 during the December observing run. Absolute flux calibration was performed using Uranus as the primary calibrator and using Mars and quasars from the IRAM pointing list as secondary calibrators. The different calibration measurements were consistent at a level of 5% for both planets. Including the uncertainty in the planet temperatures, the average flux calibration uncertainty was estimated to be 10%. Pointing was checked every hour and the average accuracy achieved was better than 3″.

The data were reduced assuming that the target sources are unresolved, i.e. the source sizes at mm wavelengths are smaller than the size of the central channel. The remaining 36 and 18 channels (excluding one which suffered a large electronic loss) were then used to derive a low-noise sky estimate. The average value of the sky brightness, computed using these outer 35 and 17 channels respectively, was subtracted from the signal in the central channel to derive the final flux density estimates.

VLA observations

The Very Large Array (VLA) was used to observe a total of 16 EROs distributed in six different fields. 4.8 GHz radio continuum emission was searched for in eleven of these objects using the D configuration, chosen as a compromise between optimising point-source sensitivity with a low-resolution array and minimising confusion with a high-resolution array.
The correlator was used in the continuum mode and data were acquired in 4 IF bands, each of 50 MHz bandwidth. The flux calibration and initial phase calibration were done using standard techniques described in Taylor et al. (1999) and the data reduction was done using standard algorithms in the software AIPS. Self-calibration of only the visibility phases was done for a few fields using background sources in the primary beam. The rms noise in the final images is within 20% of the expected thermal noise.

Seven EROs distributed in two fields were observed using the VLA in the B configuration at 1.4 GHz. The frequency of observation was chosen based on the expected steep spectral index of the continuum emission of EROs (typically −0.7) and the poor sensitivity of the VLA at ν < 1 GHz. Optimising between confusion and the slightly extended nature of EROs (≥2″) led to the B configuration being used (the number of sources within a synthesised beam area in this configuration with flux densities higher than 1σ for 8 hours of integration is 0.015, using the relation given in Langston et al. 1990). These observations were made in the line mode using 4 IFs, each with a bandwidth of 25 MHz and 7 available channels (the 50 MHz bandwidth system was not used at this frequency as much higher closure errors are expected: Owen, private communication). The data were Hanning smoothed following standard calibration procedures and were imaged using multi-band synthesis (in order to reduce the effect of bandwidth smearing far from the phase centre). The tangent-plane approximation was used to correct for the effects of the array non-coplanarity, and strong sources within two primary beams of the phase centre were isolated by defining multiple fields and simultaneously deconvolved using the CLEAN algorithm. Self-calibration using sources in the primary beam was done only for the visibility phases, and the data were flagged based on excessive closure errors. An rms within 20% of the theoretical noise was achieved for the EES-Pisces field, but the noise in the EES-Cetus field was substantially higher due to the presence of strong sources near the phase centre. Further details are given in Table 2.

Results

Out of the seven sources observed at 1250 µm, one source, SA57-1, was marginally detected at the 3σ level. The measured flux densities at the NIR positions of the EROs, along with the 1σ errors, are listed in Table 3 for the 450 µm, 850 µm and 1250 µm data. As can be seen, none of the other observed sources were detected at these wavelengths. We have detected 1.4 GHz radio continuum emission from the NIR positions of two EROs: EES-Cetus 1 and EES-Cetus 2 (hereafter, EESC1 and EESC2). None of the other sources observed at either 4.8 GHz or 1.4 GHz were detected, and the corresponding measurements at the expected NIR positions and the 1σ errors are given in Table 3. The 1.4 GHz image of the EES-Cetus field with the two detections is shown in Fig. 1. Though there appears to be some extended emission in the image of EESC1, given the signal-to-noise ratios of the two detections, the two sources are essentially unresolved.

We estimate the likelihood of these two radio sources being the counterparts of the NIR-detected EROs as follows: the positional error in the radio positions is calculated for the two sources using the relations given in Rieu (1969), assuming the error in the NIR position to be 0.5″, which is the 1σ error between the radio and the optical reference frames (Russell et al. 1990).
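The likelihood ratio computed in the next step converts into the quoted a posteriori probabilities through simple prior odds; a minimal sketch, assuming the 50% prior adopted in the text (the LR values themselves come from the de Ruiter et al. statistic and are taken as given below):

```python
def posterior_prob(lr, prior=0.5):
    """Posterior probability that a radio source is the true counterpart,
    given a likelihood ratio `lr` and a prior probability `prior` that a
    counterpart exists. With prior = 0.5 this reduces to lr / (1 + lr)."""
    odds = lr * prior / (1.0 - prior)
    return odds / (1.0 + odds)

print(posterior_prob(1276))  # ~0.9992 -> the 99.92% quoted for EESC1
print(posterior_prob(43))    # ~0.9773 -> the 97.73% quoted for EESC2
```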
The value of the likelihood ratio (LR) as defined by de Ruiter et al. (1977) is calculated to be 1276 for EESC1 and 43 for EESC2, using the relation in Langston et al. (1990) to estimate the surface density of objects with S_1.4GHz > 0.1 mJy. This implies that the a posteriori probabilities of the radio detections being actual counterparts are 99.92% and 97.73% for EESC1 and EESC2 respectively (assuming that the a priori probability that the ERO does have a radio counterpart is 50%). Hence the radio counterpart to EESC1 seems real, but the identification of the counterpart of EESC2 is slightly less secure.

Statistical properties of the sample

Though almost the entire sample is undetected at all observed wavelengths, the flux densities measured at the IR positions of the sources can be used to compute the statistical flux densities of the sample and hence derive stricter upper limits. This weighted average flux density at 4.8 GHz is 3.3 ± 4.6 µJy for a sample of 10 objects observed (the value for the 1.4 GHz data is −15 ± 12 µJy, and is far less significant in terms of the implied limits on the star formation rate). The statistical flux density at 850 µm for the entire sample of 21 EROs is 1.0 ± 0.4 mJy and the corresponding number for the 1250 µm measurements is 0.18 ± 0.12 mJy for 15 sources. Clearly, the 1250 µm and the 4.8 GHz data do not yield a statistical detection of the ERO population, whereas the 850 µm data yield a marginal 2.5σ statistical detection. A similar exercise was carried out for the EES sample as well, and the average flux densities at 4.8 GHz, 1250 µm and 850 µm are 3.5 ± 4.8 µJy, 0.18 ± 0.13 mJy and 0.3 ± 0.6 mJy for nine, seven and fourteen objects respectively; there is no statistical detection for this complete sample either. We have quoted the values for the 4.8 GHz and the 1250 µm data for the EES sample even though not all sources have been observed, as the observed sources probably form a random sub-sample of the complete sample, since they were selected based on telescope scheduling constraints.

The 2.5σ result for the average 850 µm flux density for the entire sample can be traced solely to the non-EES sample of 7 galaxies, whose mean signal-to-noise ratio is 1σ. Therefore it is probable that some of these EROs not in the EES sample might be detectable at 850 µm if observed further. In this paper, we will take the 3σ upper limit of 1.2 mJy for the average flux density at this wavelength for the entire ERO sample. The sample was also divided into two sub-samples using an I−K colour cut-off (various cut-off values ranging from 5.3 to 5.9 were tried) and it was seen that there is no statistically significant difference in the average flux densities between the two sub-samples. This result holds for the EES sample as well. If the observed population is divided into ellipticals and starbursts using the I−K versus J−K diagram of Pozzetti & Mannucci (2000), described in Sect. 5.1, the two categories of EROs do not differ in their average flux densities either, within errors.

Ellipticals or Starbursts?

Though attempts to distinguish between pure starbursts and elliptical galaxies among the ERO population have met with unambiguous results only for sources bright in the IR and optical, or for sources with detectable mm/sub-mm emission, a few recent studies seem to show that probably ≥70% of the EROs are old high redshift ellipticals. The evidence for this is based on both individual spectroscopic identifications of small samples of EROs (Liu et al.
2000) and on high resolution images, using morphological information (Stiavelli & Treu 2001) and through fitting the de Vaucouleurs' law to radial profiles (Moriondo et al. 2000). Additionally, a recent wide-field survey of EROs by Daddi et al. (2000a) has shown that these objects are strongly clustered in the sky, and this result has been confirmed in two other fields by McCarthy et al. (2000). The strong clustering of the ERO population is added evidence that the majority of EROs are indeed ellipticals, since ellipticals are known to be more clustered than spirals, and also because of the narrow range of redshift, z = 1-2.5, allowed for extremely red ellipticals (Daddi et al. 2000b).

Diagnostic techniques

In this section, the inferences that can be drawn from the observed multi-frequency flux densities of the EROs using group properties of ellipticals and starbursts are investigated. It is known that the K magnitudes of radio-loud galaxies are correlated with their redshifts (Lilly & Longair 1984). It is also established that galaxies of other types have fainter K magnitudes than radio-loud galaxies at the same redshift (van Breugel et al. 1999, de Breuck 2000), which can be used to set upper limits on the redshift of EROs. Given the faintness of EROs in the K band, such an exercise constrains them to lie at z < 5. If we consider the upper limit to the 1.4 GHz flux density of the undetected ERO sample to be 0.1 mJy (extrapolated from the 4.8 GHz upper limit; this excludes the EROs observed at 1.4 GHz, which have varying upper limits, as well as the two detections), then for z < 5 the rest frame 1.4 GHz luminosity is less than 2×10^25 W/Hz for Ω_Λ = 0.7 (and less than 6×10^24 W/Hz for Ω_Λ = 0). From the bi-modal 1.4 GHz luminosity distribution of the IRAS 2 Jy sample (fig. 15 of Yun et al. 2001), it is clear that our sample cannot be differentiated into galaxies dominated by starbursts versus those dominated by AGNs, which reflects the fact that our radio data are not deep enough to detect weak starbursts.

Pozzetti & Mannucci (2000) showed that old ellipticals and dusty starbursts occupy distinctly different areas in the I−K versus J−K diagram and derived the theoretical dividing line in this plane. Due to the faintness of most EROs in the K band, the error bars for the colours are too large for this diagnostic to be used profitably. Additionally, HR 10, the definitive example of a dusty starburst ERO (Andreani et al. 2000; Dey et al. 1999; Cimatti et al. 1998a), lies on the dividing line. Hence, more sensitive NIR photometry is needed in order to use this method of classification. For our sample, based on the few IR-optical colours available, and the upper limits from the radio, mm and sub-mm observations, it is not possible to determine the nature of each of the EROs individually. Instead, we now discuss the statistical properties of EROs, for the dusty starburst galaxy and the old elliptical galaxy components, separately.

Star formation properties of EROs

The average 850 µm flux density limit can be used to derive upper bounds on the average star formation properties of the sample. Assuming that all EROs in the sample are at a redshift of 1.5 (i.e., roughly at the redshift of HR 10), we derive the average star formation rate to be less than 150 M_⊙ yr^-1 (from the relation given in Carilli & Yun 1999, and assuming a dust emissivity β = 1.5) and the average FIR luminosity to be less than 1.6×10^12 L_⊙ (Ω_Λ = 0).
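The dust-mass limits quoted next follow from the standard single-temperature modified-blackbody relation; a minimal sketch (conventions for the K-correction and the κ normalisation differ between authors, so this reproduces the quoted limits only to within a factor of a few):

```python
import math

H_PLANCK, K_B, C = 6.626e-34, 1.381e-23, 2.998e8
M_SUN, MPC_M = 1.989e30, 3.0857e22

def planck_bnu(nu_hz, t_k):
    """Planck function B_nu in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H_PLANCK * nu_hz**3 / C**2 / math.expm1(H_PLANCK * nu_hz / (K_B * t_k))

def dust_mass_msun(s_mjy, z, dl_mpc, t_dust, beta=1.5):
    """M_d = S_nu D_L^2 / ((1+z) kappa(nu_rest) B_nu(nu_rest, T_d)), with
    kappa = 0.15 m^2 kg^-1 at 800 um scaled as nu^beta (Hughes et al. 1997)."""
    nu_rest = (C / 850e-6) * (1.0 + z)               # observed 850 um band, rest frame
    kappa = 0.15 * (nu_rest / (C / 800e-6)) ** beta  # m^2 / kg
    s_si = s_mjy * 1e-29                             # 1 mJy = 1e-29 W m^-2 Hz^-1
    dl_m = dl_mpc * MPC_M
    return s_si * dl_m**2 / ((1.0 + z) * kappa * planck_bnu(nu_rest, t_dust)) / M_SUN
```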
For a dust temperature of 20 K, the corresponding upper limit on the average dust mass is 6×10^8 M_⊙, and for 50 K, the dust mass is less than 1.4×10^8 M_⊙. For Ω_Λ = 0.7, SFR < 380 M_⊙ yr^-1, L_FIR < 4×10^12 L_⊙ and M_dust < 2×10^9 M_⊙ for T_dust = 20 K and < 3.6×10^8 M_⊙ for T_dust = 50 K (for a dust emissivity β = 1.5 and a mass absorption coefficient of 0.15 m^2 kg^-1 at 800 µm; Hughes et al. 1997).

However, we can assume that a fraction x of the EROs are dusty ULIGs which resemble HR 10 in their properties (L_FIR ∼ 4×10^12 L_⊙ at z ∼ 1.5) and the rest are ellipticals with no sub-mm and mm emission. Then, from the estimated 5σ average flux densities of the ERO sample at 850 µm and 1250 µm and the measured flux densities of HR 10 (S_850µm = 5.5 ± 0.6 mJy, the weighted average of the values quoted in Cimatti et al. 1998a and Dey et al. 1999, and S_1250µm = 4.9 ± 0.7 mJy, Cimatti et al. 1998a), the value of x can be computed. Such an exercise gives x < (36 ± 11)% and x < (12 ± 4)% for the 850 µm and 1250 µm data respectively. This estimate, though approximate, is consistent with other independent estimates of the ULIG fraction in EROs: ≤30% (Moriondo et al. 2000).

If we assume, as a conservative estimate, that the starburst fraction in the ERO population is as much as 30%, then from the surface density of EROs with R−K_s ≥ 5 and K_s ≤ 19.2 estimated by Daddi et al. (2000a), the surface density of starbursts would be less than 725 ± 33 objects deg^-2. Smail et al. (1999a) calculated that the surface density of SCUBA sources with a 850 µm flux density greater than 0.5 mJy (which corresponds to the cut-off estimated in order to fully explain the Far Infrared Background or FIRB) is 17000 objects deg^-2 (for S_850µm ≥ 2 mJy, which would explain half the observed FIRB, the surface density is 3700 objects deg^-2). Hence the maximum overlap between R−K_s ≥ 5, K_s ≤ 19.2 EROs which are starbursts, and the high redshift star forming SCUBA sources is 4% (for a 0.5 mJy cut-off for S_850µm; it is less than 20% for a 2 mJy cut-off). Given the lack of detectable sub-mm emission from the sample of objects, and the estimate of the corresponding fraction of HR 10-like ULIGs, the present study clearly shows that ULIGs like HR 10 are rather unique objects among EROs and hence dusty strong starbursts are not the dominant component of this population.

EROs as elliptical galaxies

It can be seen from the upper limits to the radio luminosities of the EROs (derived in Sect. 5.1) that there are no radio-loud ellipticals in the sample. Hence these EROs must either be centre-brightened radio galaxies (or FR I; see Fanaroff & Riley 1974 and Ledlow & Owen 1996 for definitions; the optical luminosities have been derived assuming a median redshift of 1.5 and m_R derived from Table 1) or radio-quiet ellipticals. There are two detections at 1.4 GHz, and the 4.8 GHz upper limits of 11 EROs scaled to 1.4 GHz are ∼0.1 mJy. Assuming that ≥70% of these 13 EROs are elliptical galaxies, the detection rate of ellipticals for a 0.1 mJy cut-off at 1.4 GHz is calculated to be ≤22 ± 16%. If the three EES-Cetus sources undetected in the radio are also included, the detection rate becomes ≤27 ± 19%. For a redshift range of 1-3, 0.1 mJy at 1.4 GHz corresponds to a 1.4 GHz rest-frame radio luminosity of 2×10^23-2×10^24 W/Hz (Ω_Λ = 0). Also, the rest-frame R band magnitude for the sample is between −18 and −25.5 (assuming a K-correction K(z) = 1.122z for elliptical galaxies, see Ledlow & Owen 1996).
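For concreteness, the rest-frame radio luminosities quoted in this paragraph follow from the K-corrected inverse-square relation with the paper's assumed spectral index of −0.7; a minimal sketch for the Ω_Λ = 0 (Einstein-de Sitter) case:

```python
import math

C_KM_S, MPC_M = 299792.458, 3.0857e22

def dl_eds_mpc(z, h0=70.0):
    """Luminosity distance (Mpc) in an Einstein-de Sitter universe
    (the Omega_Lambda = 0 case): D_L = (2c/H0)(1+z)(1 - 1/sqrt(1+z))."""
    return 2.0 * (C_KM_S / h0) * (1.0 + z) * (1.0 - 1.0 / math.sqrt(1.0 + z))

def l_rest_w_hz(s_mjy, z, alpha=-0.7):
    """Rest-frame luminosity (W/Hz): L = 4 pi D_L^2 S_nu (1+z)^-(1+alpha)."""
    dl_m = dl_eds_mpc(z) * MPC_M
    return 4.0 * math.pi * dl_m**2 * (s_mjy * 1e-29) * (1.0 + z) ** (-(1.0 + alpha))

print(f"{l_rest_w_hz(0.1, 1.0):.1e}")  # ~2e23 W/Hz at z = 1
print(f"{l_rest_w_hz(0.1, 3.0):.1e}")  # ~2e24 W/Hz at z = 3
```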
It should be noted that, for the same set of parameters, using the data published in Gavazzi & Boselli (1999), Ho (1999), Ledlow & Owen (1996) and Auriemma et al. (1977), we derive the detection rate to be 1% to <3% for low redshift ellipticals.

Continuum detections: EESC1, EESC2 and SA57-1

The two EROs, EESC1 and EESC2, with 1.4 GHz continuum detections (though the radio identification is tentative for EESC2) have only upper limits to their sub-mm flux densities. The optical-IR SED of these objects cannot be used to classify them unambiguously as either elliptical or star-forming galaxies. Further, the upper limit to the 850 µm-1.4 GHz spectral index α^850_1.4 is +0.47 and +0.56 for EESC1 and EESC2 respectively. The ratio of radio to sub-mm flux density of a pure starburst decreases with increasing redshift, i.e., for a given sub-mm flux density, the z = 0 galaxies have the maximum radio flux density. Hence, for a given upper limit to the sub-mm flux density for a galaxy at an arbitrary redshift, the z = 0 radio-FIR correlation will yield the maximum possible radio flux density. Therefore a sufficiently radio-bright AGN might possibly have a radio continuum strength higher than this value and would be easily classified as a radio-loud AGN. Since that is not so for either of the two radio detections, they cannot be unambiguously classified as either an old elliptical or a dusty starburst. The properties of these galaxies are derived below assuming either of the two possibilities: that they are pure starbursts, or that they are old ellipticals.

EESC1 and EESC2: 1.4 GHz detection

If EESC1 and EESC2 are assumed to be dominated by star formation, then from the derived upper limits to the spectral index α^850_1.4 for EESC1 and EESC2, we can derive the upper limits to the redshifts of these objects. Carilli & Yun (2000) have tabulated the values of redshift for a given value of the spectral index α^850_1.4 and have also tabulated the values for the ±1σ curves of redshift versus α^850_1.4. Using the derived value of α+Δα, where Δα is the 1σ error on the value of the spectral index α, and the z− curve of Carilli & Yun (2000), updated in Carilli (private communication), we estimate the upper limits to the redshifts (see Carilli & Yun 2000 for details). The limiting redshifts of EESC1 and EESC2, derived as described above, are z < 1.5 and z < 2.0 respectively. Their radio continuum flux densities, if attributed solely to star formation, imply star formation rates of about 1100 M_⊙ yr^-1 and 1600 M_⊙ yr^-1 respectively for the maximum redshifts derived above (these are for Ω_Λ = 0.7; the values are about 700 M_⊙ yr^-1 and 1000 M_⊙ yr^-1 respectively for Ω_Λ = 0), and less than these values for lower redshifts (from the relation given in Carilli & Yun 1999). Also, from the observed radio flux density, the FIR luminosity is calculated to be less than ∼10^13 L_⊙ for z < z_max for the two galaxies.

If the two galaxies are assumed to be old ellipticals instead, then using the pure luminosity evolution models, the extreme R−K colours of the two galaxies (R−K > 6.1) imply that they lie at redshifts greater than 1.3 (Daddi et al. 2000b). For z between 1.3 and 5, the 1.4 GHz rest-frame luminosities of EESC1 and EESC2 correspond to the luminosity range of FR I galaxies or radio-quiet ellipticals.

SA57-1: 1.25 mm IRAM detection?

SA57-1 is detected at 1.25 mm with the IRAM 30m telescope at the 3σ level: S_1.25mm = 1.45 ± 0.45 mJy.
Assuming that this detection is real, it is not obvious whether this source is a pure starburst or whether it also harbours an AGN. From the 1.25 mm measurement and the upper limit to the radio emission, a lower limit of 0.9 can be derived for the redshift of the object (using the z+ curve in Carilli & Yun 2000), as described in Sect. 5.1. This is also the lower limit to z if SA57-1 is an elliptical galaxy. Using the K−z relation for radio-loud galaxies, the derived upper limit to the redshift is 3.5. If this object were a pure starburst, the implied SFR is between 45-800 M_⊙ yr^-1 for Ω_Λ = 0 (and 70-1300 M_⊙ yr^-1 for Ω_Λ = 0.7). The corresponding L_FIR is greater than 4.5×10^10 L_⊙.

Conclusions

Motivated by the discovery of HR 10 and its star formation properties, a sample of EROs was observed in order to detect radio, mm and sub-mm continuum emission and constrain the redshifts and star formation rates of ULIGs in the sample. One ERO was detected at 1.4 GHz and a possible radio counterpart was identified for another ERO at the same frequency. A third was tentatively identified at 1250 µm. Their redshifts and star forming properties were constrained using their radio-FIR spectral index, but their nature could not be unambiguously determined. Since the sources are faint, standard techniques to classify the sources in the ERO sample individually as ellipticals or starbursts are inadequate.

Weighted average flux densities were computed for the sample using measurements at the IR positions of the EROs, and these values are 1.0 ± 0.4 mJy at 850 µm, 0.18 ± 0.12 mJy at 1250 µm and 3.3 ± 4.6 µJy at 4.8 GHz. We find no difference within errors in the weighted average values between EROs with NIR colours above and below an assumed I−K colour cut-off. If the sample is divided into ellipticals and starbursts based on the NIR two-colour diagnostic diagram, no differences in their average flux densities are seen between these groups either.

From the lack of detection of sub-mm emission from any of the EROs in our sample, it is now clear that dusty strong starbursts, or high redshift ULIGs, are not the dominant component of this population. If it is assumed that such an ULIG population would resemble HR 10 in its SED properties, then such galaxies cannot constitute more than about 35% of the EROs. From the observed source counts of SCUBA sources and the ERO surface density, we suggest that EROs contribute negligibly both to high redshift star formation and to the FIRB. On the other hand, if all EROs have similar star formation properties, the average dust mass is calculated to be less than 2×10^9 M_⊙ (for Ω_Λ = 0.7 and T_dust = 20 K) and the average FIR luminosity, less than 4×10^12 L_⊙. If not more than a third of the EROs in the sample are assumed to be starbursts and the rest are assumed to be ellipticals at a median redshift of 1.5, we calculate the detection rate of ellipticals for a 0.1 mJy cut-off at 1.4 GHz to be less than 22 ± 16%. The corresponding number estimated in the local universe is ≤3%. Therefore it is clear that the dominant population of EROs is not made of ULIGs similar to HR 10 but probably consists of old ellipticals and weaker starbursts. The determination of the properties and redshifts of the elliptical galaxy component of the ERO population is extremely important for constraining structure formation models (Daddi et al. 2000b).
Towards this end, the upper limits to the radio and the sub-mm flux densities derived in this study should be used to plan future observations to detect these objects at these wavebands.

Notes to Table 1: (a) Though the sample was chosen with K < 20, a revision of the photometry after sample selection resulted in a few objects with K > 20. References: 1: Elston, R. et al. 2001, in preparation; 2: Cowie, L. L. et al. 1996, AJ, 112, 839; 3: Hu, E. M. & Ridgway, S. E. 1994, AJ, 107, 1303; 4: Knopp, G. P. & Chambers, K. C. 1997, ApJS, 109, 367; 5: Cowie, L. L. et al. 1994, ApJ, 434, 114; 6: Eisenhardt, P. & Dickinson, M. 1992, ApJ, 399, L47; 7: Moustakas, L. A. et al. 1997, ApJ, 475, 445; 8: McLeod, B. A. et al. 1995, ApJS, 96, 117; 9: Dey, A. et al. 1995, ApJ, 440, 515; 10: Giallongo, E., private communication; 11: Soifer, B. T. et al. 1994, ApJ, 420, L1.

Notes to Table 3: (a) The three entries in bold face are detections; the rest are the flux densities measured at the NIR positions of the EROs, with the associated 1σ error. These values have been used to calculate the statistical flux densities of the sample (see Sect. 4). (b) 3σ upper limit: Richards, private communication. (c) The upper limit to the 4.8 GHz flux density of ERO-5 is quoted as this source is 1.5 synthesized beams away from a strong confusing source.
Clinical impacts of inflammatory markers and clinical factors in patients with relapsed or refractory diffuse large B-cell lymphoma

Background: Systemic inflammatory response can be associated with the prognosis of diffuse large B cell lymphoma (DLBCL). We investigated the systemic factors significantly related to clinical outcome in relapsed/refractory DLBCL. Methods: In 242 patients with DLBCL, several factors, including inflammatory markers, were analyzed. We assessed the correlation between survival [progression-free survival (PFS) and overall survival (OS)] and prognostic factors. Results: In these patients, a high derived neutrophil/lymphocyte ratio (dNLR) (PFS, HR=2.452, P=0.002; OS, HR=2.542, P=0.005), high Glasgow Prognostic Score (GPS) (PFS, HR=2.435, P=0.002; OS, HR=2.621, P=0.002), and high NCCN-IPI (PFS, HR=2.836, P=0.003; OS, HR=2.928, P=0.003) were significantly associated with survival in multivariate analysis. Moreover, we propose a risk stratification model based on dNLR, GPS, and NCCN-IPI, distributing patients into 4 risk groups. There were significant differences in survival among the 4 risk groups (PFS, P<0.001; OS, P<0.001). Conclusion: dNLR, GPS, and NCCN-IPI appear to be excellent prognostic parameters for survival in relapsed/refractory DLBCL.

INTRODUCTION

Diffuse large B cell lymphoma (DLBCL) is the most common lymphoid malignancy, accounting for 25-30% of all newly diagnosed cases of adult non-Hodgkin's lymphoma (NHL) [1]. Despite an improvement in the overall survival of patients with DLBCL after the introduction of rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone (R-CHOP) therapy into clinics, one-third of the patients remain refractory to the initial therapy or relapse afterward [2]. Therefore, it would be beneficial to identify prognostic markers for the prediction of any subgroups with a poor prognosis within the patients with relapsed or refractory DLBCL.

In the Collaborative Trial in Relapsed Aggressive Lymphoma (CORAL) study, an early relapse (<1 yr) after diagnosis, previous exposure to rituximab, and the age-adjusted International Prognostic Index (IPI) were demonstrated to be significant prognostic parameters associated with the survival rate of patients with relapsed or refractory DLBCL [3]. Recently, the National Comprehensive Cancer Network (NCCN)-IPI has been introduced as a more meaningful prognostic parameter than the traditional IPI for newly diagnosed DLBCL cases [4]. A recent study has shown that a higher NCCN-IPI is significantly associated with a low overall response rate and poor survival in relapsed or refractory DLBCL [5]. However, further research is needed to confirm whether NCCN-IPI specifies clinical outcomes in the relapsed or refractory setting. Furthermore, the cell of origin (COO) subtype in relapsed or refractory DLBCL has also attracted attention as a prominent prognostic biomarker for survival prediction, since DLBCL cases differ in their cellular compositions [5]. However, gene expression profiling or immunohistochemical analysis to determine COO has not been widely implemented in clinical practice.

There is solid evidence from several studies that inflammation is closely associated with the pathogenesis of NHLs [6][7][8]. Pro-inflammatory cytokines in the tumor microenvironment have been shown to promote tumor growth, DNA damage, angiogenesis, and immune suppression [9][10][11]. Therefore, inflammation could negatively affect clinical outcome in NHLs.
Indeed, among the various inflammatory markers associated with cancer, cell-based inflammatory markers including the neutrophil/lymphocyte ratio (NLR), derived NLR (dNLR), lymphocyte/monocyte ratio (LMR), and platelet/lymphocyte ratio (PLR) have been associated with survival in newly diagnosed or relapsed/refractory DLBCL [12][13][14][15][16][17]. The nutrition-related inflammatory markers Glasgow Prognostic Score (GPS) and Prognostic Nutritional Index (PNI) have also been associated with a poor prognosis in newly diagnosed DLBCL patients [18,19]. However, to the best of our knowledge, it is still unclear which of the above-mentioned inflammatory markers and clinical factors, such as IPI and NCCN-IPI, is the most useful parameter for predicting survival in relapsed or refractory DLBCL. Moreover, there is still no clearly established risk stratification model to predict survival in relapsed or refractory DLBCL, although various prognostic factors have been extensively described in this setting. Therefore, this study aimed to assess which inflammatory marker could be the most meaningful prognostic factor for predicting disease progression and survival in patients with relapsed or refractory DLBCL. Furthermore, we attempted to define a prognostic model that incorporates the significant factors associated with patients with relapsed or refractory DLBCL.

Patient eligibility

The information about patients who had relapsed after first-line R-CHOP therapy or who had progressed during the initial therapy from January 2007 to September 2016 was collected. In this study, relapsed or refractory disease was defined according to the criteria outlined by Cheson et al. [20]. Patients were excluded if they had chronic diseases, such as chronic renal disease, chronic hepatitis B and C, and pulmonary tuberculosis, because these could influence the initial levels of C-reactive protein (CRP) and albumin in the serum and negatively affect the disease management or survival of the patients. Patients were also excluded if they presented with DLBCL secondary to low-grade NHL or had received other follow-up treatments, including maintenance therapy and radiotherapy after R-CHOP therapy. Eventually, 242 patients diagnosed with relapsed or refractory DLBCL after or during first-line R-CHOP therapy were enrolled. Cases of relapse in the central nervous system were excluded. Patients who achieved complete or partial response after salvage chemotherapy entered either follow-up or autologous stem cell transplantation (ASCT). Patients who did not achieve any response were treated with supportive care.

Ethics statement

The retrospective review of the records for this study was approved by the Institutional Review Boards of five medical centers, including Pusan National University Hospital, Hanyang University Hanmaeum Changwon Hospital, Chonnam National University Hwasun Hospital, Dong-A University Hospital, and Haeundae Paik Hospital.

Salvage treatment and response assessment

Three salvage chemotherapy schedules were adopted for patients with relapsed/refractory DLBCL. The regimens were as follows: ESHAP/R-ESHAP (etoposide, methylprednisolone, cytarabine, and cisplatin with/without rituximab), DHAP/R-DHAP (dexamethasone, cisplatin, and cytarabine with/without rituximab), or ICE/R-ICE (ifosfamide, carboplatin, and etoposide with/without rituximab). Treatment response was assessed using the National Cancer Institute-sponsored Working Group guidelines [20].
After the salvage chemotherapy was completed, the patients were followed up with physical examinations and laboratory tests every 3 months over 5 years, and imaging tests were also performed twice a year during the follow-up period.

Prognostic factors

The serum beta-2 microglobulin (B2MG) level of each patient at the relapsed or refractory status was measured to evaluate whether the level of this protein was a meaningful prognostic marker. In addition, GPS was determined by the serum CRP and albumin levels measured at the time of a patient's presentation with relapsed or refractory status. Patients with both an elevated CRP level (≥10 mg/L) and a decreased albumin level (<35 g/L) were classified into the GPS 2 group. Patients with only one of these two laboratory abnormalities were classified into the GPS 1 group, and patients without these abnormalities were classified into the GPS 0 group.

To test the clinical value of GPS, several comparative prognostic factors were included as follows: the IPI and NCCN-IPI scores at the relapsed/refractory status were included (high IPI and high NCCN-IPI were defined as scores ≥3 and ≥5, respectively). Additional comparative variables, such as the primary refractory type, defined as progression during R-CHOP therapy, and the maximum 18F-fludeoxyglucose uptake value (SUVmax) on positron emission tomography (PET)/computed tomography (CT) measured at the relapsed or refractory status, were also included.

As systemic inflammatory factors, NLR, dNLR, LMR, PLR, PNI, the systemic inflammation response index (SIRI), and the systemic inflammation index (SII) at the relapsed or refractory status were included. dNLR was defined as neutrophil count/(white blood cell count − neutrophil count) at the relapsed or refractory status. PNI was estimated by the following equation: 10 × serum albumin (g/dL) + 0.005 × total lymphocyte count. SIRI was defined as follows: peripheral neutrophil count × monocyte count/lymphocyte count. SII was defined as follows: platelet count × neutrophil count/lymphocyte count. The SIRI score was considered as follows: patients with both an elevated hemoglobin (Hb) level and an elevated LMR at the relapsed or refractory status (≥137/116 g/L and ≥3.23, respectively) were considered to have a score of 2 (group 2); patients with either an elevated Hb level or an elevated LMR were considered to have a score of 1 (group 1); and patients with both a decreased Hb level and a decreased LMR were considered to have a score of 0 (group 0). Additional factors, such as male sex, R-containing salvage therapy, B symptoms at the relapsed or refractory state, and ASCT after salvage therapy, were included for comparison with the inflammatory markers. However, the cell of origin determined by immunohistochemistry was excluded from our analysis, since such histological assessments were performed in only 81 patients (33.5%).

Statistical analysis

The chi-square test or Fisher's exact test was used as appropriate to analyze categorical variables. Progression-free survival (PFS) and overall survival (OS) were estimated using the Kaplan-Meier method and the 2-tailed log-rank test. PFS was defined as the time from the initiation of the salvage therapy until disease progression or death, whereas OS was defined as the time from the initiation of the salvage therapy until death. The Cox proportional-hazards model was used to evaluate the prognostic impacts of several prognostic factors.
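Before turning to the survival analysis, note that the marker definitions above reduce to a few lines of code; a minimal sketch, with counts entered in consistent units (the helper names are illustrative, not from the study):

```python
def gps(crp_mg_l: float, albumin_g_l: float) -> int:
    """Glasgow Prognostic Score: +1 for CRP >= 10 mg/L, +1 for albumin < 35 g/L."""
    return int(crp_mg_l >= 10) + int(albumin_g_l < 35)

def dnlr(neutrophils: float, wbc: float) -> float:
    """Derived NLR: neutrophils / (WBC - neutrophils)."""
    return neutrophils / (wbc - neutrophils)

def pni(albumin_g_dl: float, lymphocytes: float) -> float:
    """Prognostic Nutritional Index: 10 * albumin (g/dL) + 0.005 * lymphocyte count."""
    return 10 * albumin_g_dl + 0.005 * lymphocytes

def siri(neutrophils: float, monocytes: float, lymphocytes: float) -> float:
    """Systemic inflammation response index."""
    return neutrophils * monocytes / lymphocytes

def sii(platelets: float, neutrophils: float, lymphocytes: float) -> float:
    """Systemic inflammation index."""
    return platelets * neutrophils / lymphocytes
```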
The hazard ratios (HRs) of the prognostic factors were used to measure the differential risks of disease progression and death. Receiver operating characteristic (ROC) curves were prepared to estimate the optimal cut-off values of the continuous variables. The statistical analysis was carried out with the SPSS software version 18.0 (SPSS Inc., Chicago, IL, USA). A P-value <0.05 was considered significant.

Patient characteristics

A total of 242 patients with relapsed or refractory DLBCL from five medical centers were evaluated, and their clinical characteristics are summarized in Table 1. Their median age was 65 years (range, 40-76 yr), and 160 patients (66.1%) were >60 years old (of these, 154 patients were ≤75 years old and 6 patients were >75 years old). The patients included 143 males (59.1%), and 45 patients had primary refractory disease (P=0.021, P=0.033, and P=0.028, respectively) (Fig. 1).

ROC analysis for continuous variables as prognostic factors

The patients were separated into favorable and unfavorable groups according to the optimal NLR, dNLR, LMR, PLR, PNI, SUVmax, and SII cut-off values determined by ROC analysis. The cut-off values for NLR, dNLR, LMR, PLR, PNI, SUVmax, and SII for disease progression were 1.5, 3.5, 1.6, … (Table 2).

Validation of the risk stratification model

The risk stratification model was constructed using the independent prognostic factors, including high NCCN-IPI, GPS 2, and high dNLR, obtained by the multivariate analysis. In the model, each factor was given the same point because of the similar HRs of the factors in the Cox proportional hazard model. Fig. 3 shows the validated risk stratification based on NCCN-IPI, GPS, and dNLR to predict PFS and OS in patients with relapsed or refractory DLBCL. There were significant survival differences among the four risk groups (PFS, P<0.001; OS, P<0.001, Fig. 3).

DISCUSSION

To date, a risk stratification model based on clinical and laboratory parameters has not been proposed for patients with relapsed or refractory DLBCL. Although disease progression and survival in DLBCL could be determined by numerous factors, inflammation has an enhancing effect on malignant cell proliferation, angiogenesis, and metastasis. Cell-based inflammation involving macrophages, neutrophils, monocytes, lymphocytes, and platelets is significantly associated with the progression of cancer and the metastasis of malignant cells [21][22][23]. Moreover, several inflammatory cytokines produced by cancer cells, such as tumor necrosis factor (TNF)-α, IL-1, IL-6, IL-8, and vascular endothelial growth factor, also promote cancer cell proliferation, invasion, and metastasis [24].

NLR and dNLR have been repeatedly suggested to have prognostic associations with newly diagnosed DLBCL [12,13]. Here, we assessed the clinical associations of these parameters in patients with relapsed or refractory DLBCL and found in the multivariate analysis of our data that only dNLR, but not NLR, had a significant prognostic value. We suppose that neutrophil counts in patients with relapsed or refractory DLBCL are often unstable due to numerous factors, such as the influence of front-line chemotherapy, advanced disease status, and several concomitant comorbidities. Neutropenia can often be promptly corrected with well-established supportive care practices, such as the administration of recombinant granulocyte colony-stimulating factors.
Because it is hard to decide the time point for the measurement of NLR, the clinical value of NLR is presumably less significant. However, the neutrophil count is not excluded from the assessment of dNLR either, even though dNLR seems to be more meaningful than NLR in the relapsed or refractory setting. Additionally, the other cell-based inflammatory markers, including LMR, PLR, SIRI, and SII, did not exhibit any statistical significance in our study. Furthermore, the nutrition-related inflammatory marker PNI did not have a significant value in the multivariate analysis.

It is likely that GPS, as a nutrition-related inflammatory marker consisting of the serum CRP and albumin levels, reflects the degree of cancer-related inflammation and the nutritional status, respectively. CRP, as a component of GPS, is an important and sensitive marker of the systemic inflammatory response. The synthesis of CRP is generally induced by several cytokines, such as TNF-α, IL-1, and IL-6, in the liver or in cancer cells [25,26]. Therefore, the CRP level may conveniently reflect the degree of inflammation associated with cancer-related cytokines in a cancerous condition. This has been supported by several clinical studies reporting that an elevated CRP level is associated with poor prognosis in patients with various malignancies [27][28][29][30][31]. A decreased serum albumin level is also considered an important sign of increased inflammation, impaired nutritional status, and other detrimental clinical conditions that result in a decreased therapeutic response rate, allowing tumor progression to continue. In the relapsed or refractory setting, blood cell counts might be altered by previous chemotherapy or the systemic condition. Thus, GPS, which does not depend on blood cell counts, is possibly a more accurate prognostic factor than those that do. Moreover, GPS could reflect nutritional status in addition to cancer-related inflammation, unlike the cell-based inflammatory markers. In our analyses, GPS was found to be an excellent predictive parameter for disease progression and survival in relapsed or refractory DLBCL patients.

In the present study, we investigated the clinical significance of IPI and NCCN-IPI as clinical prognostic factors in patients with relapsed or refractory DLBCL. Recent clinical studies have reported that NCCN-IPI is a better prognostic factor than the conventional IPI in patients with newly diagnosed DLBCL [4]. However, clinical data in the relapsed or refractory setting are presumably still too limited to confirm the predictive potential of NCCN-IPI. In our analyses, NCCN-IPI was also found to have a significant predictive potential for clinical outcomes in patients with relapsed or refractory DLBCL, but IPI was not.

To date, a validated prognostic model for patients with relapsed or refractory DLBCL has not been constructed. In this study, dNLR and GPS as inflammatory markers, and NCCN-IPI as a clinical factor, were found to influence the disease status and survival in the patients. We assessed whether the model of risk stratification could separate our patients into four significantly different risk groups. The results showed that our prognostic model could be offered as an alternative to the previously unorganized prognostic criteria in relapsed or refractory DLBCL. However, we did not analyze gene expression profiles or immunohistochemistry data, because it was difficult to incorporate the COO subtype information obtained by these techniques into our retrospective study.
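For concreteness, the three-factor stratification validated above can be written as a short scoring function; a minimal sketch using the cut-offs stated earlier (whether "high dNLR" means strictly above or at least the ROC cut-off of 3.5 is not specified in the text and is assumed here):

```python
def risk_score(dnlr_value: float, gps_score: int, nccn_ipi: int) -> int:
    """One point each for high dNLR (ROC cut-off 3.5, assumed strict),
    GPS 2, and high NCCN-IPI (score >= 5): four risk groups (0-3)."""
    return (int(dnlr_value > 3.5)
            + int(gps_score == 2)
            + int(nccn_ipi >= 5))

# Example: dNLR 4.2, GPS 2, NCCN-IPI 3 -> score 2
print(risk_score(4.2, 2, 3))
```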
In conclusion, our study shows that dNLR, GPS, and NCCN-IPI meaningfully reflect disease progression and survival. Thus, these factors could be novel prognostic parameters for predicting outcomes in patients with relapsed or refractory DLBCL. Additionally, this is the first study that has attempted to delineate a novel risk stratification incorporating easily applicable inflammatory markers and a clinical factor. However, our study has some limitations, such as the small number of patients, the retrospective design, and missing pathological data. To confirm our results, further clinical studies circumventing these issues are warranted.
MENTAL HEALTH BY THE YEAR 2000 A.D.

The historic Alma-Ata Declaration has now become a household word. Its ultimate goal, health for all by the year 2000 A.D.,
This statement of fact merely serves to emphasize the futility of the centre-to-periphery approach in the development of a mental health system in our country, or of meeting the goal of achieving health for all by the year 2000 A.D. Hence, there has to be a major refocussing of approach, away from the traditional institutional approaches, involving the alternative strategy of training an increasing number of different categories of health personnel in basic psychiatric and mental health skills. There would thus be a viable functional infrastructure prior to completion of a physical infrastructure. This approach is the one which is basically directed from the periphery to the centre. It is distinctly the more basic approach, beginning, so to say, at the grass-root level and allowing for a speedy coverage of the mental health needs of the rural poor and hitherto neglected areas of society. Both these strategic approaches are not mutually exclusive; rather, they are complementary.
For satisfactorily achieving the above-mentioned goal we would have to aim at diffusion of mental health skills to the periphery of the existing network of the health service system. At each level of the system (village worker, sub-centre, primary health centre, district hospital, regional hospital) the tasks to be performed will be appropriately apportioned and a referral system set up so that the total system works in an integrated fashion. Only then could we hope that mental health problems are handled effectively at the appropriate level of the health system. Areas particularly deficient in mental health care services would be tackled on a priority basis, thus strengthening mental health care in those regions at present deprived. As mentioned above, basic mental health care would be integrated into the general health services, facilitating the application of mental health skills when dealing with patients without gross psychiatric disturbances. Another important aspect of this system would be the involvement of state, district and block leadership in the implementation of the mental health programme. Hopefully it would lead to active community participation in preventive efforts directed at psychosocial problems. Treatment, rehabilitation and prevention sub-programmes would form major components of the proposed services to be rendered. Finally, another important focus has been delineated: that of training a mental health team, which would include in its scope, apart from postgraduate and undergraduate training, the training of psychiatric auxiliaries and parapsychiatric personnel.

For a realization of these aims an outline of a plan of action is a must. This would specify targets to be achieved and, working in concert with the state and administrative machinery, would delineate ways of achieving the goals. An important feature of this plan is the proposed integration of psychiatric services into general health care delivery. A linkage of mental health care with social welfare, schools and medical colleges is imperative, and the coordination of all these activities through a National Advisory Group is a must.

Finally, adequate emphasis must be ensured for two further aspects of mental health care: firstly, that indigenous systems of treatment like Yoga, Meditation and Ayurveda receive encouragement and, secondly, that appropriate research into all aspects of mental health care is assured. Research has an important bearing on the quality of services rendered. Moreover, it is a strong monitoring and evaluative method to assess the efficacy of any programme. Hence, all these comprehensive aspects have to be geared up to their maximal potential in order to achieve our set target of health for all by 2000 A.D. We thus strongly suggest to the planners, thinkers and implementers that they accord a super-speed priority to evolving the strategies of tomorrow and ensure a very early adoption of the National Mental Health Programme now under active consideration of the National and State authorities.
Financial Statement Analysis Based on the Harvard Analysis Framework — Taking Apple Inc. as an Example

In recent years, high-technology products have become the most important tools for making daily life more convenient. With the development of the current technology level, there are more and more companies in the high-technology industry and the competition has become more and more intensive. Therefore, it is important for companies to analyze their financial statements using the essential data in the statements and the overall environment of the industry. A comprehensive financial statement analysis can help companies identify their competitive advantages and adopt favorable strategies. Selecting Apple Inc. as the subject, this research, based on the Harvard analysis framework, provides a comprehensive analysis of Apple's financial situation from four dimensions: strategic, accounting, financial and prospective. The first part presents the background and significance of this paper, along with a review of the relevant literature. The second part focuses on the theoretical structure of the Harvard framework. The third part, the focus of this research, is the financial statement analysis of Apple. The last part presents the conclusion and shortcomings of this research. Through this analysis, this research helps to identify the financial and strategic situation of Apple.

Background

With the development of technology and globalization, the high-technology industry plays an important role in the global market. It has now become one of the fastest growing and largest industries in the world. More and more governments are paying attention to supporting the development of their domestic high-technology companies and encouraging them to enter the global market, with the aim of growing their national economies and technology levels. In the past, the high-technology industry was dominated by Nokia, Samsung and some other companies. But since 2007, when Apple launched its first-generation iPhone, the market landscape has changed rapidly. In 2022, Apple's market capitalization reached $2.6 trillion, ranking first in the global high-technology industry. Nowadays, every Apple product can change the development direction of the world high-technology industry and deeply affect people's lives. Against the background of Apple's great success, this research uses financial statement analysis theory and methods to study the strategy and financial situation of Apple, and thereby provide some useful suggestions for other high-technology companies.

Significance

The competition in the high-technology industry is now more and more intensive. Financial statement analysis is an essential method for managers to learn more about the environment of the industry and their companies. This helps managers make more suitable corporate strategies, which can give their companies more competitive advantage and market share. Although some of Apple's information is unique, analyzing Apple can still provide a reference for other companies and make more managers aware of the importance of financial statement analysis.
The Definitions of Financial Statement Analysis

Different scholars have different understandings and definitions of financial statement analysis. The concept first came from the American banking industry, where it was used by financiers: by analyzing financial statements, they calculated the credit rating of companies to identify whether they had the ability to pay their debts. As the market developed, scholars continued to research financial statement analysis. Yang considered that financial statement analysis consists of two parts: the first is using professional tools to analyze the financial situation, and the second is the application of the results of the analysis when the company is operating [1]. In subsequent research, some scholars explained and added to this concept. Helfert argued that financial statement analysis is a process which focuses on the analysis of the company's operations, investment activities and assessed values [2]. Stickney argued that financial statement analysis should include the process of compensating for the shortcomings of companies revealed in the analysis [3]. Zhang pointed out that financial statement analysis includes four processes: preparation, analysis, reporting and conclusion [4].

The Methods of Financial Statement Analysis

Currently, there are three methods of financial statement analysis in the academic literature: Du Pont analysis, ratio analysis and the Harvard analysis framework.

Kaplan and Norton established a method of financial statement analysis called Du Pont analysis. In addition to financial indicators, they encouraged the use of some non-financial indicators to analyze the financial performance of companies, such as customer satisfaction, operational efficiency and innovation [5]. This is the earliest method of financial statement analysis.

Based on Du Pont analysis, Wole created the method of ratio analysis. He chose seven indicators of a company's financial situation and linked them in a linear relationship to analyze the financial statements of companies [6]. These ratios, such as the inventory turnover ratio and the non-current assets ratio, are still used in current financial statement analysis. The method now generally adopted by scholars is the Harvard analysis framework. Palepu, Healy and Bernard, building on previous research, combined the analysis of corporate strategies with financial statements and proposed the Harvard analysis framework [7]. The Harvard analysis framework not only includes qualitative and quantitative analysis, but also consists of strategic, accounting, financial and prospective analysis. Therefore, it can be seen as a comprehensive method of financial statement analysis.

The Applications of the Harvard Analysis Framework

Currently, more and more scholars are using the Harvard analysis framework to analyze the financial statements of companies, with the aim of learning more about companies, industries and markets and of developing this method. Liu and Wang selected Yili Inc.
as the research object and adopted the Harvard analysis framework to analyze its financial statements. Through the analysis of Yili's corporate strategies, finances and the two other dimensions, Liu and Wang obtained the results of the analysis and proposed relevant suggestions for the company [8]. Ji analyzed the external and internal environment and forecast the future development of a home appliance company [9]; the results were useful for the development of the company. Some scholars have also used the Harvard analysis framework to analyze non-profit organizations. By analyzing East China Normal University, Jia and Hong found the Harvard analysis framework to be effective for non-profit organizations [10]. Yang selected five public universities as samples and also indicated the validity of the Harvard analysis framework [11].

Review Conclusion

According to the previous literature, this research considers that a complete financial statement analysis process should include not only a description of the existing situation of a company, but also a forecast of the company's future development and the presentation of relevant suggestions. Compared to some traditional methods of financial statement analysis which use only financial ratios, the Harvard analysis framework is a more comprehensive method and combines qualitative methods with quantitative ones. Kasmioui suggested that using the Harvard analytical framework can improve the accuracy and effectiveness of financial statement analysis [12]. Therefore, this research adopts the Harvard analysis framework to analyze Apple's financial statements.

Theoretical Framework

After many academic and practical tests, the Harvard analysis framework has been greatly developed in recent years. It now consists mainly of strategic, accounting, financial and prospective analysis.

Strategic Analysis

The strategic analysis dimension is the Harvard analysis framework's key difference from traditional analysis methods. It is mainly concerned with the strategies adopted by companies and the industries the companies belong to. At the company level, the SWOT framework, generic strategies and value chains are usually used to analyze the internal and external environment of the company. At the industry level, Porter's five forces model is used to identify the profitability and the competitive pressure of the industry.

Accounting Analysis

The accounting analysis is mainly concerned with the financial statements of the companies, which include the balance sheet, income statement and cash flow statement. Common-size financial statements are used: each line item is expressed as a percentage of a base figure (see the sketch below). Through analyzing significant changes in key factors, such as assets, liabilities and revenue, the accounting situation of the companies can be identified, supporting the subsequent analysis.

Financial Analysis

Based on the common-size financial statements, the financial analysis focuses on financial ratios covering profitability, liquidity, efficiency and investment. Meanwhile, the financial analysis uses both vertical and horizontal comparison methods to compare not only the past and current financial situation of a company, but also the financial situation of the company with that of its competitors.
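To make the common-size step concrete, the following is a minimal Python sketch of restating a balance sheet in common-size form, with every line item expressed as a percentage of total assets. All figures are invented placeholders for illustration; they are not taken from Apple's actual statements.

```python
# Common-size balance sheet: each line item as a percentage of total
# assets. The figures below are hypothetical, not Apple's.

balance_sheet = {
    "cash_and_short_term_investments": 48_000,
    "accounts_receivable": 60_000,
    "inventory": 5_000,
    "long_term_assets": 237_000,
}

total_assets = sum(balance_sheet.values())

# Express each item as a percentage of the base figure (total assets).
common_size = {
    item: round(100 * value / total_assets, 2)
    for item, value in balance_sheet.items()
}

for item, pct in common_size.items():
    print(f"{item}: {pct}% of total assets")
```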
Prospective Analysis

The prospective analysis is the last part of the Harvard analysis framework. It is the summary of the previous three analyses: based on the information and results obtained from them, the prospective analysis focuses on the future risks, challenges and opportunities facing the companies and forecasts their future prospects. Through the above theoretical discussion, this research finds that the Harvard analysis framework, comprising the four dimensions of strategic, accounting, financial and prospective analysis, is a comprehensive and accurate method. Therefore, this research also adopts these four dimensions to analyze the financial statements of Apple.

Financial Statement Analysis of Apple

This part will first introduce Apple Inc. and then analyze the financial statements of Apple based on the Harvard analysis framework.

About Apple Inc.

Apple Inc. is an American company founded by Steve Jobs, Steve Wozniak and Ronald Wayne in 1976. It was originally known as Apple Computer Inc. and in 2007 changed its name to Apple Inc. Apple focuses on digital technology products and is famous for its innovation. Its products are mainly phones, computers, music players, headphones and smart watches, covering almost all categories of high-technology products. After more than three decades of rapid development, Apple has grown into the listed company with the largest market value in the world and has come to dominate the high-technology industry.

Industry Level.

At the industry level, this research uses Porter's five forces model to analyze the high-technology industry to which Apple belongs.

The first force is the threat of new entrants. In the high-technology industry, a company that wants to enter requires a lot of capital to invest in R&D and in the manufacture of its products; the capital requirement is high. Meanwhile, customer switching costs are high because each technology company's products have their own unique operating system. For example, Apple has the iOS system while Microsoft has the Windows system, so customers will not easily switch from Apple to Microsoft. These two factors lead to a low threat of new entrants in the high-technology industry.

The second force is the bargaining power of suppliers. In the high-technology industry, the suppliers are usually the manufacturing companies which produce the chips, CPUs and so on. Because of globalization, Apple can choose suppliers from all over the world. Apple also has a large scale of production, and its orders are a major source of revenue for these suppliers. Therefore, the bargaining power of suppliers over Apple is low.

The third force is the bargaining power of consumers. In the high-technology industry, the consumers are usually individuals, so customer concentration is low. And as mentioned before, customer switching costs are high because each technology company's products have their own unique operating system. Apple also has a large and loyal user base. So the bargaining power of consumers over Apple is low.

The fourth force is the threat of substitutes. In the high-technology industry, the substitutes for the products are limited; there are few substitutes for the phone, the computer and so on. The potential threat of substitutes is also low because it costs companies a great deal to invent new substitutes for high-technology products. So the threat of substitutes is low for Apple.

The last force is industry rivalry. The competition in the high-technology industry is fierce. The first reason is that there are many companies in this industry, such as Microsoft, Huawei, Dell and Samsung, and these companies remain of roughly equal size and power. The second reason is that, with the development of technology and the industry, competition on product prices is more and more fierce, and prices keep falling. So industry rivalry for Apple is high.

Through Porter's five forces model, this research finds that the competition in the high-technology industry is increasingly intense, but because of Apple's strong brand value and large number of loyal users, it still dominates the industry and holds a large market share.

Company Level.
At the company level, this research uses the SWOT framework to analyze the internal and external environment of Apple; the strengths, weaknesses, opportunities and threats of Apple are shown in Table 1. Apple should take advantage of its strengths and resolve its weaknesses; it should also seize its opportunities and protect itself from the threats.

The Accounting Analysis of Apple

This research selects the balance sheet and income statement for the three years 2020-2022 to analyze the accounting situation of Apple. In the common-size balance sheet, there are some significant changes in the accounts.

Balance Sheet.

As shown in Table 2, on the asset side, cash and short-term investments decreased significantly, from 28.09% to 13.69% of total assets. This means that Apple has a lower capital reserve ratio and debt-paying ability, and hence a higher debt risk; but it also means that Apple has a higher fund utilization ratio and profitability. Total accounts receivable increased from 11% to 17%, which also signals higher debt risk. Over the recent three years, Apple's inventory ratio has been stable and low, which is a good signal. Long-term assets as a whole increased, indicating that Apple is expected to develop well. On the liabilities and equity side, Apple has high liability ratios overall, above 80%, and current liabilities have been increasing. Combined with the decreasing cash mentioned before, this change should get Apple's attention. Regarding equity, both the amount and the ratio decreased. There are perhaps three reasons for the decrease: stock buybacks and dividend payouts, Covid-19, and intense competition. This research considers the first to be the reason: in 2020 and 2021, through stock buybacks and dividend payouts, Apple returned $73 billion and $90 billion respectively to shareholders, which reduced equity.

Income Statement.

As shown in Table 3, over the recent three years there has been a continuous increase in Apple's revenue, and in 2021 sales and revenue increased significantly. This research attributes this to the launch, at the end of 2020, of the iPhone 12, which supported 5G networks; in 2021 many people therefore upgraded their phones, which made Apple's revenue increase significantly. The gross income ratio also increased, because Apple invested more in R&D, which reduced costs.

Through the accounting analysis, this research finds that Apple carries some risk in its debt and cash positions. Although Apple's profitability is developing stably, this issue should be taken seriously by managers.

The Financial Analysis of Apple

In this section, the financial ratios, covering profitability, liquidity, efficiency and investment, are calculated (a schematic computation is sketched after this section), and vertical and horizontal comparison methods are used to analyze some key financial indicators of Apple.

Profitability Ratios.
As shown in Table 4, Apple has steady and increasing gross profit and net profit margins. These indicate that Apple has stable growth and control over every step of its business, including quality, price and cost. Apple has a high ROCE ratio, which increased from 40.59% to 76.09%, showing a good ability to use investor funds to generate profit. Compared with its competitors, Apple's ROE ratio is very high: over the recent three years, Apple's average ROE was 145% while Samsung's was 17%. This is beneficial for Apple and its investors.

Liquidity Ratios.

As shown in Table 5, Apple has low and decreasing current and quick ratios, which means Apple's assets have low liquidity. For the high-technology industry, the average current ratio is 197%, almost twice Apple's. This is not beneficial for Apple because it carries the risk of not being able to pay day-to-day expenses and liabilities, so Apple should pay more attention to the liquidity of its assets.

Efficiency Ratios.

As shown in Table 6, in recent years Apple has had a short inventory holding period, while the average inventory holding periods for Huawei and Samsung are 101 days and 52 days respectively. This is beneficial for Apple because in high technology the rapid updating of products leads to high inventory costs. Apple's receivable collection period is short while its payable payment period is long, which leads to a negative working capital cycle. This shows that Apple has high bargaining power over both suppliers and consumers; as a result, Apple can use its funds more efficiently.

Investment Ratios.

As shown in Table 7, Apple's EPS increased from 3.31 to 6.15, showing that its profitability is increasing. This is a good signal for investors. However, within the high-technology industry it ranks better than only 68.5% of companies, so compared with its competitors this is not Apple's competitive advantage. In 2020, Apple's P/E ratio was above 30, which suggests that Apple was overvalued, because the benchmark for the high-technology industry is around 20. In the recent two years it has been around 25, which is normal for the high-technology industry.

The Prospective Analysis of Apple

According to product life cycle theory, every product goes through introduction, growth, maturity and decline periods [13]. Each year Apple launches new products to maintain its profitability, and some breakthrough products, such as the iPhone 12, can produce a large increase in Apple's revenue. So, based on the accounting analysis and product life cycle theory, this research assumes that Apple launches a breakthrough product every 2 to 3 years and obtains a revenue forecast for a ten-year period, shown in Figure 1 (the revenue forecast of Apple).

However, several risks could affect Apple's future revenue. The pandemic caused interruptions at a few of the company's component suppliers, which led to supply shortages that impacted sales globally; such disruptions may still happen in the future. The price of Apple products may also be materially adversely impacted by trade policies, disputes, and other international conflicts, especially if they lead to tariffs and other restrictions on international trade in areas where the company has significant supply chain operations and sources a sizable portion of its revenues.
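The ratio discussion above cites several standard indicators without spelling out their formulas. The following is a minimal Python sketch of the textbook definitions (gross margin, ROCE, current and quick ratios, working capital cycle, EPS, P/E). All input figures are invented placeholders, and details such as whether ROCE is based on EBIT or on net income are not stated in the paper, so treat this as a sketch rather than a reproduction of Tables 4-7.

```python
# Textbook ratio definitions; all input figures are hypothetical,
# not taken from Apple's statements.

revenue = 394_000            # sales for the period
cost_of_sales = 224_000
net_income = 100_000
capital_employed = 170_000   # total assets minus current liabilities
current_assets = 135_000
inventory = 5_000
current_liabilities = 154_000
shares_outstanding = 16_000
share_price = 150.0

gross_profit_margin = 100 * (revenue - cost_of_sales) / revenue
net_profit_margin = 100 * net_income / revenue
roce = 100 * net_income / capital_employed  # simplified; EBIT is often used

current_ratio = 100 * current_assets / current_liabilities
quick_ratio = 100 * (current_assets - inventory) / current_liabilities

# Working capital cycle: days of inventory plus days of receivables,
# minus days of payables. A negative value means suppliers effectively
# finance operations, as the paper observes for Apple.
inventory_days, receivable_days, payable_days = 9, 26, 90
working_capital_cycle = inventory_days + receivable_days - payable_days

eps = net_income / shares_outstanding
pe_ratio = share_price / eps

print(f"gross margin {gross_profit_margin:.1f}%, ROCE {roce:.1f}%")
print(f"current ratio {current_ratio:.0f}%, quick ratio {quick_ratio:.0f}%")
print(f"working capital cycle {working_capital_cycle} days")
print(f"EPS {eps:.2f}, P/E {pe_ratio:.1f}")
```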
Conclusion

In summary, as a mature and dominant company in the high-technology industry, Apple currently faces increasingly fierce competition, which is a big threat. Compared with its competitors, Apple has some competitive advantages, such as diversified products and high innovation capacity. Apple is financially healthy, uses its resources efficiently, and generates significant income for its investors. In the future, Apple is expected to develop stably and favourably, although some uncertainties, such as unfavourable trade policies and rapid changes in the industry, could create challenges for Apple's growth. Through the financial statement analysis of Apple based on the Harvard analysis framework, this research can provide a reference for analyzing other companies, and it is expected to help improve the accuracy and rigour of financial analysis in the high-tech and other industries.

However, this research still has some shortcomings that could be improved. In the accounting and financial analysis, this research selected only Apple's financial statements for the past three years as samples, which makes the analysis results less convincing. Moreover, the financial statements used in this research were obtained from Apple's publicly available annual reports; therefore, information not disclosed in the annual reports, such as goodwill, is not analyzed.

Table 1: The SWOT frame of Apple Inc.
Table 2: The balance sheet of Apple Inc.
Table 3: The income statement of Apple Inc.
Table 4: The profitability ratios of Apple Inc.
Table 5: The liquidity ratios of Apple Inc.
Table 6: The efficiency ratios of Apple Inc.
Table 7: The investment ratios of Apple Inc.
Softening Hard Water using Cocoa Shell Activated Charcoal

Cocoa pod shells contain 23-54 % cellulose, 1.14 % hemicellulose, and 20-27.95 % lignin. The high cellulose content in the cocoa pod shell has the potential to be further processed into adsorbents. Before being used as an adsorbent, activation using HCl solution was carried out to increase the adsorption power of the cocoa shell. This research was conducted to analyze the influence of adsorbent dose and solution pH on the hard water reduction efficiency and the adsorption capacity for Ca2+ and Mg2+ ions. Adsorption of hard water ions was conducted by varying the adsorbent dose at 1, 3, 5, 7, and 9 g and varying the pH at 5, 6, 7, 8, and 9. In the dose series, the optimum condition was achieved at a mass of 5 g, with Ca2+ and Mg2+ adsorption efficiencies of 85.4 and 18.31 %, respectively; in the pH series, the optimum was at pH 9, with Ca2+ and Mg2+ adsorption efficiencies of 61.54 and 49.11 %, respectively. The highest Ca2+ and Mg2+ adsorption capacities in the dose series were obtained at an adsorbent mass of 1 g, at 4.05 and 0.54 mg/g respectively, while in the pH series the highest capacities were obtained at pH 9, at 0.61 and 0.49 mg/g respectively.

Cocoa (Theobroma cacao) is a plantation commodity that produces the most significant proportion of fruit skin waste (Kamelia & Fathurohman, 2017). The utilization of cocoa pod husk waste is still minimal: people use the waste from cocoa pods mostly as animal feed and compost, and most of the waste produced by cocoa pods is simply left to rot around the plantation area (Purnamawati & Utami, 2014). Cocoa pod shells contain 23-54 % cellulose (Masitoh & Sianita, 2013), 1.14 % hemicellulose, and 20-27.95 % lignin (Anas et al., 2011). The relatively high cellulose content in the pod husks has the potential to be further processed into adsorbents. Before being used as an adsorbent, activation can be carried out to increase the absorption power of the cocoa pods using an acid or alkaline solution. Activation with acid solutions is the most commonly used and has proven effective in increasing the adsorption capacity (Purnamawati & Utami, 2014).

Preparation and activation of adsorbent

The cocoa shells were cut into small pieces, then washed with water and dried. After that, the dried cocoa pods were put into a heating furnace for the charring process at 600 °C for 1 hour, and the charcoal was then ground and sieved with a 70 mesh sieve. The charcoal obtained from the sieve was chemically activated by immersing it in 4 M HCl solution for 24 hours, then filtered and washed using distilled water. The activated charcoal was dried in an oven at 110 °C for 5 hours and stored in a desiccator.

Determination of moisture content

1 g of activated charcoal from cocoa pods was put into a porcelain crucible and dried in an oven at 110 °C for 2 hours. The sample was then placed in a desiccator and weighed until its weight was constant, and the moisture content was determined in percent (%):

Moisture content (%) = (a - b) / a x 100

where a is the initial weight of activated carbon (g) and b is the weight of activated carbon after drying (g).

Determination of ash content

0.5 g of the moisture-free activated charcoal from the cocoa shells was weighed and put into a porcelain dish of known weight, then placed in the furnace at 600 °C for 1 hour to form ash. It was then cooled in a desiccator and weighed until its weight was constant, and the ash content was determined in percent (%) as the weight of ash relative to the weight of the sample.
Determination of the adsorption capacity for the I2 solution

The oven-dried activated charcoal was weighed (± 0.5 g) and put into an Erlenmeyer flask. The sample was given 50 mL of 0.1 N iodine solution, stirred using a shaker for 15 minutes, and left to stand for 15 minutes. Then 10 mL of the filtrate was taken and titrated with 0.1 N Na2S2O3 solution; when the yellow color of the solution looked faint, 1 mL of 1% starch solution was added, and titration was continued until the blue color disappeared. The iodine number was calculated as

Iodine number (mg/g) = [(V1 x N1) - 5 x (V2 x N2)] x 126.9 / W

where V1 is the volume of iodine solution analyzed (mL), N1 is the iodine normality, V2 is the volume of thiosulfate solution required (mL), N2 is the sodium thiosulfate normality, and W is the weight of activated charcoal (g); the factor 5 scales the 10 mL titrated aliquot back to the 50 mL of treated solution, and 126.9 is the milligram equivalent mass of iodine.

Effect of adsorbent dose on adsorption of Mg2+ and Ca2+ ions

The activated charcoal adsorbent from cocoa shells was weighed at 1, 3, 5, 7, and 9 g, and each dose was added to 50 mL of 100 ppm imitation hard water, then shaken with a shaker for 60 minutes. The mixture was filtered and the filtrate was analyzed for Ca2+ and Mg2+ ion levels using AAS.

Effect of pH on adsorption of Mg2+ and Ca2+ ions

The optimum dose of activated charcoal from cocoa shells, obtained from the dose experiment, was put into Erlenmeyer flasks containing 50 mL of 100 ppm imitation hard water at pH 5, 6, 7, 8, and 9. Each mixture was shaken using a shaker for 60 minutes, then filtered and analyzed for Mg2+ and Ca2+ metal ion content using AAS.

Preparation of charcoal from cocoa shells

Before carbonization, the cocoa shells were prepared by washing them with clean water to minimize impurities such as soil and adhering sand, and then leaving them to dry in the sun to reduce the moisture content. The dried cocoa shells were carbonized in the furnace at 600 °C for 1 hour. The charcoal was then ground using a mortar and pestle, and the powder obtained was sieved with a 70 mesh sieve. This sieving is carried out to make the powder size uniform, giving a good and homogeneous particle size and increasing the surface area (Maleiva et al., 2015).

Activation of charcoal

The charcoal obtained from cocoa shell waste still contains impurities (Sianipar et al., 2016). Contaminants that stick to the pores of charcoal, such as inorganic minerals, can affect the absorption capacity of the charcoal (Sekewael et al., 2015; Sianipar et al., 2016). The way to get rid of these impurities is activation: a treatment carried out on charcoal to remove contaminants that cover the pores, thereby increasing the surface area and the adsorption power. The activated charcoal in this study was prepared using 4 M HCl as the activating agent. The activated charcoal was then washed with distilled water, and the filtrate was tested using universal indicator paper until a neutral pH was attained. The activated charcoal was dried in an oven at 110 °C for 5 hours; the temperature of 110 °C serves to evaporate water still trapped in the charcoal pores.

Determination of moisture content

The moisture content was determined to establish how much water is present in the activated charcoal of the cocoa shells. The moisture content indicates the amount of water covering the pores of the activated charcoal: the less moisture the activated charcoal contains, the larger the pores produced.
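The titration arithmetic above is compact, so here is a small Python sketch of the iodine-number formula. The aliquot factor of 5 (50 mL of treated solution, 10 mL titrated) and the iodine equivalent mass of 126.9 mg/meq follow from the procedure as described; the thiosulfate volume of 3.05 mL is a hypothetical reading, chosen because it reproduces the 881.955 mg/g reported later in this study.

```python
# Iodine number from the titration described above.
# V2 = 3.05 mL is a hypothetical reading chosen so the result matches
# the 881.955 mg/g reported in this study.

V1, N1 = 50.0, 0.1     # volume (mL) and normality of iodine solution added
V2, N2 = 3.05, 0.1     # volume (mL) and normality of thiosulfate consumed
W = 0.5                # weight of activated charcoal (g)
ALIQUOT_FACTOR = 5.0   # 50 mL of solution, 10 mL aliquot titrated
MEQ_IODINE = 126.9     # mg of iodine per milliequivalent

remaining_meq = ALIQUOT_FACTOR * V2 * N2       # iodine left in solution
adsorbed_meq = V1 * N1 - remaining_meq         # iodine taken up by charcoal
iodine_number = adsorbed_meq * MEQ_IODINE / W  # mg iodine per g charcoal

print(f"Iodine number: {iodine_number:.3f} mg/g")  # 881.955 mg/g
```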
The bigger the pores of the activated charcoal, the wider the surface area, so that the adsorption ability of the activated charcoal will be optimal (Masyithah et al., 2018). Based on the Indonesian National Standard (SNI) 06-3730-1995, the permissible moisture content of activated charcoal in powder form is a maximum of 15 %. The moisture content obtained in this study was 4.46 %, meaning that it met the standard set by SNI.

Determination of ash content

The determination of ash content aims to quantify the residues of minerals and metal oxides in the activated charcoal, which are insoluble and not removed during the charring and activation processes. The ash content affects the quality, as it can clog pores and thereby reduce absorption: the surface area of the activated charcoal decreases as the pores become blocked (Pandia et al., 2017). Based on SNI 06-3730-1995, the maximum permissible ash content of activated charcoal is 10 %. The ash content obtained from the observations was 28.46 %, which does not meet the set standard. The high ash content indicates that mineral residues in the activated charcoal were not removed during the activation process; they clog the pores of the activated charcoal and reduce its adsorption power.

Determination of the iodine number

Iodine adsorption is one of the main parameters used to determine the quality of activated charcoal. The reactivity of activated charcoal can be seen from its ability to adsorb the substrate. The adsorption power is indicated by the iodine number, which shows how much iodine the adsorbent can adsorb: the greater the iodine number, the greater the adsorption power of the adsorbent or activated charcoal (Setyoningrum et al., 2018). The adsorption of iodine by activated charcoal correlates with the number of pores, or the surface area, of the activated charcoal, and the magnitude of iodine absorption also reflects the extent of the micropore structure that has formed (Alzaydien, 2016). Based on SNI 06-3730-1995, the minimum iodine adsorption is 750 mg/g. The iodine number obtained in this study was 881.955 mg/g, which meets the set standard. Based on these quality tests, the moisture content, ash content, and iodine adsorption capacity of the activated charcoal from cocoa shells are summarized in Table 1.

Effect of dose on Mg2+ and Ca2+ ion adsorption

Determination of the optimum adsorbent dose is needed to establish the adsorption efficiency for the Ca2+ and Mg2+ ion concentrations. The decrease in Ca2+ and Mg2+ ion concentrations was analyzed using AAS and the adsorption efficiency was calculated. The curve of the effect of adsorbent dose on the adsorption efficiency of Ca2+ and Mg2+ ions is shown in Figure 1, which shows that the best adsorption efficiency is found at the dose of 5 g, with adsorption efficiency values for calcium and magnesium of 85.4 % and 18.31 %, respectively. From a dose of 1 to 5 g there is an increase in the adsorption percentage. This might be because an increased adsorbent dose increases the number of active sites and the surface area of the activated charcoal (Mgombezi et al., 2017), allowing the adsorbent surface to bind more Ca2+ and Mg2+ ions. Beyond that, the adsorption efficiency decreased from the dose of 7 to 9 g.
This indicates that the dose of 5 g is the equilibrium point, or saturation limit, of the adsorbent's active sites; hence, beyond the dose of 5 g, the additional activated charcoal is no longer efficient for adsorption. This event is known as desorption. Desorption can occur once the adsorption process has reached its optimum: the adsorbent surface is saturated, is no longer able to adsorb the adsorbate, and equilibrium occurs (Giyatmi et al., 2019). The data also show that the adsorbed concentration of Ca2+ ions is far greater than that of Mg2+ ions. The Ca2+ ion has a larger atomic radius than the Mg2+ ion, which lowers its ionization energy and makes it easier for it to form strong bonds on the surface of the activated charcoal.

Effect of pH on Mg2+ and Ca2+ ion adsorption

After obtaining the optimum dose, the optimum pH of the activated charcoal adsorbent was determined to establish the degree of acidity best suited to adsorbing Ca2+ and Mg2+ ions in hard water. The optimum pH was determined because pH affects the surface charge of the adsorbent; the pH value is one of the most critical parameters in the adsorption process and can affect the chemical equilibrium of the adsorbate and adsorbent. This research was conducted at pH 5, 6, 7, 8, and 9. The effect of pH on the adsorption of Ca2+ and Mg2+ ions in hard water can be seen in Figure 2, which shows that adsorption increases with increasing pH. Under acidic conditions (pH < 7) the adsorption of Ca2+ and Mg2+ ions is low, and it increases at alkaline pH (pH > 7). The adsorption efficiency of Ca2+ and Mg2+ ions is low at pH < 7 because under acidic conditions H+ ions are abundant in solution; this large amount of protons causes competition between H+ ions and the Ca2+ and Mg2+ ions for the active sites on the adsorbent surface, so the attractive force between the adsorbent and the ions decreases (Mgombezi et al., 2017). When the solution is alkaline (pH > 7), OH- ions are relatively abundant in the solution, which causes the adsorbent surface to become negatively charged (Varada, 2018). This promotes the adsorption of Ca2+ and Mg2+ ions on the adsorbent surface through electrostatic attraction (Mgombezi et al., 2017).

Adsorption capacity of Ca2+ and Mg2+ ions over the adsorbent dose variation

Determination of the adsorption capacity aims to quantify the amount of Ca2+ and Mg2+ ions absorbed per unit of adsorbent, expressed in mg/g. The adsorption capacities of Ca2+ and Mg2+ ions are listed in Table 2, which shows that the highest capacities were obtained at the 1 g adsorbent dose, at 4.05 mg/g and 0.54 mg/g respectively, while the lowest, 0.23 mg/g and 0.06 mg/g respectively, were obtained at the 9 g dose. The data reveal that the adsorption capacity for Ca2+ and Mg2+ ions decreased as the adsorbent dose increased. This decrease occurs because not all of the adsorbent's active sites become bound to the adsorbate; the adsorption capacity is inversely proportional to the dose used because it measures the amount of Ca2+ and Mg2+ ions adsorbed per unit weight of the adsorbent (Putri et al., 2019).
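The efficiency and capacity values discussed above follow from the standard batch-adsorption definitions. Below is a minimal Python sketch of those definitions under the stated conditions (50 mL of 100 ppm imitation hard water); the residual concentration in the example is a hypothetical value chosen to reproduce the 4.05 mg/g Ca2+ capacity in Table 2.

```python
# Standard batch-adsorption metrics. c0 and ce are the initial and
# equilibrium concentrations of the ion in solution (mg/L).

def adsorption_efficiency(c0: float, ce: float) -> float:
    """Percentage of the ion removed from solution."""
    return 100 * (c0 - ce) / c0

def adsorption_capacity(c0: float, ce: float,
                        volume_l: float, mass_g: float) -> float:
    """Amount adsorbed per gram of adsorbent, in mg/g."""
    return (c0 - ce) * volume_l / mass_g

c0 = 100.0      # ppm (mg/L) imitation hard water
volume = 0.050  # L of solution treated
mass = 1.0      # g of adsorbent

# Hypothetical residual Ca2+ concentration at the 1 g dose: 19 ppm
# gives the 4.05 mg/g capacity reported in Table 2.
ce = 19.0
print(adsorption_efficiency(c0, ce))              # 81.0 (%)
print(adsorption_capacity(c0, ce, volume, mass))  # 4.05 (mg/g)
```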
Adsorption capacity of Ca2+ and Mg2+ ions over the pH variation

The adsorption capacities determined at the various pH values are shown in Table 3. The highest adsorption capacities of Ca2+ and Mg2+ ions were obtained at pH 9, with values of 0.61 mg/g and 0.49 mg/g respectively, and the lowest at pH 5, with values of 0.3 mg/g and 0.25 mg/g respectively. Based on these data, the adsorption capacity increases with the increasing pH of the solution. The largest adsorption capacity occurs at alkaline pH because, under these conditions, the competition between protons (H+) and the metal ions (Ca2+ and Mg2+) for the surface of the activated charcoal decreases, so that the Ca2+ and Mg2+ ions can readily be adsorbed on the surface of the activated charcoal (Mgombezi & Vegi, 2020).

Conclusions

The results showed that activated charcoal from cocoa shells can adsorb the Ca2+ and Mg2+ ions responsible for water hardness. The adsorbent dose and the pH affect the adsorption of Ca2+ and Mg2+ ions. The equilibrium of the adsorption process was achieved at a dose of 5 g and a pH of 9, with adsorption capacities for Ca2+ and Mg2+ ions of 0.61 mg/g and 0.49 mg/g, respectively.
Progressive increase in cavitation with the evolution of fungus ball: A clue to the diagnosis of chronic necrotizing pulmonary aspergillosis

Chronic necrotizing pulmonary aspergillosis (CNPA) is an uncommon pulmonary infection seen in patients with chronic obstructive pulmonary disease, bronchiectasis, pneumoconiosis, diabetes mellitus, alcoholism, poor nutrition or low-dose corticosteroid therapy. Here, we present a case of CNPA with diabetes mellitus that was misdiagnosed as pulmonary tuberculosis.

INTRODUCTION

A variety of clinical entities caused by the fungus Aspergillus have been described. The spectrum of Aspergillus lung disease includes saprophytic aspergillosis in the form of pulmonary aspergilloma; immune disease in the form of allergic bronchopulmonary aspergillosis and hypersensitivity pneumonitis; and infectious disease in the form of invasive and chronic necrotizing, or semi-invasive, pulmonary aspergillosis. In this report we present a case of diabetes mellitus with a progressive increase in cavitation on chest radiograph that was diagnosed as chronic necrotizing pulmonary aspergillosis.

CASE REPORT

A 40-year-old male, known to have type I (insulin-dependent) diabetes mellitus, presented with complaints of productive cough, breathlessness, fever, weight loss, and hemoptysis for three years. His previous chest radiograph had revealed a left upper lobe cavitary infiltrate [Figure 1]. His routine investigations at that time had revealed uncontrolled blood sugar and a negative sputum smear for acid-fast bacilli. The patient had received two years of antituberculosis treatment along with insulin from a private practitioner, even though his sputum did not show any acid-fast bacilli. Despite adequate antitubercular treatment, he deteriorated clinically as well as radiologically. On admission, general examination revealed grade III clubbing of the fingers and toes. Examination of the respiratory system revealed a bronchial breath sound over the left mammary area and coarse crepitations in the bilateral suprascapular regions. Review of his serial chest radiographs revealed a progressive increase in cavitation with the evolution of a fungus ball and progression of disease on the right side [Figure 2]. His hematological and biochemical investigations were within normal limits except for uncontrolled blood sugar (fasting blood sugar: 202 mg/dL and postprandial: 368 mg/dL). An enzyme-linked immunosorbent assay (ELISA) for human immunodeficiency virus was negative. Multiple sputum smears revealed no bacteria or acid-fast bacilli, and culture by BACTEC did not show any mycobacteria. Sputum fungal culture grew Aspergillus fumigatus. A CT scan of the chest revealed a crescent-shaped lucency (air crescent sign) within the areas of consolidation in the left upper lobe and right middle lobe [Figures 3 and 4]. Fiberoptic bronchoscopy showed purulent secretions coming from both upper lobe bronchi and the left lower lobe bronchus. Bronchoalveolar lavage smears did not reveal any bacteria or acid-fast bacilli, but cultures grew Aspergillus fumigatus. Thus, he was diagnosed as a case of chronic necrotizing pulmonary aspergillosis with diabetes mellitus. He was started on itraconazole 200 mg twice daily. After one month of itraconazole therapy, the patient started showing clinical improvement: he became afebrile, gained weight, and the volume and purulence of sputum reduced considerably.
DISCUSSION

Chronic necrotizing pulmonary aspergillosis (CNPA), also termed semi-invasive pulmonary aspergillosis (SIPA), is an uncommon, indolent pulmonary infection commonly seen in patients with altered local defenses due to preexisting lung disease, such as chronic obstructive pulmonary disease, bronchiectasis or pneumoconiosis, and mildly compromised systemic defenses due to diabetes mellitus, alcoholism, poor nutrition or low-dose corticosteroid therapy. [1,2] Our patient was also a known case of diabetes mellitus. CNPA usually occurs in middle-aged to older individuals who present with fever, productive cough, and weight loss evolving over a period of months, [3] thereby mimicking tuberculosis and often resulting in a delay in diagnosis (as happened in our case also). In contrast to patients with simple mycetomas, CNPA nearly always presents with pulmonary or systemic symptoms. In addition, hemoptysis, the most common symptom in patients with mycetoma, is reported in only 10% of patients with CNPA and is rarely an isolated symptom. [3] Aspergilloma, a noninvasive form of aspergillosis, may develop in a healthy host, in whom the organism colonizes a preexisting cavity, whereas CNPA, instead of developing in a preexisting cavity, may create its own cavity in an immunocompromised host and then grow as a relatively noninvasive organism. The radiographic manifestation usually consists initially of an area of consolidation in the upper lobe, which develops progressive cavitation over several weeks or months. [4] The cavitation may be associated with an intracavitary soft tissue opacity, which can be recognized radiographically by the appearance of an air crescent sign. The air crescent sign may be visualized on chest CT well before it is seen on the chest radiograph. The diagnosis is suggested by the clinical course and the isolation of the fungus from pulmonary secretions, negative cultures for other pathogens, and failure to respond to antibacterial or antimycobacterial therapy. [5] Diagnostic confirmation requires histologic evidence of local lung tissue invasion by septate hyphae consistent with Aspergillus species; however, this is often difficult to obtain. Both transbronchial and percutaneous biopsy have low diagnostic yields for locally invasive aspergillosis when compared with autopsy findings. [1,5] Sputum is also unavailable for culture in many cases; even when present, the sensitivity of such culture is probably in the order of 50% to 60%. [6] Similar values have been found for culture of respiratory tract secretions sampled by bronchoalveolar lavage or bronchial washing or brushing. A positive sputum culture may also reflect simple colonization. However, in the appropriate clinical setting, repeated positive sputum or BAL fluid cultures have been found to be a reliable method of diagnosis. [7] Unlike Aspergillus fungal ball, in which medical therapy has very little proven benefit, [8] successful treatment of CNPA has been reported. [5] A treatment protocol utilizing itraconazole, followed, if necessary, by intravenous amphotericin B, surgical excision or intracavitary amphotericin B, has been proposed. [2] The dose and duration of therapy should be based on clinical response. [3] Maintenance therapy with itraconazole can be considered in patients with residual parenchymal scarring. [3] Voriconazole and micafungin are useful in refractory cases. [9] In this disorder, diagnosis is usually delayed, patients are poor operative candidates, and postoperative complications are common.
Treatment outcome is likely to be influenced by the severity of comorbid conditions, the extent of the underlying disease, delays in diagnosis, and the timeliness of initiation of effective therapy. [3] In a country like India, a cavity in the upper zone of a chest radiograph is often taken to indicate pulmonary tuberculosis even in the absence of acid-fast bacilli (AFB). The diagnosis of chronic necrotizing pulmonary aspergillosis must be considered in patients whose disease progresses, with evolution of a fungus ball, after a trial of antitubercular treatment.
Associations of the MIND Diet with Cardiometabolic Diseases and Their Risk Factors: A Systematic Review

Purpose: Recent studies have expanded the scope of research on the Mediterranean-DASH Intervention for Neurodegenerative Delay (MIND) diet beyond its impact on cognitive performance. These investigations have specifically explored its potential to provide protection against cardiometabolic diseases and associated risk factors, including obesity and dyslipidemia.

Methods: We systematically summarized and evaluated all existing observational and trial evidence for the MIND diet in relation to cardiometabolic diseases and their risk factors in adults. PubMed, Embase, CINAHL and Cochrane Library databases were systematically searched to extract original studies on humans published until September 2023, without date restrictions. A total of 491 studies were initially retrieved, out of which 23 met the eligibility criteria and were included in the final review. Duplicated and irrelevant studies were screened out by five independent reviewers using the Rayyan platform. Quality assessment was ascertained using the Newcastle-Ottawa scale for observational studies and the Cochrane risk-of-bias tool (RoB 2) for randomized trials.

Results: Across the different study designs, the MIND diet was generally associated with an improvement in anthropometric measures and other cardiometabolic outcomes, such as blood pressure, glycemic control, lipid profile, inflammation and stroke. The effects of the MIND eating pattern on some cardiovascular diseases are less conclusive.

Conclusion: The findings of this systematic review support the recommendation of the MIND diet as a strategy to reduce cardiometabolic risk in adults. Further well-designed and long-term studies are warranted.

Introduction

Cardiometabolic diseases are rapidly growing as a global health concern, [1,2] comprising a variety of conditions including cardiovascular disease (CVD), diabetes mellitus (DM), dyslipidemia, hypertension and non-alcoholic fatty liver disease (NAFLD). [3] According to the International Diabetes Federation (IDF), more than one in ten adults are now living with diabetes globally, and this number will continue to rise. [4] Since the beginning of the century, the prevalence of diabetes in adults has increased more than threefold, from an estimated 151 million in the year 2000 to 537 million today. This occurs in parallel with increases in the rates of other cardiometabolic diseases. For one, CVD remains the leading cause of mortality worldwide. [5] From 271 million in 1990 to 523 million in 2019, prevalent cases of CVD have almost doubled over the last 30 years. [6] The shift in dietary recommendations toward whole dietary patterns [12-16] carries forward the emphasis that nutrients and foods are not consumed in isolation, but rather in various combinations over time. [17] Dietary patterns account for the synergistic and/or antagonistic effects of these combinations in the diet as a whole. [18] Furthermore, the overall influence of diet on cardiometabolic disease is more likely to be caused by the combined effects of dietary components, rather than those of a single nutrient or food.

Dietary patterns rich in plant foods, such as fruits, vegetables, wholegrains, nuts, seeds and legumes, are increasingly recommended as a strategy to lower the risk of cardiometabolic diseases. [10] The Mediterranean and Dietary Approaches to Stop Hypertension (DASH) diets are examples of such patterns. [19,20]
These diets also limit red meats, sweets, sweetened beverages and processed foods that are often associated with the Western diet and typically linked with chronic disease. [21,24-30] In 2015, the Mediterranean and DASH diets were combined into a hybrid diet tailored specifically for the protection of the brain. [31] Termed the Mediterranean-DASH Intervention for Neurodegenerative Delay (MIND), the diet emphasizes ten brain-healthy foods, namely green leafy vegetables, other vegetables, berries, nuts, beans, whole grains, fish, poultry, olive oil, and wine (Table 1). [32] On the MIND diet, five brain-unhealthy foods (butter and margarine, cheese, red meat and products, fast fried foods, pastries and sweets) should be consumed in limited amounts. Hence, similar to the Mediterranean and DASH diets, the MIND diet highlights plant-based foods with limited intake of animal and saturated-fat foods. It uniquely specifies the consumption of green leafy vegetables and berries, but does not emphasize other types of fruits. There are no recommendations on the consumption of potatoes and dairy products, nor does the diet recommend more than one fish meal per week. [32] It is hypothesized that the MIND diet may have positive effects on the prevention or management of cardiometabolic diseases. [23-30,40-42] Given the rapidly increasing aging population, there is a pressing need to address the alarming rise in the rates of cardiometabolic diseases, which are common among older adults. To our best knowledge, there is currently no comprehensive review summarizing studies on the associations of the MIND diet with cardiometabolic diseases or any of their risk factors. Therefore, the aim of this review is to systematically summarize and evaluate all existing observational and trial evidence for the MIND diet in relation to cardiometabolic diseases and their risk factors in adults.

Methods

The current systematic review was conducted following the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines. [43]

Search Strategy

A literature search was conducted in the PubMed, Embase, CINAHL and Cochrane Library databases until September 2023, with no date restrictions. The following keywords were employed in the search: "cardiovascular disease" OR "dyslipidemia" OR "noninsulin dependent diabetes mellitus" OR "abdominal obesity" OR "hypertension" OR "nonalcoholic fatty liver" OR "inflammation" OR "insulin resistance" OR "anthropometry" OR "metabolic syndrome" OR "cardiometabolic risk factor" OR "liver function test" OR "lipid profile" OR "glycemic control" OR "body composition" in all-fields keywords AND "Mediterranean-DASH intervention for neurodegenerative delay" OR "MIND diet" in all-fields keywords. No limits were applied, to ensure the collection of all possible research papers. Details of the search strategy used are included in Supplementary Tables 1-5.
Eligibility Criteria
All original articles published in the English language were included. Included studies were observational studies (cross-sectional, case-control and cohort studies) and interventional studies (randomized controlled trials) addressing the association of the MIND diet with cardiometabolic diseases (CVD, DM, stroke, dyslipidemia, obesity, hypertension, NAFLD) and their risk factors (anthropometric measurements, blood pressure, lipid profile, glycemia, insulin resistance, inflammation). Excluded studies were those published in other languages, performed on other populations (eg, children), or examining other outcomes (eg, cognition). Furthermore, studies that investigated the association of other dietary patterns (eg, Mediterranean diet or DASH diet) with cardiometabolic diseases and their risk factors were also excluded.

Selection of Studies
The extracted studies were transferred to the Rayyan platform, 44 and duplicates were removed prior to screening. The titles and abstracts of the remaining records were screened by five independent reviewers to identify articles potentially eligible for inclusion in the systematic review. In case of exclusion of any study, the reasons for exclusion were documented. The full texts of the screened studies were then critically reviewed separately by each of the five reviewers for eligibility and data extraction. Any discrepancy in evaluation between the reviewers was resolved through meetings and discussions.

Data Extraction
Data extraction of selected studies was performed and cross-checked by all five reviewers. Data from each study were extracted and categorized as follows: first author and year of publication, country, study design, follow-up duration (if applicable), sample size and characteristics, dietary assessment method and MIND diet range/categories. In studies where some components of the MIND diet were not included in the final calculation of its score, these components were also documented. Moreover, any outcome measures related to cardiometabolic diseases and their risk factors were extracted. Finally, covariates and findings accompanied by odds/hazard ratios, confidence intervals, or other indicators of association and their p-values, if available, were extracted. Results of the fully adjusted models of the studies were used to evaluate the relationship of the MIND diet with cardiometabolic parameters.

Quality Assessment
Selected studies were assessed for methodological quality by five independent reviewers. The quality and risk of bias of the included observational studies (cross-sectional, case-control and cohort studies) were assessed using the Newcastle-Ottawa quality assessment tool. 45 Three main domains were assessed to determine the quality and risk of bias: selection of the exposed and non-exposed groups; comparability of groups on the basis of the design or analysis controlling for confounders; and the assessment of either outcome (cross-sectional and cohort studies) or exposure (case-control). A star system was applied to classify the articles as good, moderate, or poor quality. Studies with a total score of 6 or higher were classified as high quality. The quality of the randomized controlled trials (RCTs) was evaluated using the Cochrane Collaboration's tool 46 focusing on bias in selection, performance, detection, attrition, and reporting. In addition, bias in the matching of control and treatment groups regarding age, education and anthropometric indices was examined.
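A minimal sketch of how these star counts translate into quality labels follows; it assumes the maximum star counts later reported in the Results (9 for cohort and case-control studies, 7 for cross-sectional studies) and is our reading of the thresholds, not the authors' code.

```python
# Map Newcastle-Ottawa star counts to the quality labels used in
# this review. Assumed maxima: 9 (cohort, case-control), 7 (cross-sectional).

MAX_STARS = {"cohort": 9, "case-control": 9, "cross-sectional": 7}

def nos_quality(design: str, stars: int) -> str:
    """Classify an observational study from its total NOS star count."""
    if not 0 <= stars <= MAX_STARS[design]:
        raise ValueError("star count outside the scale for this design")
    # The review classifies studies scoring 6 or more as high quality.
    return "high" if stars >= 6 else "moderate/low"

print(nos_quality("cohort", 8))        # high
print(nos_quality("case-control", 5))  # moderate/low
```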
Results

Selection of Studies
As guided by the search strategy, 491 records were first retrieved, of which 383 remained after removing duplicates. During the first screening phase, records were screened by title and abstract, and only 49 publications were relevant to the topic and scope of this review. An additional 26 articles were excluded after full-text screening as they did not meet the eligibility criteria. Finally, 23 articles (14 cross-sectional, 5 cohort, 1 case-control and 3 RCTs) were included in this review (Figure 1).

Characteristics of Included Studies
Studies included in this review varied in terms of key characteristics, as shown in Table 2. Variation was observed in study design, sample size, MIND diet scoring, dietary assessment methods and sample characteristics. The majority of the studies found on the MIND diet in relation to cardiometabolic diseases were observational, 39,41,42,[47][48][49][50][51][52][53][54][55][56][57][58][59][60][61][62][63] except for three RCTs. 40,64,65 All studies were published within the past six years (2018-2023). Twelve of the studies were conducted in Iran, [39][40][41]49,53,54,[57][58][59][61][62][63] one of which was an RCT. 40 The rest of the studies spanned Egypt, 65 China, 42,51,64 the United States 48,55,56,60 and Australia. 47,50,52 Sample size varied between 37 and 10,009 participants among the different studies. The mean age of participants was above 35 years in most studies. Most studies included both males and females, except for four studies that included only women. 40,41,58,65 Additionally, there were some discrepancies in the study populations of the included reports. Some were conducted on healthy adults, [47][48][49]53,56,59 three studies included overweight or obese women, 40,41,58 while one included post-menopausal women with mild cognitive impairment (MCI). 65 Additionally, one study recruited older adults with hypertension, 64 while another was conducted on individuals with diabetes. 57 The only case-control study included hospitalized stroke cases and hospitalized controls. 54 For the assessment of dietary intake, most studies used Food Frequency Questionnaires (FFQs) to determine MIND diet scores. Two studies used multiple 24-hour dietary recalls. 47,50 The range of MIND diet scores varied between studies. Only six studies used the full score of 15, as they included all MIND diet components. 41,47,48,56,60,64 Six studies excluded wine from the score, using a total score of 14. 40,49,54,57,58,61 Another five studies excluded wine and olive oil, resulting in a total MIND diet score of 13. 39,53,59,62,63 Two studies used a score of 9 due to lack of sufficient information on other MIND diet components. 42,51 One study used a score range of 15-75, as the consumption of the 15 food components was divided into quintiles and scored up to five points each. 50 Finally, one study did not use a score, since the MIND diet was delivered as an intervention. 65
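To make the scoring variants concrete, here is a hedged sketch. The component list follows the diet description earlier in this review; the binary per-component maximum is a simplified stand-in (the published MIND score actually awards 0, 0.5, or 1 point per component), so this is an illustration of how excluded components shrink the score range, not any study's code.

```python
# The 15 MIND diet components: 10 brain-healthy plus 5 limited foods.
FULL_COMPONENTS = [
    "green leafy vegetables", "other vegetables", "berries", "nuts",
    "beans", "whole grains", "fish", "poultry", "olive oil", "wine",
    "butter/margarine", "cheese", "red meat", "fast fried food",
    "pastries/sweets",
]

def max_score(excluded=()):
    """Each retained component contributes at most 1 point."""
    return len([c for c in FULL_COMPONENTS if c not in excluded])

print(max_score())                       # 15 (full score, six studies)
print(max_score({"wine"}))               # 14 (six studies)
print(max_score({"wine", "olive oil"}))  # 13 (five studies)
# One study instead scored all 15 components in quintiles
# (1-5 points each), giving a 15-75 range:
print(15 * 1, "-", 15 * 5)               # 15 - 75
```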
The main confounders adjusted for in most studies included age, sex, energy intake, body mass index (BMI), marital status, physical activity, education status, occupation, smoking, socio-economic status, alcohol use, number of diseases and medical history of diseases (hypertension, diabetes, dyslipidemia, and heart disease). Two RCTs and one cross-sectional study did not adjust for any confounders. 40,57,65

Quality of Articles
All included observational studies, except one, 57 were rated as moderate to good quality (Supplementary Tables 6-8). Cohort studies were of moderate to good quality, with quality assessment scores ranging from five to eight (out of nine, representing the lowest degree of bias) (Supplementary Table 6). The single case-control study included in this review was of moderate quality, with a score of five (out of nine, representing the lowest degree of bias) (Supplementary Table 7). The quality assessment scores of the fourteen cross-sectional studies ranged between three and six (out of seven, representing the lowest degree of bias) (Supplementary Table 8). The main factors impairing the quality of these observational studies were the self-administered questionnaires, self-reported exposures or outcomes, selection bias, confounding bias (due to lack of adjustment for important covariates) and non-response bias related to lack of information regarding certain domains. As for the randomized controlled trials, the overall risk-of-bias judgment for all RCTs had some concerns (Figure 2; Supplementary Tables 9-11). Reasons limiting the quality of these RCTs were lack of blinding, due to the nature of the interventions, and significant differences between trial groups.

Relationship Between MIND Diet and Cardiometabolic Diseases

Cross-Sectional Studies
Fourteen cross-sectional studies examined the association of the MIND diet with cardiometabolic outcomes. [56][57][58][59][60][61] Seven of these studies showed no significant association between the MIND diet and cardiometabolic disease or metabolic syndrome (MetS). 39,47,49,50,58,59,61 MetS is defined as a cluster of cardiometabolic risk factors such as central obesity, hypertension, dyslipidemia and impaired glucose tolerance. 66 MIND diet adherence was negatively correlated with waist circumference, BMI, the total cholesterol (TC) to high-density lipoprotein cholesterol (HDL-C) ratio (TC/HDL-C) and diabetes status, 39,47,48,56 and positively correlated with HDL-C (p<0.05). 49,56 Additionally, lower scores on the MIND pattern were positively associated with a low ankle-brachial index (ABI), which indicates atherosclerosis. 51 Some studies had gender-specific outcomes, whereby a higher adherence to the MIND diet was negatively associated with obesity, but only in women (OR: 0.81; P<0.03). 39 In another study, the inflammatory biomarker high-sensitivity C-reactive protein (hs-CRP) was lower among men adhering to the MIND diet. 42 One study reported a significant interaction between the MIND diet and the CAV1 rs3807992 polymorphism for metabolic dyslipidemia. 41

Cohort and Case-Control Studies
A total of five cohort studies tested the association between the MIND diet and cardiometabolic disease over time. 52,53,55,62,63 One study showed that a higher MIND diet score was associated with a lower incidence of diabetes, 55 while another showed an inverse association between the MIND diet and incident CVD. 53 According to Golzarand et al, as the consumption of MIND diet components increased, the risk of CVD decreased. 53 Whole grains, green leafy vegetables, and beans reduced the risk of CVD by 60%, 45%, and 65%, respectively. 53
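One point of arithmetic worth making explicit: percentage risk reductions of this kind correspond to 1 - HR when derived from hazard ratios, which we assume here purely for illustration; the HR values below are hypothetical back-calculations, not figures from the cited study.

```python
# Relative risk reduction implied by a hazard ratio below 1.

def risk_reduction(hr: float) -> float:
    return 1.0 - hr

# Hypothetical HRs of 0.40, 0.55 and 0.35 would yield the quoted
# 60%, 45% and 65% reductions.
for hr in (0.40, 0.55, 0.35):
    print(f"HR {hr:.2f} -> {risk_reduction(hr):.0%} reduction")
```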
On the other hand, Livingstone et al reported a positive association between the MIND diet and nonfatal CVD events (MI and stroke) (HR: 1.20; 95% CI: 1.05-1.36). 52 However, this association lost its significance after excluding deaths that occurred in the first 2 years of follow-up. In addition, one study did not find an association between higher MIND diet scores and hypertension risk. 62 In another study, there was a reduced risk of metabolically unhealthy phenotypes (such as elevated lipid profile, blood pressure or blood glucose) with higher MIND diet scores. 63 One case-control study conducted in Iran included participants aged over 45 years who were recruited as hospitalized stroke cases and hospital-based controls. The study found that the MIND diet score was inversely associated with the odds of stroke (T3 vs T1 OR: 0.41; 95% CI: 0.18-0.94). 54

Randomized Controlled Trials
The effect of the MIND diet intervention on cardiometabolic diseases or their risk factors was reported in three single-blind randomized controlled trials conducted on three different populations. 40,64,65 After a four-week intervention in older adults with hypertension, the MIND diet decreased waist-to-hip ratio (WHR) by 0.03 (p=0.050) compared to the control group. 64 Total cholesterol and LDL-C decreased by 0.60 mmol/L and 0.33 mmol/L in the MIND diet group (compared to 0.57 and 0.28 mmol/L in the control group), respectively. Additionally, blood glucose levels decreased significantly, by 0.68 mmol/L, in the MIND diet group. 64 In a 3-month trial in healthy overweight and obese women, the effects of a calorie-restricted MIND diet and a calorie-restricted control diet on anthropometric measures were investigated. The MIND diet group participants experienced a significant reduction in weight (-3.98±0.29), BMI (-1.55±0.11), percentage of body fat (-5.16±0.82), and waist circumference (-3.54±0.56). 40 Furthermore, a trial in 60 post-menopausal women with mild cognitive impairment (MCI) also showed that participants following the MIND diet significantly reduced their body weight and BMI after 12 weeks of intervention (p<0.05). 65

Discussion
This is the first systematic review to assess the relationship between the MIND diet and cardiometabolic diseases and their risk factors. Overall, the included studies indicated that adherence to the MIND diet was associated with favorable cardiometabolic outcomes in adults. Across all the different study designs, a desirable significant effect was observed on obesity and anthropometric indicators, including waist circumference, 40,47,48,56 WHR 64 and BMI. 40,48,65 Significant improvements were also found in blood pressure, 57 glycemic outcomes, 48,56 HDL, [47][48][49]56 triglycerides, 56,64 and total cholesterol. 64 Moreover, the MIND diet was found to reduce inflammatory markers, 42 as well as the incidence of DM, 55 CVD, 53 stroke 54 and atherosclerosis. 51 However, no significant effect was found for systolic blood pressure, 64 metabolic syndrome, 47,49 HDL, 64 body fat percentage (BFP), 64 and other markers of cardiovascular disease risk in some studies. 50

Anthropometric Measurements
The studies included in this systematic review reported on several anthropometric measures. In RCTs, the MIND diet was associated with a reduction in waist circumference, BMI, WHR and weight, whereas effects on BFP were conflicting. 40,64,65
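For readers tracking these anthropometric changes, a brief computation using the standard BMI definition (weight in kg divided by height in m squared) may help; the height and weight below are hypothetical, chosen only to mirror the magnitude of weight loss reported in the RCT above.

```python
# BMI = weight (kg) / height (m)^2 -- the definition behind the BMI
# changes discussed here. Numbers are illustrative, not study data.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

before = bmi(80.0, 1.65)        # ~29.4
after = bmi(80.0 - 3.98, 1.65)  # weight loss of the size reported
print(f"{before:.1f} -> {after:.1f} (change {after - before:+.1f})")
```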
Yau et al reported no effect of the dietary intervention on BFP in comparison to the control group, which may be due to lower levels of body fat at baseline. Cross-sectional investigations also revealed negative correlations with waist circumference, 47,56 general obesity 39,49 and BMI, 48 but no association with abdominal obesity. 49 As cross-sectional studies are observational in nature and report on current intake, the discrepancy in abdominal obesity may be explained by the energy-restricted interventions employed in the RCTs. [74][75]

Lipid Profile
In the present systematic review, the MIND diet was generally associated with a beneficial effect on lipid biomarkers. Of the included RCTs, Yau et al reported a reduction in triglycerides, TC and LDL-C in the MIND diet group. 64 In cross-sectional studies, the MIND diet was positively associated with HDL-C 47,49,56 and negatively related to the TC/HDL-C ratio. 48 The results for these biomarkers were comparable across different study designs, except for TG. In a cross-sectional analysis, Holthaus et al reported an inverse association of TG with high MIND diet scores, 56 while Mohammadpour et al reported no significant association between a higher MIND diet score and the odds of high serum TG. 49 This may be explained by the high intake of red meat and margarine by participants in the highest tertile (T3) of the MIND diet, apart from a high intake of brain-healthy foods. The intake of these foods should be limited to a certain number of servings per day in the MIND eating pattern.

Glycemic Control and Type 2 Diabetes
Evidence on glycemic outcomes was mixed across study designs. 61,64 The RCT by Yau et al showed a reduction in glucose levels in the MIND group, and these results were supported by some of the observational research included in this review. A cohort study showed that high MIND diet scores were associated with a lower incidence of diabetes. 55 In two cross-sectional studies in the USA, the MIND diet was negatively correlated with DM 48 and fasting blood sugar (FBS), 56 respectively. However, no association was found with fasting blood glucose in three Iranian studies. 49,57,61 Published literature on the effect of the Mediterranean and DASH diets on glycemic measures has been somewhat inconsistent. While Mediterranean diets have shown beneficial effects on glycemic profile, 76,77 some meta-analyses of DASH did not find a significant association with blood glucose. 75,78

Blood Pressure and Hypertension
Findings on blood pressure were likewise mixed. 3,64 Some observational studies showed that blood pressure was inversely related to the MIND diet, 56,57,60 while others did not. 48,61 For instance, Mohammadpour et al found no association between elevated blood pressure and the MIND eating pattern. 49 This could be explained by the baseline value of blood pressure in tertile 3, which was already much lower than in tertiles 1 and 2. Further, the MIND diet was not associated with SBP in the RCT. In addition, a cohort study did not find an association between higher MIND diet scores and hypertension risk. 62 Compared with the two dietary patterns from which the MIND diet is derived, while the DASH diet has been studied extensively for its beneficial effects on blood pressure, 22,79,80 the evidence on the Mediterranean diet and improved blood pressure outcomes is compelling [81][82][83] but not conclusive. 84,85 Further trials and longitudinal studies are needed before the MIND diet can be recommended as an intervention for controlling blood pressure.
Cardiovascular Disease
Several of the included studies examined cardiovascular outcomes. [51][52][53][54] An inverse association between the MIND diet and stroke was reported in one cohort 53 and one case-control study, 54 in line with earlier findings. [87][88] In the same cohort, the MIND diet was also inversely related to coronary heart disease. 53 One cross-sectional study reported a positive association between a low MIND diet score and a low ankle-brachial index (ABI), which is an indication of atherosclerosis. 51 This is in line with evidence on the Mediterranean and DASH dietary patterns, which have been strongly linked to lower risks of CVD. 28,89 In another cross-sectional study, the MIND diet was found to be unrelated to CVDs, including ischemic heart disease, CHD, angina, heart failure and cerebrovascular disease. 50 However, upon exclusion of participants who misreported their energy intake, an inverse association was found between the MIND diet and heart failure only. The lack of a relationship with most CVDs observed in this study may be partly explained by the narrow range of MIND diet scores present in the sample, which may have affected the ability to detect associations.

Inflammation
One cross-sectional study in this review found an association between the MIND diet and reduced hs-CRP in men, suggesting a protective effect against inflammation, 42 consistent with prior reports. [91][92] The lack of an observed association in women in the study by Chan et al may be partly explained by its incomplete dietary assessment. The maximum score of MIND adherence was 9 instead of 15, as the authors reported insufficient information to determine the use of olive oil and the consumption of fish, beans, poultry, red meat and products, and fast fried foods. 42 Western dietary patterns characterized by high intake of red meats and fried foods have been previously related to increased levels of CRP. 93 A conclusion on the effect of the MIND diet on inflammation in women cannot be drawn without such crucial information on their dietary intake. Furthermore, the gender difference may also be explained by heterogeneity in the dietary patterns of the men in the study. Hence, the benefits of the MIND diet components were more easily detected in men than in women, ultimately showing a reduction in their CRP. 42

Mechanisms
The MIND eating pattern is derived from the Mediterranean and DASH diets to include components with the strongest evidence for cognitive health. Diabetes and hypertension are risk factors for Alzheimer's disease, and the pathogenesis of neurocognitive disorders lies within similar pathways as cardiometabolic diseases. 94 Therefore, it is reasonable to presume that, similar to the Mediterranean [95][96][97] and DASH eating patterns, 22 the MIND diet would also have cardiometabolic benefits. One proposed mechanism behind the association between the MIND diet and cardiometabolic diseases could be attributed to its components that are rich in antioxidants and anti-inflammatory molecules. 32 The MIND diet consists of ten brain-healthy foods (green leafy vegetables, other vegetables, berries, nuts, beans, whole grains, fish, poultry, olive oil, and wine), which are known to have cardiometabolic protective properties. 32 A 3-year prospective cohort study found that high adherence to the healthful plant-based diet index (hPDI), which is rich in whole grains, fruits, vegetables, nuts, legumes, vegetable oils, and tea/coffee, was inversely associated with T2DM (HR: 0.55; 95% CI: 0.51-0.59, p<0.001) in American adults. 98
This negative association could be explained by the high content of bioactive substances known as phytochemicals found in plants, mainly green leafy vegetables. 99,100 The MIND diet is rich in phytochemicals, a group of more than 5000 compounds that have been established to reduce the risk of different chronic diseases. 100 Furthermore, the MIND diet specifies high consumption of berries, which are rich in specific phytochemicals termed flavonoids. 32 Flavonoid intake has been linked to favorable cardiometabolic outcomes. [102][103][104] Moreover, in-vitro and in-vivo studies have revealed that flavonoids can be promising anti-diabetic phytochemicals. 105 Another potential mechanism appears to be linked to the role of the MIND diet in upregulating genes involved in mitochondrial respiration and oxidative phosphorylation. 106 Mitochondrial dysfunction is associated with a reduction in these processes and leads to an increase in reactive oxygen species, which are implicated in the progression of cardiometabolic diseases, inflammation and insulin resistance. 107,108 The MIND diet was originally developed for cognitive health and has been extensively studied in observational research for its benefits on cognition, via mechanisms similar to those discussed above. 37,109 Recently, the first RCT comparing the MIND diet to a control diet with mild caloric restriction showed no significant changes in cognition and brain structure between the two groups. 110 Additionally, the incidence of adverse events was similar in both groups and was not associated with the diets. Although comparable outcomes were observed across both groups, the findings highlight the MIND diet as a potential dietary pattern that could be adopted to reduce risk factors associated with the pathogenesis of cardiometabolic and cognitive diseases. Given the observed null association, it may be conceivable that individuals in the control group enhanced their diets, as indicated by the comparable weight loss observed across both groups. Additionally, there is a possibility that repeated testing improved cognitive markers, and that a period longer than three years would be required to observe positive effects related to diet. Further long-term trials are necessary to validate the findings on the MIND diet observed in various prospective studies.

Strengths
The current systematic review is the first to summarize all available evidence on the association of the MIND diet with cardiometabolic diseases. An extensive search of four databases was conducted by 5 independent reviewers, which may help to reduce publication bias. Each article was carefully reviewed and critically appraised using well-established tools. Studies included in the review spanned 5 different countries (USA, Australia, Iran, China, Egypt), improving the generalizability and applicability of the findings. The prospective cohorts all had large sample sizes (>2000 participants), which enhances the precision of the estimates. The overall quality of the observational studies was judged as moderate to good, and one RCT had a low risk of bias. Most of the included studies used validated dietary assessment tools, mainly food frequency questionnaires. Another strength is the adjustment for key confounders (eg, energy intake and BMI) by many of the included studies, which helps to illustrate the independent relationship between the MIND diet and cardiometabolic diseases. Furthermore, as an a priori dietary pattern, the MIND diet can be easily compared across studies, an added strength of this review.
Limitations
The review has several limitations. For one, out of the twenty-three included studies, fourteen were cross-sectional in design, which limits inference about temporality and causality. Second, some studies had small sample sizes (<100 participants), which might limit their statistical power. Third, many studies excluded some MIND diet components from their score due to lack of sufficient information from the participants. Most importantly, eight studies excluded olive oil, a key component of the MIND diet. Given the established protective effects of dietary patterns high in olive oil on cardiometabolic outcomes, the findings from these studies should be interpreted with caution. Some components of the MIND diet require assessment of weekly consumption, which could not be applied to studies that used 24-hour recalls. However, the collection of multiple 24-hour recalls on non-consecutive days may be used to estimate usual dietary intakes in group settings. 111 In addition, while most studies employed validated FFQs, this method of dietary assessment is prone to recall bias and reporting errors. Moreover, the heterogeneity among the studies, such as in their designs, populations and outcomes, makes it difficult to compare results. Lastly, although most studies used multivariable models, the possibility of residual confounding cannot be fully ruled out.

Conclusion
In conclusion, the MIND diet appears to be associated with improvements in cardiometabolic parameters, including anthropometric measures, lipid profile, inflammation and incidence of stroke. However, due to heterogeneity among the included studies, the results should be interpreted with caution. Nonetheless, given the demonstrated effectiveness of the MIND diet in managing Alzheimer's disease, which shares similar pathways and risk factors with cardiometabolic diseases, this healthful eating pattern may have the potential to play a role in disease risk reduction. With an increase in the aging population and in the prevalence of multiple comorbidities, including cognitive and cardiometabolic diseases, healthy dietary patterns like the MIND diet should be promoted as a strategy for prevention. Further well-designed and long-term studies are warranted to confirm these findings.

Ethics Policies
This research did not require ethical approval.

Figure 1 Flow diagram of the identified and screened articles on the MIND diet in relation to cardiometabolic diseases and their risk factors.
Figure 2 (a) Risk of bias summary of studies examining the effects of the MIND diet. (b) Risk of bias graph of studies examining the effects of the MIND diet.
Table 1 MIND Diet Components and Servings.
Table 2 Characteristics of the Included Studies on the Associations of the MIND Diet with Cardiometabolic Diseases and Their Risk Factors.
2023-10-28T15:14:42.301Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "c549e9250b7140006c9fdfc5db446f97344f6c97", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "73f8b1d7397d5ad182c20e14d3e21142c09362a4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232480466
pes2o/s2orc
v3-fos-license
Meningeal “Lazarus Response” to Lorlatinib in a ROS1-Positive NSCLC Patient Progressing to Entrectinib

Background
ROS1 tyrosine kinase inhibitors (TKIs) have shown activity and efficacy in ROS1-rearranged non-small cell lung cancer (NSCLC). In clinical practice, beyond the use of crizotinib, less is known about the best treatment strategies involving additional, new-generation TKIs for the sequential treatment of ROS1-positive NSCLC patients.

Case Presentation
A patient suffering from a ROS1-rearranged lung adenocarcinoma, after receiving cisplatin-pemetrexed chemotherapy, was treated with entrectinib, a new-generation ALK/ROS1/NTRK inhibitor. After 16 months, central nervous system (CNS) metastases appeared, without extra-cerebral disease progression. Stereotactic brain radiotherapy was performed and entrectinib was maintained, owing to the global systemic disease control. Approximately one month after radiotherapy, thoracic and meningeal progression was detected, the latter highly symptomatic with neurocognitive disorders, visual hallucinations and worsening psycho-motor impairment. A lumbar puncture was positive for tumor cells and for an EZR-ROS1 fusion. The administration of lorlatinib (a third-generation ALK/ROS1 inhibitor) prompted an extremely rapid improvement in clinical condition, anticipating the positive results observed at radiologic evaluation, which confirmed a disease response still ongoing nine months after treatment start.

Discussion
With the expanding availability of targeted agents with differential activity on resistance mechanisms and on CNS disease, wisely choosing the best treatment strategy is pivotal to assure the best clinical outcomes in oncogene-addicted NSCLC patients. Here we report that lorlatinib reverted an almost fatal meningeal carcinomatosis that developed during entrectinib treatment in a ROS1-positive NSCLC patient.

Introduction
The current availability of several tyrosine kinase inhibitors (TKIs) for each oncogenic molecular alteration (eg, EGFR mutations, ALK and ROS1 rearrangements) has allowed significant improvement of survival outcomes in non-small cell lung cancer (NSCLC) patients. ROS1 (whose rearrangement is present in 1-2% of NSCLC patients) is a receptor tyrosine kinase (RTK) structurally similar to ALK (rearranged in 4-7% of NSCLC), and since the first-generation TKI crizotinib, several other inhibitors active against both RTKs have been developed. 1,2 Novel-generation ROS1 TKIs include ceritinib, entrectinib, cabozantinib, brigatinib, taletrectinib (DS-6051b), lorlatinib, and repotrectinib. 3 Notably, besides crizotinib, entrectinib has recently received approval from the EMA, the FDA and the Japanese agency PMDA for the treatment of ROS1-driven lung cancer patients, while lorlatinib, a third-generation ALK/ROS1 TKI, is the treatment of choice at crizotinib progression. 4,5 Besides the sequence crizotinib-lorlatinib, less is known concerning treatment strategies involving TKIs other than crizotinib administered as upfront targeted agents, considering the relative rarity of ROS1 positivity in NSCLC patients. Here, we report the history of a patient suffering from ROS1-rearranged NSCLC who developed meningeal progression while undergoing entrectinib. Switching to lorlatinib engendered an impressive clinical-radiological disease response that is still ongoing.

Case Presentation
In February 2018, a 62-year-old woman with a previous light smoking history (five pack-years) was diagnosed with lung adenocarcinoma with pleural, pericardial, lymph nodal and bone metastases.
After two cycles of first-line chemotherapy with cisplatin and pemetrexed (best response: stable disease) and the detection of an EZR-ROS1 fusion on liquid biopsy (InVisionFirst-Lung amplicon-based assay), 6 confirmed by FISH on a tumor sample, the patient received entrectinib 600 mg daily from July 2018. No relevant side effects were recorded and a partial response was achieved after two months. In November 2019, multiple brain lesions were detected (Figure 1A), accompanied by neurological impairment conditioning an ECOG performance status (PS) of 2. Entrectinib was suspended and fractionated stereotactic brain radiotherapy was performed on the five largest metastases (the remaining ones being subcentimetric) (Figure 1B). In January 2020, after entrectinib resumption, new brain MRI and CT scans respectively detected meningeal carcinomatosis (Figure 2A), with partial regression of the irradiated metastases (Figure 1C), and thoracic progression (carcinomatous lymphangitis and parenchymal lesion increase, pleural effusion onset; Figure 3A). A lumbar puncture was positive for tumor cells, and an EZR-ROS1 fusion was detected in the cerebrospinal fluid (CSF) by the Oncomine™ Lung cfDNA Assay (Thermo Fisher Scientific), without additional mutations across the kinase domain potentially implicated in resistance. Meanwhile, neurocognitive disorders worsened, with the onset of visual hallucinations and psycho-motor deficiency (ECOG PS 3). After approval had been obtained within a "temporary authorisation for use" program, third-line lorlatinib 100 mg daily was initiated at the end of January 2020 and led to rapid neurological improvement within the very next days, allowing the patient's discharge. CT scan and brain MRI in March 2020, performed six weeks after treatment start, detected a thoracic response (Figure 3B), with major regression of brain and meningeal involvement (Figure 1D, Figure 2B). At the last follow-up in October 2020, nine months after lorlatinib initiation, the patient is in good clinical condition (ECOG PS 1), intra- and extracranial disease response is maintained, and the TKI is well tolerated without toxic effects (eg, hypercholesterolemia, hypertriglyceridemia, weight increase, neurocognitive disorders). 7

Discussion
Several inhibitors are currently available for the treatment of ROS1-positive NSCLC. Targeted therapy is recommended as the first-line treatment of choice. 8,9 The evidence sustaining these recommendations is limited, considering that only Phase I-II, non-randomized trials have been conducted in this molecular subset of patients. 4,5,10-16 Upfront chemotherapy can indeed be considered, especially in the case of low-burden disease without relevant symptoms, while awaiting complete molecular information. Targeted first-line treatment options are represented by crizotinib and entrectinib, with a potential preference for the latter in the case of brain metastases. 3 Ceritinib and entrectinib failed to show activity after progression on crizotinib. 14,17 Lorlatinib is active after crizotinib, while its administration at entrectinib progression has been far less evaluated (only one patient across prospective studies and retrospective series). 5,[18][19][20][21][22] Although lorlatinib is not yet licensed for the treatment of ROS1-positive NSCLC according to health authorities, its administration at progression on a previous ROS1 TKI is mentioned in both the ESMO and NCCN guidelines. 8,23
Central nervous system (CNS) metastases are common in ROS1-rearranged lung adenocarcinoma patients, documented in 20-30% and 35-50% of cases in the treatment-naïve and post-crizotinib settings, respectively. 24,25 Although lorlatinib is known to be active against CNS disease at crizotinib progression in ALK- and ROS1-driven lung cancers, only one recent case report has described a radiological dimensional decrease of brain metastases in a ROS1-positive patient receiving lorlatinib at entrectinib progression. 26 Entrectinib is deemed to have better CSF penetration than crizotinib, potentially explaining its better CNS activity. 4 In the present case, the absence of resistance mutations in the CSF, in particular G2032R, suggests that the progressive disease might have been due to insufficient CNS exposure to the TKI. Lorlatinib was preferred over an attempt at entrectinib dose escalation, given its well-known CNS activity and the fact that a fast response was required considering the patient's critical condition. The impressive CSF penetration of lorlatinib (75% of which passes through the blood-brain barrier) has probably contributed both to the extremely rapid disease response and to the prolonged CNS disease control observed. 27,28 As a result, a rapid and dramatic improvement of neurological condition, such as in our patient, is to be expected before any formal response on imaging assessment. In the clinical situation presented here, lorlatinib was likely the only inhibitor (together with repotrectinib) 15 able to tame disease aggressiveness at the CNS level at the moment of entrectinib progression. The recent availability of several targeted agents for ROS1-rearranged NSCLC requires a careful process of decision-making in order to guarantee the best patient outcomes.

Ethical Considerations
The patient provided written consent to publish her clinical history.
2021-04-02T05:32:42.744Z
2021-03-26T00:00:00.000
{ "year": 2021, "sha1": "0686a335d8583609f9c5bb516862e179e680556e", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=68036", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0686a335d8583609f9c5bb516862e179e680556e", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
251495563
pes2o/s2orc
v3-fos-license
Multicomponent Educational-Rehabilitation Approach in Rehabilitation of Patients After Stroke

Rehabilitation must be based on the individual needs and specific goals of the person and must be adapted to his abilities. According to the recommendation of the World Stroke Organization, the team involved in conducting rehabilitation should be multidisciplinary. One of the treatments applied within the multidisciplinary approach to a neurological patient is educational-rehabilitation treatment, which is multicomponent in nature. Before starting educational-rehabilitation treatment, an educational-rehabilitation clinical assessment is necessary, which aims to: detect difficulties caused by impairment; identify potentials and constraints in these areas; determine the specifics, course, and prognosis of difficulties; formulate clear treatment recommendations; form a watch list that will be available to all team members in the process of diagnosis, treatment, and education and serve to evaluate the effectiveness of treatment; and continuously monitor the abilities and adaptive behavior of the person. Educational-rehabilitation clinical treatment includes treatment of cognitive abilities, treatment of motor skills, relaxation, and treatment of adaptive skills, as well as informing the person about the disease and counseling. This review focuses on some aspects of rehabilitation, such as treatment of cognitive and motor disorders, treatment of adaptive skills, relaxation issues, and informing and counseling patients, from the perspective of an educational rehabilitator with practical experience in this area of rehabilitation.

Introduction
According to the World Health Organization, rehabilitation is "a set of measures that help individuals, who have or are likely to have a disability, to achieve and maintain optimal functioning in interaction with their environment". 1 Rehabilitation must be based on the individual needs and specific goals of the person and must be adapted to his abilities. 2 Treating a chronic patient means treating not only the disease but also all aspects that the disease affects, especially economic and social factors. 3 Approaches to the treatment of patients should be holistic, because without work on the cognitive, emotional, and physical aspects, the healing process is slowed, as is the desired effect of medication. 4,5 Stroke is the largest single cause of severe physical disability, and rehabilitation to reduce functional deficits is the most effective treatment. 6 Functional recovery is based on the restitution of brain tissue and on the relearning of, and compensation for, lost functions. [7][8][9] An important concept in rehabilitation is that of brain plasticity, which implies that it is possible to modulate or facilitate reorganization of cerebral processes by external inputs. 10 Stroke rehabilitation is a process; its objectives are to prevent deterioration of function, improve function, and achieve the highest possible level of independence (physically, psychologically, socially, and financially) within the limits of the persistent stroke impairments. During this process, treatment and training are provided to stroke survivors to help them return to normal life. By regaining and relearning the skills of everyday living through rehabilitation, many stroke survivors obtain greater independence in activities of daily living and improved functional capacity.
Hallmarks of effective stroke rehabilitation practice include multidisciplinary/interdisciplinary teamwork, teamwork coordinated by regular meetings, goal-focused activities and individualized goals, emphasis on functional activities, involvement of the patient and family in the rehabilitation process, provision of education to patients and families, and staff with specialized skills and interest in stroke. 11 The World Stroke Organization has for the past 2 decades emphasized the importance of a multidisciplinary approach in providing recovery and rehabilitation services after stroke. 12 The team involved in conducting rehabilitation should be multidisciplinary and include professionals such as a neurologist, physiatrist, nurses, physiotherapist, educational rehabilitator, speech therapist, neuropsychologist, occupational therapist, and social worker. Communication and coordination among these team members are paramount in maximizing the effectiveness and efficiency of rehabilitation. 13 One of the treatments applied within the multidisciplinary approach to a neurological patient is educational-rehabilitation treatment (ERT), which is multicomponent in nature. Educational-rehabilitation treatment is based on a bio-psycho-social model in which biological, psychological, and social factors must be observed simultaneously, and rehabilitation must be focused on the person as a whole and on achieving optimal physical, mental, and social potentials. 14 It is divided into stimulative treatment (encouraging the development of various abilities and skills), corrective treatment (based on direct action on endangered abilities or skills in order to alleviate or eliminate perceived difficulties), and compensatory treatment (based on the development of compensatory strategies). The methodology and organization of work should be flexible, guided by the characteristics and capabilities of the person and his environment, and not by pre-given criteria for adapting to the requirements of the environment. The general approach to treatment is ecological, aimed at developing the potential and alleviating the limitations of the person in specific life circumstances and in light of the unique requirements of these circumstances. 15 Before starting ERT, an educational-rehabilitation clinical assessment is necessary, which aims to: detect difficulties caused by impairment; identify potentials and constraints in these areas; determine the specifics, course, and prognosis of difficulties; formulate clear treatment recommendations; form a watch list that will be available to all team members in the process of diagnosis, treatment, and education and serve to evaluate the effectiveness of treatment; and continuously monitor the abilities and adaptive behavior of the person. Educational-rehabilitation clinical treatment includes treatment of cognitive abilities, treatment of motor skills, relaxation, and treatment of adaptive skills, as well as informing the person about the disease and counseling.

Treatment of cognitive abilities
Cognitive functions are intellectual processes by which we become aware of something, perceive, and understand ideas. Therefore, they refer to the processes by which we receive and process information. 16 The basic cognitive functions are attention, long-term memory, and perception, while the higher cognitive functions are speech, language, decision-making, and executive functions. 17
Frith and Dolan 18 state that the difference between higher and lower cognitive functions is that the lower ones are automated and do not require special effort, while higher cognitive functions are under conscious control. However, as some higher cognitive functions, such as reading and understanding language, may become automated, the most important distinguishing feature is the fact that higher cognitive functions require more cognitive effort than lower cognitive functions. Cognitive training conducted by an educational rehabilitator has 4 levels. The first level includes the treatment of cognitive functions that provide observation and input of new information, selection and sorting of relevant information, as well as information maintenance, planning, organization, and control of activities. Namely, our sensory organs accept information from the environment and the body, and the process of sensory integration enables the brain to organize this information and give it meaning. The brain can use only organized and integrated sensations for movement, learning, and behavior. 19 Sensory stimulation of impaired central nervous system functions, with appropriate series of stimuli, charts a new path for collaterals of weakened or lost neurons and participates in their connection. 20 The program of sensory stimulation includes visual, auditory, tactile, olfactory, gustatory, proprioceptive (sense of movement), and vestibular stimulation (sense of self-awareness: body position and movement in relation to gravity). The ERT of visual functions includes eye movement and focusing exercises, visual shape perception exercises (visual complementarity, figure-ground discrimination, and visual organization), spatial relationships (spatial reasoning, spatial perception, and visual imagining), spatial orientation (laterality and directionality), visual-motor integration (motor response to a visual stimulus and visual-motor coordination), visual memory (visual-spatial memory and visual-sequential memory), and speed of visual information processing (perceptual speed, automatism, and speed of motor responses). Difficulties in visual perception cause difficulties in the area of higher visual functions and cognitive difficulties in terms of visual agnosia, alexia, prosopagnosia, and achromatopsia. 21 They also affect information processing speed, perceptual speed, and visual search (functional vision). 22 Achtman et al 23 proposed computer games as part of vision rehabilitation because of their impact on the neuroplasticity of the visual system and on visual learning. There is a large selection of vision exercises for impaired motility, binocular vision, and accommodation, for example, the Brock string, exercises with eccentric circles, the Marsden ball, the space fixator, the McDonald field recognition exercise, the Wayne Saccadic Fixator, and the like. Visual exercises primarily train visual functions, but they also affect the functions of visual perception, such as visual-motor integration, visual-spatial organization, reaction speed, etc. 24 Although the bulk of the focus in stroke rehabilitation is on the recovery of motor and communication functions, visual impairment is slowly beginning to receive the same amount of attention. 25 Research indicates the need for vision rehabilitation to achieve the maximum possible recovery in comprehensive rehabilitation and more successful performance of daily life activities. 26,27
Vision rehabilitation has a positive effect on visual field width recovery, 28 and visual field function can be compensated for by increasing the visual search field 29 and increasing perception speed during visual search. 22 The results confirmed the impact of vision rehabilitation on visual functioning in daily activities. 30

Main Points
• An individual and holistic educational-rehabilitation approach in the rehabilitation of stroke patients, with a multidisciplinary team of experts, covers the complex motor and non-motor consequences of the disease, which affect the functional recovery and quality of life of the patient.
• Educational-rehabilitation treatment is based on a bio-psycho-social model in which biological, psychological, and social factors must be observed simultaneously, and rehabilitation must be focused on the person as a whole to achieve optimal physical, mental, and social potentials.
• Educational-rehabilitation clinical treatment includes treatment of cognitive abilities, treatment of motor skills, relaxation, and treatment of adaptive skills, as well as informing the person about the disease and counseling.

There are numerous reliable sources on the visual perception of patients with stroke. 25,27,28,[32][33][34][35][36][37][38][39] Difficulties in auditory information processing can be manifested by difficulty focusing on auditory stimuli, limited ability to listen continuously in the presence of noise, difficulty in understanding verbal information, limited ability to directly reproduce auditory information and follow orders of varying complexity, poor recognition and interpretation of different voices, reading and writing difficulties, poor speech, and lower academic achievement. Difficulties in the speed of processing auditory information can be reflected in understanding longer instructions and completing tasks related to reading and writing in a limited time (difficulties in understanding telephone voice messages, radio/television news, and movies). The treatment of auditory functions includes speed of auditory information processing, auditory discrimination, auditory sequencing, and auditory integration. Treatment of sensory problems includes increasing tactile awareness, discrimination, and stereognosis through various tactile sensations (hard, soft, moist, dry, smooth, and rough). The program of kinesthetic perception includes differentiation of the body scheme in gestural space, differentiation of facial mimicry, somatic stimulation of the body and its parts, perception exercises in deep sensibility, and exercises for building and restoring the body scheme. Several authors report in their clinical research that somatosensory training can lead to sensory improvement after stroke. [40][41][42][43][44] Namely, after somatosensory training, the somatosensory and functional performance of the upper extremities improves, in terms of the ability to interpret bodily sensations and the feeling of control over one's hand. Somatosensory training eases the use of the hand in daily life activities, which is essential for continued brain and arm recovery. 45 Attention is a means by which limited mental processes are directed to the information and cognitive processes that are most important at a given moment. The most common manifestations of attention difficulties are forgetting instructions, difficulties in the organization of materials and activities, and difficulties in solving tasks.
Disorders in the domain of visual and auditory attention can make it difficult to adopt and perform complex activities of everyday life, primarily due to difficulties in processing information or directing attention to relevant aspects of information. Educational-rehabilitation treatment addresses 4 components of attention: focusing, sustaining, alternating, and dividing attention. The first group of exercises used are auditory attention span exercises (repetition of a series of numbers, letters, words, tones, or rhythms). The second group of exercises is focused on the development of strategies for processing and storing auditory verbal information (stimuli are given that can be grouped according to functional, semantic, and phonological principles). In the treatment of the span of visual attention, exercises are used which consist of a series of visual stimuli (objects, images, or movements) of increasing complexity, which a person should separate from a larger group and reproduce. Selective attention is practiced by applying tasks that require search, monitoring, rapid activation and inhibition of responses, and coordinated motor activity. In the domain of auditory attention, tasks are given in which the respondent should react (by raising a hand, moving a token, or taking notes) to a target word or sound in a series of auditory stimuli. The second level of cognitive treatment includes memory treatment. Memory allows us to retain and find in our experience the information we use in the present. 46 Memory difficulties can occur in the areas of encoding (the initial organization of information for immediate reproduction or for storage and later reproduction), consolidation (the process of transforming information from a temporary, active process into permanent memory), and recollection (recalling stored information from long-term memory). The treatment should focus on metamemory and the creation and application of memory strategies. Exercises are of increasing complexity in different sensory modalities: auditory (verbal and nonverbal), visual (objects, illustrations, and shapes), and tactile-kinesthetic. In the field of auditory memory, exercises for direct and delayed reproduction of verbal and nonverbal information are applied. The skills that the client will stimulate through these exercises are visual-auditory-kinesthetic strategies, associations, following orders, and recalling information. After memory-based cognitive rehabilitation, stroke patients reported fewer memory problems in everyday life immediately after treatment compared with control groups; however, there was no evidence that these beneficial effects persisted over a longer period. 47 The third level of cognitive treatment refers to information processing and thinking, which are responsible for the mental organization and reorganization of information: classification and organizational skills (separation, shaping, reasoning, combining, sorting, ranking, sequencing, and categorizing). The skills that the client will stimulate through exercises at this level are verbal and visual reasoning, thought organization, convergent reasoning, logic, comprehension, integration, reasoning, and problem-solving. The last level includes the treatment of functions that ensure the expression and output of information (self-awareness, goal setting, self-control, flexible problem solving, and speech) and the treatment of difficulties in the field of academic skills (reading, writing, and mathematical skills).
Kim et al 48 suggest that rehabilitation programs for persons with stroke should concentrate on increasing attention, concentration, information processing skills, memory, and patients' judgmental ability in order to improve social cognition.

Treatment of motor skills
Treatment of motor skills includes reeducation of motor skills, exercises for the development of basic and higher motor skills, 49 and exercises aimed at the functional use of the hand and arm. Motor reeducation is a multidimensional therapeutic approach in working with children and adults with motor disorders. Psychomotor reeducation is a specific field in educational and rehabilitation practice that focuses on the development of motor abilities and skills, dexterity, balance, movement coordination and speed control, and the development of perceptual and gnostic abilities and cognitive functions, and contributes to the enrichment of sensory-motor and psychomotor experience. 50 Reeducation of motor skills uses movement as a sensorimotor and psychomotor activity which summarizes the entire developmental course of the relationship to oneself and others that was formed during the sensorimotor and psychomotor relationship between a person and the world. In the case of increased anxiety or other psychosomatic disorders, certain forms of motor reeducation contribute significantly to the reduction of physical and mental difficulties. The goal of reeducation is to stimulate, facilitate, or substitute dysfunctional cognitive mechanisms with more functional mechanisms to improve the client's performance in those domains of behavior in which dysfunction or deficit is manifested. 51,52 Motor reeducation treatment includes general reeducation exercises and specific reeducation exercises. General motor reeducation exercises include massage, hydrotherapy, exercises with passive movements, exercises for defining the experience of body integrity, exercises for defining the experience of gestural space, exercises for independence of movement, exercises for equalizing muscle tone, observation exercises, exercises for stabilizing lateralization, exercises for experiencing and mastering rhythm, exercises for experiencing duration and orientation in time, exercises of coordination of movements, exercises for control of impulsivity, exercises for knowing shape and weight, and exercises for noticing the presence of another. Exercises of specific reeducation of motor skills are directed toward individual and specific clinical pictures and are classified as follows: exercises for agraphia, exercises for alexia, exercises for acalculia and agnosia, exercises for apraxia, and exercises for motor speech disorders caused by neurological damage to the nerves that participate in shaping the voice, combined with specific speech therapy treatment. 53 Exercises for the development of basic motor abilities and skills are used to achieve better posture in different positions and activities, better maintenance of balance at rest and in movement, more successful coordination of movements, greater precision of movement, and more successful bimanual activities. The goals of treatment of higher motor abilities and skills are more precise execution of voluntary non-transitive and transitive movements, more harmonious organization of transitive movements in space and time, better organization of elements in 3-dimensional, 2-dimensional, and graphic space, and more successful coordination of movements with verbal or nonverbal orders.
The following exercises are applied: exercises of melokinetic practice, ideomotor practice, and ideational practice, construction exercises in 2-dimensional and 3-dimensional space according to a reproduction model or by independent construction, exercises of graphomotor activities and visual-motor coordination (copying geometric shapes and drawings, and drawing), and non-verbal movement regulation exercises. 15 Upper extremity apraxia is a common stroke-related disorder that can reduce patients' levels of independence in daily activities and increase their level of disability. 54 The treatment of apraxia that has been studied in the literature, 55-57 especially in stroke patients, 57,58 has proved effective. An adequate apraxia rehabilitation program can improve independence in daily life activities and accelerate the natural recovery process. 59 The educational-rehabilitation program also includes motor skills exercises (locomotor activities like throwing, catching, and pushing; perceptual-motor activities important for the development of fine motor skills, coordination of the upper extremities, and visual-motor coordination; a program for tone equalization and interdependence), exercises aimed at functional hand and arm use by performing coordinated activities required to move and handle objects using fists and hands (pulling, pushing, retrieving, manipulating, throwing, and grasping), and exercises aimed at improving fine motor skills by performing coordinated activities of handling objects, lifting, handling, and releasing by using 1 hand, fingers, and thumb (manipulating small objects). Damage to the upper extremities after a stroke most often involves difficulties in moving and coordinating the arm, hand, and fingers and often causes difficulties in performing daily activities such as eating, dressing, and washing. Dysfunction of the upper extremities after stroke is the biggest reason for limitations in the functional use of the hand in the daily activities of patients and their socialization. [60][61][62] In stroke patients, the upper extremities recover functionally more slowly than the lower extremities. 63 Relaxation Relaxation refers to a specific physiological state that is completely opposite to the way the body reacts when under stress or during a panic attack. Regular practice of deep relaxation can alleviate generalized anxiety and the frequency and intensity of panic attacks, prevent stress accumulation, increase energy and productivity, improve concentration and memory, reduce insomnia and fatigue, reduce a number of psychosomatic disorders, 64 promote muscle relaxation, and improve concentration and attention. 65 Some of the common methods for achieving a state of deep relaxation are: guided imagination, meditation, progressive muscle relaxation, self-training, yoga, abdominal breathing, soothing music, and the like. 64 Guided imagination is the process of imagining objects, spaces, people, and situations, that is, stimulating visual, auditory, olfactory, taste, and proprioceptive sensations that are complementary to the sensory and aesthetic criteria of an individual. In this way, in addition to encouraging pleasure and satisfaction, more adequate imaginary systems of dealing with reality can be encouraged. 66 Imagination also affects almost all important physiological mechanisms, such as respiration, pulse, blood pressure, sexual function, and the immune response, and can be used to relieve anxiety, depression, and stress symptoms. 
[67][68][69] Breathing exercises are the basis of many different relaxation techniques with different therapeutic effects. 64,[70][71][72] Treatment of adaptive skills Adaptive behavior is a set of conceptual (decision-making, planning, and implementation of activities and verbal communication), social (reactions to social expectations/rules, interpersonal communication, self-esteem, and responsibility), and practical skills (activities of everyday life) which a person learns in order to function in everyday life. The level of adaptive behavior is conditioned by a number of personal and environmental factors, so when considering the approach to treatment, it is necessary to assess all potential factors that could be important for the development and modulation of various adaptive skills. As a result of impaired body structures and functions after a stroke, a person may have difficulty performing basic activities of daily living, which can lead to limitations in participation (e.g., in meaningful activities, community, family, work, social, and civic life), [73][74][75] regardless of the severity of the stroke. 76 Assessing and diagnosing adaptive skills are of great importance because adaptive skills help a person in performing everyday life activities. 77 The treatment of adaptive skills aims to achieve an optimal level of independence in everyday life. In order for people with neurological diseases to have a better quality of life, it is very important to enable the acquisition of practical skills and to focus rehabilitation on strengthening independence and autonomy. Disease information and patient counseling Informing and counseling patients about the disease are carried out with the aim of changing health behavior and raising the individual's level of motivation, in order to increase his or her willingness to cooperate in rehabilitation and readiness to achieve the goals of successful rehabilitation. Motivation means the mental readiness of an individual to accept information. It depends on the relationship with the person conducting the education and on that person's ability to convey the message in an understandable and appropriate way. Most important for informing and educating patients is the interaction-communication relationship between educators and patients. At the very core of this relationship is empathy, understood as the educator's ability to put himself or herself in the role of the patient and to provide support that enables the patient to make his or her own decisions. Information and learning depend on the characteristics of the patient, such as his prior knowledge of the disease, resistance to the disease or its acceptance, and the support of the family and social environment. Positive transference and countertransference are prerequisites for the success of the transmission of educational messages. Counseling, as a 2-way relationship of patient training, is essential for the independent and successful solving of individual problems. It can be used in agreement on behavior change (in eating habits, smoking, and physical activity) and agreement on change of attitudes (toward illness, surgery, and social network). 78 People react differently to the realization that they are seriously ill: they become disoriented and confused, their emotions numb, they distance themselves from the environment, and things that used to make sense become irrelevant. Therefore, support in coping with the disease is important, and sometimes crucial, for building a constructive response to the new situation. 
The patient is encouraged to think only about the next step and about small shifts, but special attention is paid to each achievement. 79 Providing information to patients and caregivers improves their knowledge of stroke and increases patient satisfaction with some of the information they have received about stroke. An effect on reducing patients' depression was also found. Providing information in a way that involved patients and caregivers more actively, for example, when given more opportunities to ask questions, had a greater effect on patients' mood than 1-time provision of information. There is little evidence that providing information has effects on independence or social activities. 80 Since stroke carries with it two types of consequences (motor and nonmotor), the rehabilitation program should be focused equally on both areas. Today, rehabilitation is more focused on the treatment of motor consequences, while nonmotor ones are not given enough attention although they are an integral part of the clinical picture, have a high prevalence, and significantly incapacitate patients with stroke. The nonmotor consequences of stroke, in addition to the cognitive ones discussed above, include disorders of speech, reading, writing, and arithmetic, neglect phenomena, anosognosia, agnosia, dysphagia, sleep disorders, as well as anxiety, depression, post-traumatic stress disorder, etc. 81 All these disorders should be included in the comprehensive rehabilitation process, which implies the multidisciplinarity of the team, which must have a physiotherapist, speech therapist, neuropsychologist, and occupational therapist in addition to the educator-rehabilitator. Conclusion Rehabilitation of patients with stroke should be focused on the person as a whole and on achieving optimal physical, mental, and social potentials. An individual and holistic educational-rehabilitation approach in the rehabilitation of stroke patients, with a multidisciplinary team of experts, covers the complex motor and non-motor consequences of the disease, which affect the functional recovery and quality of life of the patient. In this review, the focus is only on some aspects of rehabilitation such as treatment of cognitive and motor disorders, treatment of adaptive skills, relaxation issues, and informing and counseling patients from the perspective of an educational rehabilitator with practical experience in this area of rehabilitation. Funding: The authors declared that this study has received no financial support.
2022-08-12T06:18:36.384Z
2022-08-11T00:00:00.000
{ "year": 2022, "sha1": "18901d750c5c7b697c109f3749b206f82ee2ed23", "oa_license": "CCBY", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "45e9201e90957e737aa61708b112d77d75a02939", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
218928327
pes2o/s2orc
v3-fos-license
Remediation of Cobalt-Contaminated Soil Using Manure, Clay, Charcoal, Zeolite, Calcium Oxide, Main Crop (Hordeum vulgare L.), and After-Crop (Sinapis alba L.): This study was undertaken to determine the effects of various substances applied to soil contaminated with cobalt (Co) on the mass and content of cobalt in the main crop—spring barley (Hordeum vulgare L.)—and the after-crop—white mustard (Sinapis alba L.). Manure, clay, charcoal, zeolite, and calcium oxide were used for phytostabilization. Cobalt was applied in the form of CoCl2 in doses of 0, 20, 40, 80, 160, and 320 mg/kg soil. Amendments in the form of manure, clay, charcoal, and zeolite were applied in an amount of 2% in relation to the weight of the soil in a pot, with calcium oxide at a dose of 1.30 g CaO/kg of soil. The highest cobalt doses resulted in a significant reduction in the yield of both plants and in the tolerance index for cobalt. Increasing contamination of soil with cobalt resulted in a major and significant increase in its content in plants and a reduction in the cobalt translocation factor in both plants. Amendments used in phytostabilization had a significant effect on the growth and development of the plants and on the content of cobalt in plants. The strongest effect on the yield of above-ground parts was exerted by manure (both plants) and calcium oxide (white mustard), while the strongest effect on the weight of roots was exerted by calcium oxide (both plants) and zeolite (white mustard). The addition of manure, zeolite and calcium oxide to soil caused an increase of the tolerance index for both plants, while the addition of clay only had a positive effect for white mustard. All substances used in phytostabilization (except zeolite) decreased the cobalt content of roots, and manure and calcium oxide in above-ground parts of spring barley; manure and zeolite only in above-ground parts, and calcium oxide in both organs of white mustard. Most of them also reduced the bioconcentration of cobalt in above-ground parts, calcium oxide decreased cobalt content in roots of both plants, and manure in roots of spring barley. The effect on cobalt translocation was less clear, but most substances used in phytostabilization increased the transfer of cobalt from the soil to plants. White mustard had a higher ability to accumulate cobalt than spring barley. Introduction Soil degradation has now become a global problem, primarily due to deforestation, erosion and, particularly, contamination with trace elements [1]. The occurrence of excessive contents of trace elements in the soil contributes to their intense uptake by microorganisms and plants, which enables those elements to penetrate into other living organisms [2]. Therefore, it poses a significant hazard to the proper growth and development of plants and also to the subsequent links of the food chain, with humans at the top of the chain [3]. The occurrence of large amounts of trace elements in the soil may also have a toxic effect on plants. An example of such an element is cobalt. The natural cobalt content of soil is up to 40 mg/kg [4]. Cases of exceeding its permissible content in the soil primarily result from the intensive extraction of this element due to its wide application in various industries [5]. The highest contamination of the soil environment associated with its extraction is found in Central-Southern Africa (DR Congo and Zambia) [6]. In Europe, since cobalt resources are small, its permissible content in the soil is exceeded more rarely. 
The areas that are most exposed to the occurrence of high cobalt contents of the soil of anthropogenic origin are mainly industrial and transport areas [6]. Cobalt is taken up by plants in various forms, most often in the form of di- and trivalent cations [7]. Cobalt is characterised by significant mobility in plants, but it depends on the species. Usually, its largest amounts accumulate in the roots, smaller amounts in the stems, and the smallest amounts in the leaves [8]. Cobalt toxicity is closely related with the acidity of the soil. The most frequent symptoms resulting from the high accumulation of cobalt in plants are reduced plant growth and the emergence of necroses, as well as disorders in the uptake of nutrients. Cobalt toxicity for plants is most frequently found in light soils with a poorer sorption complex than in heavy soils [9]. To immobilise or eliminate trace elements from soils, various types of phytoremediation treatments which make use of the properties of plants are applied [10,11]. Phytostabilisation is one of the methods of phytoremediation. Phytostabilisation is based on the application of various amendments to soil to reduce the availability and uptake of trace elements by plants [12]. The research hypotheses were as follows: soil contamination with cobalt has a negative effect on plants, and amendments reduce the effect of cobalt on plants. In view of the above, the study was undertaken to determine the effects of various substances applied to soil contaminated with cobalt (Co) on the mass and content of cobalt in the main crop-spring barley (Hordeum vulgare L.)-and the after-crop-white mustard (Sinapis alba L.). Manure, clay, charcoal, zeolite, and calcium oxide were used for phytostabilization. Methodological Design The study was based on a pot experiment carried out at a greenhouse owned by the University of Warmia and Mazury in Olsztyn (Poland). Soil was used whose granulometric composition corresponded to that of loamy sand. The chemical composition of the soil is presented in Table 1. The methods of soil analysis prior to setting up the experiment were provided in a previously published study [13]. The experiment was carried out in polyethylene pots, each containing 9 kg soil, in six series (1-without amendments, 2-with added bovine manure granules, 3-with added clay, 4-with added charcoal, 5-with added zeolite of the clinoptilolite type, and 6-with added 50% calcium oxide). Cobalt was applied in the form of CoCl2 in doses of 0, 20, 40, 80, 160, and 320 mg/kg soil. Amendments in the form of manure, clay, charcoal and zeolite were applied in an amount of 2% in relation to the weight of the soil in a pot, with calcium oxide at a dose of 1.30 g CaO/kg of soil. The chemical composition of the amendments is shown in Table 2. In addition, the following was added on a one-off basis to the soil in each pot: 100 mg N (NH4O) per kg of soil. The experiment was carried out in three replications. The applied soil amendments were primarily selected due to their frequent use in remediation treatments, while cobalt doses were based on the Polish country standards which are currently in force (Regulation of the Minister of the Environment of 9 September 2002 on the quality standards for soil and quality standards for land, and the Regulation of the Minister of the Environment of 1 September 2016 on the procedures for the assessment of land surface contamination). In the prepared soil, seeds of spring barley (Hordeum vulgare L.) of the Mercada variety were sown. 
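The layout described above is a simple factorial design. As a concrete illustration (this is not code from the study, which did not publish any), the grid of treatments can be enumerated as follows:

```python
from itertools import product

# Treatment factors as described in the experimental design
amendments = ["none", "manure", "clay", "charcoal", "zeolite", "calcium oxide"]
co_doses_mg_per_kg = [0, 20, 40, 80, 160, 320]  # applied as CoCl2
replications = 3

# Enumerate every pot in the 6 x 6 x 3 factorial layout
pots = [
    {"amendment": a, "co_dose": d, "replicate": r}
    for a, d, r in product(amendments, co_doses_mg_per_kg, range(1, replications + 1))
]

print(len(pots))  # 108 pots, each with 9 kg of soil
print(pots[0])    # {'amendment': 'none', 'co_dose': 0, 'replicate': 1}
```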
The sowing density was 15 plants per pot. After 52 days from sowing, in the heading stage, the above-ground parts and roots of spring barley were harvested. The above-ground parts of spring barley were cut, and the roots were separated from the soil on the sieve. The soil was returned to each of the pots. This was followed by the sowing of seeds of white mustard (Sinapis alba L.) of the Bamberka variety to the same soil. The after-crop was cultivated with a sowing density of eight plants per pot. The above-ground parts and roots of white mustard were harvested 36 days after sowing, following the completion of the flowering stage. Main environmental parameters were typical of the period May-July (average air temperature 15.6 °C, average air humidity 76.5%, length of daytime from 13 h 3 min to 16 h 31 min). The soil moisture was kept at 60% of water capillary capacity using distilled water. Cultivation of two plants (main crop and after-crop) resulted from the application of a system typical for field crops. Spring barley was primarily selected because it is sensitive to high contamination of soils with trace elements; in turn, white mustard is classified as a hyperaccumulator plant and it is recommended as an after-crop following the cultivation of cereals. Methods of Laboratory and Statistical Analyses During the harvest, the average weights of particular organs of the plants in each pot were determined. The collected plant material was dried at a temperature of 60 °C and then ground. Next, it was "wet" mineralised in concentrated nitric acid (HNO3 of analytical grade, density of 1.40 g/cm3) in HP500 Teflon vessels in a MARS 5 microwave oven (CEM Corporation, Matthews, NC, USA). Total content of cobalt was determined using the method of flame atomic absorption spectroscopy (FAAS) in an air-acetylene flame [14]. The study results were compared with certified reference material NCS ZC 73030, originating from the Chinese National Analysis Centre (Beijing, China) for Iron and Steel 2014 and with Fluka standard solutions with the following symbol: Co 119785.0100. Additionally, the following factors were calculated for cobalt: bioconcentration factor $\mathrm{BCF} = C_{\text{plant part}}/C_{\text{soil}}$ for the above-ground parts and the roots of both test plants, translocation factor $\mathrm{TF} = C_{\text{above-ground parts}}/C_{\text{roots}}$, and transfer factor $\mathrm{TFr} = C_{\text{plant}}/C_{\text{soil}}$, in which $C$ denotes the content of a particular element, expressed in mg/kg [15,16]. Moreover, calculations were made of the values of the tolerance index Ti using the formula: Ti = yield of the plant biomass from the Co-enriched soil/yield of the plant biomass from the control soil. Ti < 1 indicates a negative effect of Co, Ti = 1 indicates no effect of Co, and Ti > 1 indicates a positive effect of Co on the growth and development of the tested plants. The test results were statistically processed using ANOVA two-factor variance analysis and the Duncan test, as well as correlation coefficients, from the Statistica 13 package (StatSoft, Inc., Tulsa, OK, USA) [17]. The least squares deviation (LSD) and Tukey's HSD (honest significant difference) were applied as post hoc tests. Significance levels * P ≤ 0.05 and ** P ≤ 0.01 were used to assess the significance of differences between the tested factors. 
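Since all four indices are plain ratios of measured values, they are straightforward to compute. The sketch below does so for one hypothetical pot; the input numbers are invented for illustration and are not data from this study:

```python
def tolerance_index(yield_treated, yield_control):
    # Ti < 1: negative effect of Co; Ti = 1: no effect; Ti > 1: positive effect
    return yield_treated / yield_control

def bcf(c_plant_part, c_soil):
    # Bioconcentration factor for one organ (above-ground parts or roots)
    return c_plant_part / c_soil

def tf(c_above_ground, c_roots):
    # Translocation factor: above-ground cobalt content relative to roots
    return c_above_ground / c_roots

def tfr(c_plant, c_soil):
    # Transfer factor from the soil to the plant
    return c_plant / c_soil

# Hypothetical example values (mg/kg for contents, g/pot for yields)
c_soil, c_roots, c_shoots = 80.0, 24.0, 6.0
print(tolerance_index(12.5, 20.0))   # 0.625 -> growth inhibited by Co
print(bcf(c_shoots, c_soil))         # shoot bioconcentration
print(tf(c_shoots, c_roots))         # translocation shoots/roots
print(tfr(c_shoots + c_roots, c_soil))  # whole-plant transfer (illustrative)
```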
Results An analysis of the results presented in this paper clearly indicates that the applied doses of cobalt and the substances applied in phytostabilization have a significant effect on the productivity of spring barley and white mustard, the tolerance index, the cobalt content, and the bioconcentration, translocation, and cobalt transfer coefficients. In the series without amendments, the highest dose of cobalt (320 mg Co/kg soil) resulted in a decrease in the weight of the above-ground parts by almost 100% and of the roots of spring barley by 90% and completely prevented the growth and development of white mustard, as compared to the soil not contaminated with this element. A lower dose of cobalt (160 mg Co/kg soil) decreased the weight of the above-ground parts and roots of the main crop by 98% and 90% and of the after-crop by 99% and 90%, respectively (Table 3). Amendments used in phytostabilization had a significant and positive effect on the weight of the roots of both plants. The application of most of the substances had a positive effect on the yield of the above-ground parts of white mustard. Manure and calcium oxide had a positive effect on the yield of the above-ground parts of spring barley. The application of manure had the strongest effect on an increase in the average yield of the above-ground parts, by 58%. The application of calcium oxide had the strongest effect on the weight of the roots of spring barley (an increase of 171%). The application of manure and calcium oxide had the strongest effect on the yield of the above-ground parts (an increase of 50%). The application of calcium oxide and zeolite had the strongest effect on the weight of the roots of white mustard (an increase of 68-69%), compared to the series without amendments. The beneficial effect of manure and calcium oxide on the increase in the yield of both test plants was particularly confirmed in the soil with doses of 160 and 320 mg Co/kg soil. The addition of charcoal to the soil had a negative (but small) effect on the yield of the above-ground parts of white mustard and, to a small extent, of spring barley (Table 3). In the series without amendments, increasing doses of cobalt decreased the tolerance index (Ti) from 1 to 0.006 for spring barley and from 1 to 0 for white mustard (Figure 1). The application of all substances to the soil resulted in an increase in the tolerance index. Of all the applied substances, manure and calcium oxide exhibited the strongest effect on the increase in the tolerance index Ti. A value of Ti below 1 indicates inhibition of the growth of plants, which was observed in all cobalt-contaminated series. In the soil without amendments, the increasing contamination of the soil with cobalt resulted in a significant increase in its content in both plants (Table 4). Compared to the control (non-contaminated with cobalt), the increase was up to 38-fold in the above-ground parts and 816-fold in the roots of spring barley, and 397-fold in the above-ground parts and 16-fold in the roots of white mustard. The positive effect of the substances applied to the soil on reducing the cobalt content was most evident for the roots of spring barley (Table 4). Least squares deviation (LSD) for CD-cobalt dose, AT-amendment type, CD·AT-interaction; significant at ** P ≤ 0.01, * P ≤ 0.05; r-correlation coefficient. Homogeneous groups denoted with letters A-E were calculated for cobalt dose, with numbers I-III for amendment type and with letters a-h for interaction between cobalt dose and amendment type. 
All of the substances (except zeolite) used in phytostabilization had a beneficial effect on the cobalt content of the roots of spring barley (Table 4). Under the influence of these substances, the decrease in the content of cobalt in the roots of spring barley varied on average from 11% (charcoal) to 42% (manure), compared to the series without amendments. An analogous effect was exerted by manure and calcium oxide through decreasing the content of cobalt (by 44% and 31%, respectively) in the above-ground parts of spring barley, and by manure, zeolite and calcium oxide through decreasing its accumulation (on average by 41%, 29%, and 74%) in the above-ground parts of white mustard. In other cases, the substances either had no effect or contributed to an increase in cobalt accumulation in plants. The increasing effect of amendments on cobalt content in plants in some objects of the experiment was stronger in white mustard than in spring barley. In the soil without amendments, a dose of 80 mg Co/kg soil had the strongest effect on the growth of the bioconcentration factor (BCF) for cobalt in the above-ground parts (a five-fold increase) and in the roots (a 53-fold increase) of spring barley and in the above-ground parts (a 36-fold increase) of white mustard. A dose of 40 mg Co/kg soil had the strongest effect on its roots (a two-fold increase) (Figures 2 and 3). Higher cobalt doses resulted in a decrease in the bioconcentration factor (BCF) in the tested organs of both plants, compared to the previous levels of Co contamination. The application of most substances to the soil for phytostabilization of cobalt resulted in an increase in the BCF value in the roots and a decrease in the above-ground parts of the after-crop. Manure and calcium oxide led to a decrease in the BCF value in the above-ground parts and roots of the main crop, in relation to the series without amendments. Other substances had an opposite effect on the BCF value in this plant. The most significant increases in the BCF were noted in the roots of spring barley (by an average of 42-44%) following the introduction of charcoal and zeolite, and in the roots of white mustard (257%) after the application of manure. A small increase (7%) in the above-ground parts of white mustard after the application of clay was observed. Both in the first (the above-ground parts by 45% and the roots by 27%) and in the second (the above-ground parts by 83%, the roots by 36%) test plant, calcium oxide had the strongest effect on a decrease in the BCF. In the series without amendments, a dose of 320 mg Co/kg soil resulted in a 95% decrease in the translocation coefficient of this element (TF) in spring barley, while 80 mg Co/kg soil resulted in a 25-fold increase in white mustard, as compared to the control soil (Figure 4). All substances (except clay for spring barley) decreased the value of this coefficient in both plants. Manure had the strongest effect on a decrease in the TF, in the main crop on average by 74%, and in the after-crop by 95%, in relation to the series without amendments. In the series without amendments, the coefficient of cobalt transfer from the soil to the plants (TFr) increased the most in spring barley (by 17 times) and in white mustard (by seven times) under the influence of 80 mg Co/kg soil (Figure 5). Higher cobalt doses reduced this coefficient in the plants. 
The application of calcium oxide to both plants, and of manure to white mustard, reduced (while the addition of the remaining substances increased) the value of the TFr. Of the used amendments, charcoal and zeolite had the greatest effect on an increase in the value of the TFr in spring barley (on average, by 35% and 37%, respectively), while clay had the greatest effect on white mustard (by 53%), as compared to the series without amendments. Calcium oxide most strongly decreased the value of the TFr in both plants. Under its influence, the TFr decreased by an average of 31% in spring barley and by 72% in white mustard. Discussion The toxicity of trace elements to plants is determined by many factors, which mainly include the species of a particular plant, the type of soil and its physico-chemical properties, the type of an element, and primarily their content in the soil [18]. Cobalt is a trace element which, at low concentrations in the soil, has a positive effect on plant growth, while at high concentrations it exhibits adverse effects [9]. The highest yield of the first and the second test plants was reached in soil non-contaminated with cobalt and with an added dose of 20 mg/kg soil, while in soils contaminated with high doses of this element, the growth of plants was partially or completely stopped. The above relationships were also confirmed in other authors' studies. Khalid and Ahmed [19] found that the application of 30 mg Co/L had a positive effect on the growth of Nigella sativa L.; Sarma et al. [20] found that the application of 100 and 200 mg Co in a cultivation nutrient solution and sand using Hoagland solution had a stimulating effect on the height, number and surface area of the leaves and the wheat dry matter content, while 300, 400, and 500 mg Co exhibited a harmful effect. An adverse effect of cobalt doses of over 300 mg on chlorophyll a and b contents (and on their stability index) was also found [20]. According to Wallace et al. [8], an increase in cobalt content in the leaves to 43 µg Co/g and 142 µg Co/g dry matter significantly contributes to the emergence of severe chloroses. The inhibition of the increase in biomass of Hordeum vulgare L., Brassica napus L., and Lycopersicon esculentum L. under the influence of cobalt contamination is demonstrated by a study by Li et al. [21]. According to Shaukat et al. [22], the stoppage of seed germination may result from the occurrence of an osmotic effect caused by a high cobalt content of the soil. The reduction in the negative effects of high contamination of soil on plants can be obtained through the introduction of various mineral and organic substances into the soil in the phytostabilization process. A beneficial effect of charcoal on the growth and development of various plant species was demonstrated by Kuzyakov et al. [23], of zeolites and dolomite on maize by Kovacevic and Rastija [24], and of calcium oxide and compost on maize by Wyszkowski and Radziemska [25] and Wyszkowski and Ziółkowska [26,27], on oats by Wyszkowski and Ziółkowska [26], and on yellow lupine by Wyszkowski and Ziółkowska [27]. 
Phytostabilization of soil contaminants through the application of various substances has a significant effect on the content of trace elements in the soil and, at the same time, on the uptake by plants [28]. According to Pichtel and Bradway [29], the introduction of bovine manure to the soil results in a reduction in heavy metal uptake from the soil by plants. With the short duration of its mineralisation, the availability of nutrients increases, which may, at the same time, explain its favourable effect on the growth and quality of plants [30]. In turn, Skwaryło-Bednarz et al. [31] found that an appropriate content of basic nutrients provided during fertilisation has an effect on the yield of plants. The content of nutrients in plants is strongly correlated with the physico-chemical properties of the soil. Application of an alkaline fertiliser to the soil increased the pH and reduced solubility in the soil and the availability of cobalt and other trace elements for plants [32,33]. The cobalt content in the soil and other soil properties after the plants cultivation in this experiment were published in other papers [34,35]. Kosiorek and Wyszkowski [34] found that following the introduction of calcium oxide, the pH value of the soil increased (>7). They also noted that manure and calcium oxide used in phytostabilization had the most favourable effect on an increase in the yield and had a significant effect on a reduction in the contents of cobalt, lead, zinc, copper, and manganese in selected parts of the plants, which confirms the above assumptions. According to Ciećko et al. [36], manure, brown coal, and calcium oxide are effective in reducing cadmium uptake by plants. In an experiment by Sivitskaya and Wyszkowski [13], the introduction of calcium oxide resulted in an increase in the contents of most trace elements in maize. This was also confirmed by the authors' own study, since the greatest increase in cobalt content of both test plants was noted in the series with the addition of calcium oxide, which resulted in the greatest increase in the pH value of the soil and could have a toxic effect on the development of the test plants. The favourable effect of zeolites used in phytostabilization on the increase in the pH value of the soil (and thus an increase in the immobilisation of micronutrients) was also demonstrated by Querol et al. [37]. Zeolite applied in this study failed to exert the assumed effects in the remediation of the soil contaminated with cobalt, as it led to an increase in the cobalt content in the above-ground parts and in the roots of the main crop. Zeolites have strong adsorption properties in relation to both cationic forms and to anionic trace elements occurring in the soil [38]. Deenik et al. [39] reported that the content of volatile substances in charcoal has a significant effect on the reduction in the growth of Lactuca sativa L. and Zea mays L., cultivated in tropical soils under greenhouse conditions, and on nitrogen transformations in the soil. As reported by Yamato et al. [40], the introduction of charcoal to the soil reduces the content of trivalent aluminium cations in the soil and exhibits a significantly more favourable effect on the properties of a low-quality soil than on highly fertile soils. It resulted in an almost 50% increase in the yield of Zea mays L., Vigna unguiculata L., and Arachis hypogaea L. According to Kolb et al. 
[41], the presence of charcoal in the soil contributes to an increase in the activity of bacteria and microorganisms in the soil, and to an increase in the availability of nutrients necessary to plants. In the authors' own study, the introduction of charcoal into the soil resulted in a slight decrease in the yield of the above-ground parts of spring barley and white mustard. It also had an effect on the increase in cobalt content of the above-ground parts and the roots of white mustard. The roots of the plants, as compared to the stems and leaves, accumulate the greatest amounts of cobalt from the soil [8]. Sarma et al. [20] found that the introduction of 100 mg Co/kg and 200 mg Co/kg to the nutrient solution has a considerably stronger effect on the cobalt content of the grains of wheat than a dose of 500 mg of this element per kg of soil. In the authors' own study, cobalt accumulated in greater amounts in the roots than in the above-ground parts of the barley. In turn, a reverse trend was demonstrated for white mustard. According to Abdel-Sabour and Al-Salama [42], the cultivation of Brassica napus may eliminate up to 40% of cobalt from the soil. Tappero et al. [43], using Alyssum murale L., indicated the value of the bioconcentration factors (BCF) for cobalt at a level of 532 (the Co + Ni series) and 702 (the Co + Ni + Zn series) and the value of the translocation factor, expressed by the ratio between the cobalt content of the above-ground parts and this content in the roots (3.0). In this study, the highest average translocation factor of 2.379 in the series without amendments was found in the white mustard. Therefore, it can be concluded that this plant is more effective than spring barley in the process of remediation of a soil contaminated with cobalt, for which the average TF in an analogous series amounted to 0.658. The majority of amendments used in phytostabilization in our study had a positive effect, reducing the negative influence of cobalt contamination on both plants, especially spring barley. Conclusions In the series without amendments, the highest cobalt doses resulted in a significant reduction in both the yield of the above-ground parts and the roots of both plants and in the tolerance index. In the series without amendments, increasing contamination of the soil with cobalt resulted in a major and significant increase in its content in the tested organs of the plants (the highest levels of cobalt were in the above-ground parts of the main crop, and the lowest levels were in the roots of the after-crop), and a reduction in the translocation factor in both plants. Medium doses of cobalt had a similar effect on the bioconcentration and transfer factors, in contrast to high doses of cobalt. All substances used in phytostabilization reduced the adverse effect of cobalt. The strongest effect on the yield of the above-ground parts was exerted by manure (both plants) and calcium oxide (white mustard), while the strongest effect on the weight of the roots was exerted by calcium oxide (both plants) and zeolite (white mustard). The addition of manure, zeolite and calcium oxide to the soil had a positive effect through increasing the tolerance index for both plants, while the addition of clay only had a positive effect for white mustard. 
All substances used in phytostabilization (except zeolite) decreased cobalt content of the roots, and manure and calcium oxide decreased the cobalt content of the above-ground parts of spring barley; manure and zeolite decreased the cobalt content only in the above-ground parts, and calcium oxide decreased cobalt content in both organs of white mustard. Most of them also reduced the bioconcentration of cobalt in the above-ground parts, calcium oxide decreased cobalt content in the roots of both plants, and manure decreased cobalt content in the roots of spring barley. The effect on the cobalt translocation was less clear, but most substances increased the transfer of cobalt from the soil to the plants. White mustard had higher ability to accumulate cobalt than spring barley. It can be used for remediation of areas which are contaminated with this element. Author Contributions: M.W. and M.K. framed the methodology, conceived the ideas, designed the paper, wrote the paper, prepared the tables and figures, and collected the data. M.W. reviewed the manuscript. All authors contributed significantly to the discussion of the results and the preparation of the manuscript. All authors have read and agreed to the published version of the manuscript. Funding: This study was supported by the Ministry of Science and Higher Education funds for statutory activity. Project financially supported by Minister of Science and Higher Education in the range of the program entitled "Regional Initiative of Excellence" for the years 2019-2022, Project No. 010/RID/2018/19, amount of funding 12.000.000 PLN.
2020-05-21T00:10:52.457Z
2020-05-11T00:00:00.000
{ "year": 2020, "sha1": "477375a8ecbe2b56aeacf3e666aee6ead4e9dad8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-163X/10/5/429/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "ca62f33b005827a0d69aea1e6f0f3a3700c0b919", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
18756156
pes2o/s2orc
v3-fos-license
Some Generalized Lacunary statistically difference double semi-normed sequence spaces defined by Orlicz function In this article, we have introduced the idea of the statistically convergent generalized difference lacunary double sequence spaces $[w_2^{\theta}(M, \Delta^n, p, q)]$ and $[w_2^{\theta}(M, \Delta^n, p, q)]_0$, defined over a seminormed space (X, q). Also we have studied some basic properties and obtained some inclusion relations between them. Introduction The concept involving statistical convergence plays a vital role not only in pure mathematics but also in other branches of mathematics, especially in information theory, computer science and biological science. Let $\ell_\infty$, $c$ and $c_0$ be the Banach spaces of bounded, convergent and null sequences $x = (x_k)$, respectively. In order to extend the notion of convergence of sequences, statistical convergence was introduced by Fast (1951) and Schoenberg (1959) independently. Later on it was further investigated by Fridy (1985), Mursaleen and Mohiuddine (2009a and b), Mohiuddine and Danish Lohani (2009), Mohiuddine et al. (2010), Šalát (1980), Tripathy (2003), Tripathy and Sen (2001) and many others. The idea depends on the notion of density (natural or asymptotic) of subsets of $\mathbb{N}$. A subset $E$ of $\mathbb{N}$ is said to have natural density $\delta(E)$ if $\delta(E) = \lim_{n \to \infty} \frac{1}{n}\,|\{k \le n : k \in E\}|$ exists; a sequence $x = (x_k)$ is statistically convergent to $L$ if, for every $\varepsilon > 0$, the set $\{k \in \mathbb{N} : |x_k - L| \ge \varepsilon\}$ has natural density zero. Kizmaz (1981) introduced the notion of difference sequence spaces as follows: $X(\Delta) = \{x = (x_k) : (\Delta x_k) \in X\}$, where $\Delta x_k = x_k - x_{k+1}$, for $X = \ell_\infty$, $c$ and $c_0$. Later on, the notion was generalized by Et and Çolak (1995) as follows: $X(\Delta^n) = \{x = (x_k) : (\Delta^n x_k) \in X\}$ for $X = \ell_\infty$, $c$ and $c_0$, where $\Delta^0 x_k = x_k$ and $\Delta^n x_k = \Delta^{n-1} x_k - \Delta^{n-1} x_{k+1}$, and also this generalized difference notion has the following binomial representation: $\Delta^n x_k = \sum_{v=0}^{n} (-1)^v \binom{n}{v} x_{k+v}$. Subsequently, difference sequence spaces were studied by Esi (2009a and b), Esi and Tripathy (2008), Tripathy et al. (2005) and many others. Lindenstrauss and Tzafriri (1971) used the idea of an Orlicz function to construct the sequence space $\ell_M = \{x \in w : \sum_{k=1}^{\infty} M(|x_k|/\rho) < \infty \text{ for some } \rho > 0\}$. The space $\ell_M$ is closely related to the space $\ell_p$, which is an Orlicz sequence space with $M(x) = x^p$, $1 \le p < \infty$. In a later stage different Orlicz sequence spaces were introduced and studied by Tripathy and Mahanta (2004), Esi (1999, 2009a and b, 2010), Esi and Et (2000), Parashar and Choudhary (1994) and many others. The following well-known inequality will be used throughout the article. Let $p = (p_k)$ be any sequence of positive real numbers with $0 < p_k \le \sup_k p_k = H < \infty$ and let $C = \max(1, 2^{H-1})$; then $|a_k + b_k|^{p_k} \le C\,(|a_k|^{p_k} + |b_k|^{p_k})$ for all $k$ and all $a_k, b_k \in \mathbb{C}$. Let $w_2$ denote the set of all double sequences of complex numbers. By the convergence of a double sequence we mean convergence in the Pringsheim sense, that is, a double sequence $x = (x_{k,l})$ has Pringsheim limit $L$ (denoted by $P\text{-}\lim x = L$) provided that, given $\varepsilon > 0$, there exists $N \in \mathbb{N}$ such that $|x_{k,l} - L| < \varepsilon$ whenever $k, l > N$ (Pringsheim, 1900). We call such a sequence "P-convergent". We shall denote the space of all P-convergent sequences by $c_2$. The double sequence $x = (x_{k,l})$ is bounded if and only if there exists a positive number $M$ such that $|x_{k,l}| < M$ for all $k$ and $l$. We shall denote the space of all bounded double sequences by $\ell_\infty^2$. The zero single sequence will be denoted by $\bar{\theta} = (0, 0, 0, \ldots)$ and the zero double sequence will be denoted by $\bar{\theta}_2 = (0)_{k,l}$. The notion of asymptotic density extends to subsets of $\mathbb{N} \times \mathbb{N}$, and the notion of statistically convergent double sequences was introduced by Mursaleen and Edely (2003) and Tripathy (2003) independently. 
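As a quick numerical check of the binomial representation just quoted (the check is ours, not part of the paper), the recursive and binomial forms of the difference operator agree on any test sequence:

```python
from math import comb

def delta_recursive(x, n):
    # Delta^n x_k = Delta^(n-1) x_k - Delta^(n-1) x_(k+1), with Delta^0 x = x
    for _ in range(n):
        x = [x[k] - x[k + 1] for k in range(len(x) - 1)]
    return x

def delta_binomial(x, n):
    # Delta^n x_k = sum_{v=0}^{n} (-1)^v C(n, v) x_(k+v)
    return [
        sum((-1) ** v * comb(n, v) * x[k + v] for v in range(n + 1))
        for k in range(len(x) - n)
    ]

x = [k ** 3 for k in range(12)]            # test sequence x_k = k^3
assert delta_recursive(x, 3) == delta_binomial(x, 3)
print(delta_binomial(x, 3))                # constant [-6, ..., -6] for a cubic
```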
A double sequence $(x_{k,l})$ is said to be statistically convergent to $\ell$ in Pringsheim's sense if, for every $\varepsilon > 0$, the set $\{(k,l) \in \mathbb{N} \times \mathbb{N} : |x_{k,l} - \ell| \ge \varepsilon\}$ has double natural density zero. The double sequence $\theta_{r,s} = \{(k_r, l_s)\}$ is called a double lacunary sequence if there exist two increasing sequences of integers $(k_r)$ and $(l_s)$ such that $k_0 = 0$, $h_r = k_r - k_{r-1} \to \infty$ as $r \to \infty$, and $l_0 = 0$, $\bar{h}_s = l_s - l_{s-1} \to \infty$ as $s \to \infty$. Notations: $k_{r,s} = k_r l_s$, $h_{r,s} = h_r \bar{h}_s$, $I_{r,s} = \{(k,l) : k_{r-1} < k \le k_r \text{ and } l_{s-1} < l \le l_s\}$. The set of all double lacunary sequences is denoted by $N_{\theta_{r,s}}$. In this presentation our goal is to extend a few results known in the literature from ordinary (single) difference sequences to difference double sequences. Some studies on double sequence spaces can be found in Gökhan and Çolak (2004, 2005, 2006). Let $M$ be an Orlicz function, $p = (p_{k,l})$ a factorable double sequence of strictly positive real numbers, and $\theta_{r,s}$ a double lacunary sequence. Let $X$ be a seminormed space over the complex field $\mathbb{C}$ with the seminorm $q$. We now define the following new statistically convergent generalized difference lacunary double sequence spaces: $[w_2^{\theta}(M, \Delta^n, p, q)] = \{x \in w_2 : P\text{-}\lim_{r,s} \frac{1}{h_{r,s}} \sum_{(k,l) \in I_{r,s}} [M(q(\Delta^n x_{k,l} - L)/\rho)]^{p_{k,l}} = 0 \text{ for some } \rho > 0 \text{ and } L\}$ and $[w_2^{\theta}(M, \Delta^n, p, q)]_0 = \{x \in w_2 : P\text{-}\lim_{r,s} \frac{1}{h_{r,s}} \sum_{(k,l) \in I_{r,s}} [M(q(\Delta^n x_{k,l})/\rho)]^{p_{k,l}} = 0 \text{ for some } \rho > 0\}$, where $\Delta^n x_{k,l} = \Delta^{n-1} x_{k,l} - \Delta^{n-1} x_{k,l+1} - \Delta^{n-1} x_{k+1,l} + \Delta^{n-1} x_{k+1,l+1}$, and also this generalized difference double notion has the following binomial representation: $\Delta^n x_{k,l} = \sum_{u=0}^{n} \sum_{v=0}^{n} (-1)^{u+v} \binom{n}{u} \binom{n}{v} x_{k+u,\,l+v}$. Some double sequence spaces are obtained by specializing $\theta_{r,s}$, $M$, $p$, $q$ and $n$. Here are some examples: if $M(x) = x$, then we obtain the double sequence spaces $[w_2^{\theta}(\Delta^n, p, q)]$ and $[w_2^{\theta}(\Delta^n, p, q)]_0$; if $p_{k,l} = 1$ for all $k$ and $l$, then we obtain the double sequence spaces $[w_2^{\theta}(M, \Delta^n, q)]$ and $[w_2^{\theta}(M, \Delta^n, q)]_0$; and if $n = 0$, then we obtain the corresponding lacunary double sequence spaces without the difference operator. From (3.4), $(\Delta^n x^{(j)})$ is a Cauchy sequence in (X, q). Since (X, q) is complete, there exists $x_{k,l} \in X$ such that $\lim_j x^{(j)}_{k,l} = x_{k,l}$. $M$ is continuous, so for $0 \le i \le m$, on taking the limit as $j \to \infty$ we have from (3.4), on taking the infimum of such $\rho$'s, the required bound. By linearity of the space $[w_2^{\theta}(M, \Delta^n, p, q)]_0$ the result follows. Proof. The first part of the result follows from the inequality and the second part of the result follows from the inequality. Proof. We prove it for $[w_2^{\theta}(M, \Delta^n, p, q)]$. Thus from the second term in (3.6) we have the required inclusion. Let $M_1$ and $M_2$ be Orlicz functions, and $q$, $q_1$ and $q_2$ be seminorms. Then the proofs of (ii) and (iii) follow obviously. The proof of the following result is routine work. Proposition 3.7. For any Orlicz function $M$, if ... and $w_2^\infty$. Let ..., which leads us to the desired results. Conclusion In this article we defined some new sequence spaces by the double lacunary summability method, combining the concept of Orlicz function and statistical convergence. Further, we proved some topological and algebraic properties of the resulting spaces.
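As a numerical footnote to the definition of statistical convergence used above (our illustration, not the authors'), one can estimate the double natural density of the ε-exceedance set for a double sequence that is statistically convergent to 0 without being P-convergent:

```python
def exceedance_density(x, limit, eps, n, m):
    # delta_2(E) ~ |{(k,l) <= (n,m) : |x(k,l) - limit| >= eps}| / (n*m)
    count = sum(
        1
        for k in range(1, n + 1)
        for l in range(1, m + 1)
        if abs(x(k, l) - limit) >= eps
    )
    return count / (n * m)

# Equals 1 exactly when both indices are perfect squares, else 0:
# statistically convergent to 0, but not convergent in Pringsheim's sense.
def x(k, l):
    return 1.0 if int(k ** 0.5) ** 2 == k and int(l ** 0.5) ** 2 == l else 0.0

for n in (100, 400, 1600):
    # densities shrink toward 0 as the index rectangle grows
    print(n, exceedance_density(x, limit=0.0, eps=0.5, n=n, m=n))
```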
2016-01-07T01:57:53.067Z
2013-01-30T00:00:00.000
{ "year": 2013, "sha1": "cf4bc9a3fe9f89da0ecd012e73eeb3255e4dfa96", "oa_license": "CCBY", "oa_url": "https://periodicos.uem.br/ojs/index.php/ActaSciTechnol/article/download/15523/pdf", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "cca1ffb3b800effefed8d953bd029b6886cf9484", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
29799658
pes2o/s2orc
v3-fos-license
Detecting infection hotspots: Modeling the surveillance challenge for elimination of lymphatic filariasis Background During the past 20 years, enormous efforts have been expended globally to eliminate lymphatic filariasis (LF) through mass drug administration (MDA). However, small endemic foci (microfoci) of LF may threaten the presumed inevitable decline of infections after MDA cessation. We conducted microsimulation modeling to assess the ability of different types of surveillance to identify microfoci in these settings. Methods Five or ten microfoci of radius 1, 2, or 3 km with infection marker prevalence (intensity) of 3, 6, or 10 times background prevalence were placed in spatial simulations, run in R Version 3.2. Diagnostic tests included microfilaremia, immunochromatographic test (ICT), and Wb123 ELISA. Population size was fixed at 360,000 in a 60 x 60 km area; demographics were based on literature for Sub-Saharan African populations. Background ICT prevalence in 6–7 year olds was anchored at 1.0%, and the prevalence in the remaining population was adjusted by age. Adults ≥18 years, women aged 15–40 years (WCBA), children aged 6–7 years, or children ≤5 years were sampled. Cluster (CS), simple random sampling (SRS), and TAS-like sampling were simulated, with follow-up testing of the nearest 20, 100, or 500 persons around each infection-marker-positive person. A threshold number of positive persons in follow-up testing indicated a suspected microfocus. Suspected microfoci identified during surveillance and actual microfoci in the simulation were compared to obtain a predictive value positive (PVP). Each parameter set was referred to as a protocol. Protocols were scored by efficiency, defined as the most microfoci identified, the fewest persons requiring primary and follow-up testing, and the highest PVP. Negative binomial regression was used to estimate aggregate effects of different variables on efficiency metrics. Results All variables were significantly associated with efficiency metrics. Additional follow-up tests beyond 20 did not greatly increase the number of microfoci detected, but significantly negatively impacted efficiency. Of 3,402 protocols evaluated, 384 (11.3%) identified all five microfoci (PVP 3.4–100.0%) and required testing 0.73–35.6% of the population. All used SRS and 378 (98.4%) only identified all five microfoci if they were 2–3 km diameter or high-intensity (6x or 10x); 374 (97.4%) required ICT or Wb123 testing to identify all five microfoci, and 281 (73.0%) required sampling adults or WCBA. The most efficient CS protocols identified two (40%) microfoci. After limiting to protocols with 1-km radius microfoci of 3x intensity (n = 378), eight identified all five microfoci; all used SRS and ICT and required testing 31.2–33.3% of the population. The most efficient CS and TAS-like protocols as well as those using microfilaremia testing identified only one (20%) microfocus when they were limited to 1-km radius and 3x intensity. Conclusion In this model, SRS, ICT, and sampling of adults maximized microfocus detection efficiency. Follow-up sampling of more persons did not necessarily increase protocol efficiency. Current approaches towards surveillance, including TAS, may not detect small, low-intensity LF microfoci that could remain after cessation of MDA. The model provides many surveillance protocols that can be selected for optimal outcomes. 
Introduction Disease elimination is the endgame for much infectious disease-related public health work. Considered infinitely cost-effective when successful [1], elimination or eradication programs cost little per case prevented in the beginning, and enormous sums per case prevented at the end as efforts to prevent, detect, or treat every last case continue despite few remaining cases [2]. In part due to resource challenges and donor fatigue, efforts to eliminate infectious diseases have more often failed (malaria, yaws, yellow fever, hookworm) than succeeded (smallpox, rinderpest) [3]. Currently, several infectious diseases including polio, onchocerciasis, guinea worm, trachoma, malaria, and lymphatic filariasis are targeted for elimination or are at various stages of elimination or eradication programs [2,4,5]. Lymphatic filariasis (LF), a mosquito-borne filarial disease causing lymphedema, hydrocele, and elephantiasis, has been targeted by the World Health Organization's (WHO) Global Program to Eliminate Lymphatic Filariasis (GPELF) for elimination as a public health problem by 2020. For LF, this is defined as interruption of transmission using preventive chemotherapy, and management of morbidity and prevention of disability in persons already infected. The GPELF recommends steps to achieve interruption of transmission, including (i) mapping to define endemic areas; (ii) mass drug administration (MDA) in endemic areas to reduce infection below a threshold at which transmission is considered unsustainable; (iii) conducting and passing transmission assessment surveys (TAS) as a prerequisite for stopping MDA; (iv) posttreatment surveillance (PTS) after stopping MDA, comprising two repeat TAS and ongoing surveillance for at least five years; and (v) development of a dossier documenting these steps to achieve validation of the elimination of LF as a public health problem [6]. Although specifics of the last component are still to be determined, there is no doubt that complete elimination of LF transmission is the ultimate goal. 
Unlike the elimination of smallpox or polio, the path to elimination for LF likely does not require the absence of every infection. This is largely due to the poor transmission characteristics of LF: multiple infective mosquito bites are needed to establish a patent infection with the causative filarial agents, and at least one pair of opposite-sex worms must be present for an infected person to manifest infectious microfilariae. The likelihood of both occurrences decreases as infection prevalence declines during multiple years of MDA [7][8][9]. Passing the TAS requires the identification of fewer infections during the survey than a pre-set cutoff, intended to signify a mean LF prevalence below which infections are likely to irreversibly decline. The TAS involves a community-based survey of 6-7 year old children in areas where school enrollment is <75%, or a school-based survey where enrollment is at least 75%. The design is usually a cluster survey, and the threshold is set at either <2% antigenemia (in W. bancrofti-endemic areas with Anopheles or Culex as the principal vector) or <1% antigenemia (in W. bancrofti-endemic areas where Aedes is the primary vector). In Brugia-endemic areas, thresholds are set for <2% antibody prevalence [10,11]. 'Passing' the TAS-detecting no more positive children than the critical cutoff value specified in the guidelines-is a prerequisite for stopping MDA. However, whether or not this cutoff universally leads to a decline in infections is unclear. The existence of LF-and other diseases, such as malaria-in endemic foci as small as 1 km in diameter [12][13][14][15][16][17][18] before or during MDA suggests at least the possibility of residual endemic foci after treatment. The few data that exist about LF in post-MDA settings suggest that there are residual foci [19][20][21] amid large areas relatively free of infection. In addition, the area and population over which TAS are carried out vary widely [22], and may include as many as 2 million persons. Thus, an average antigenemia prevalence of 1% or 2% among 6-7 year old children might look quite different in different areas, depending on multiple factors both affected and not affected by LF elimination program activities. Beyond this, the absence of infection markers in children is not necessarily associated with the absence of infection and transmission among adults. Even in post-MDA settings, adults have a higher prevalence of infection markers than children [19,[23][24][25]; whether or not these adults are actively transmitting infection is unclear. While single individuals infected with LF who are surrounded by large areas without infections are unlikely to restart active transmission cycles, there clearly exists a number and concentration of infected persons above which transmission will be sustained or expand. In these situations, the average antigenemia prevalence may indeed be below the cutoff in children aged 6-7 years without cessation of transmission; elimination of LF 'as a public health problem' may be briefly achieved, only to be lost in coming decades due to recrudescence. For countries stopping MDA, PTS represents the last opportunity to detect any remaining foci of infection that may still lead to recrudescence [6,10]. 
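The pass/fail logic of a TAS reduces to comparing the count of positives found in the survey to a pre-set critical cutoff. A minimal sketch of that decision rule follows; the cutoff value used here is purely illustrative and is not taken from the WHO sample-size tables:

```python
def tas_decision(n_positive, critical_cutoff):
    # 'Pass' (eligible to stop MDA) if positives do not exceed the cutoff
    return "pass" if n_positive <= critical_cutoff else "fail"

# Hypothetical survey of 1,548 children aged 6-7 years; a cutoff of 18
# positives (~1.2%) is assumed here purely for illustration -- real TAS
# cutoffs come from WHO tables built around the 1% or 2% thresholds.
print(tas_decision(n_positive=11, critical_cutoff=18))  # pass
print(tas_decision(n_positive=25, critical_cutoff=18))  # fail
```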
However, methods which efficiently detect and address infections-including small residual foci of infections-in a low-prevalence setting are undefined, particularly for a disease such as LF for which infectiousness and clinical symptoms may be separated by years or even decades [26][27][28]. In this paper, we use microsimulation modeling to compare the effectiveness of different programmatic surveillance approaches in detecting both residual endemic foci ('microfoci') and individual, dispersed infections. Data in this paper are intended to provide a realistic framework in which to consider surveillance for low-prevalence infections, not only for LF but also for other diseases targeted for elimination, and shed light on how much assurance different approaches to surveillance can provide during the last stages of an elimination program. Model flow Microfoci, or areas of elevated infection prevalence relative to the background, were placed randomly on a map at the start of a simulation, with one household serving as the geographic center. Primary sampling included either 30-cluster sampling (CS) or simple random sampling (SRS), and only occurred in the population group specified by the simulation. The identification of a single infection-marker-positive person triggered follow-up testing ('trigger-based sampling') of the nearest X persons, irrespective of population group, around the initial positive (Fig 1, Table 1). Persons tested in trigger-based sampling included household members of the initial positive and persons living in the next-nearest households. A pre-set number of positives found during trigger-based sampling ('threshold') indicated the identification of a suspected microfocus (and the presumed requirement for action on the part of a country program). The number of true (known) microfoci was divided by the number of suspected microfoci to determine the predictive value positive of each simulation in identifying microfoci. Variables Each simulation utilized a 60 x 60 km region (3,600 km²), based on the approximate sizes of TAS evaluation areas in Chu et al. [22], and included 360,000 persons with a mean population density of 100 persons/km², similar to population densities of Ghana and Kenya [29]. The mean village size was 1,200 individuals; thus, each simulation comprised 300 villages. A mean household size of six was estimated from the literature [30] and World Family Map data from Ethiopia and Nigeria [31]; the simulation population was developed using population projections for Sub-Saharan Africa 2015 [32]. Villages were heterogeneously distributed throughout the area; a single household was chosen in each village as an 'anchor' and household density in each village decreased as distance from the anchor household increased (Fig 2). Due to the computation time required to create an area, ten simulation areas were created to use in modeling. Density plots of these areas and further details on how the ten areas were used in the simulation models are included in the supplementary materials (S1 Fig). The population proportion for children in 1-year age groups for children ≤5 years and 6-10 years was determined by dividing the total population proportion assigned to those age groups by five (Table 2). Background age-prevalence curves for infection markers were estimated based on the literature [19,24,25], using a background ICT prevalence among 6-7-year-old children of 1.0% to approximate a plausible post-MDA setting (Table 2). 
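To make the surveillance flow concrete, here is a much-simplified sketch of the protocol logic described above (primary sampling, trigger-based follow-up, a threshold for flagging a suspected microfocus, and the PVP). It is not the authors' R code; geography is replaced by random draws and all parameters are placeholders:

```python
import random

def run_protocol(persons, n_primary, n_followup, threshold):
    """persons: list of dicts with 'pos' (bool) and 'microfocus' (int or None)."""
    suspected = []
    for p in random.sample(persons, n_primary):            # primary sampling (SRS)
        if not p["pos"]:
            continue
        # follow-up ('trigger-based') testing around the initial positive;
        # here 'nearest persons' is approximated by a random draw instead
        # of geographic distance
        followup = random.sample(persons, n_followup)
        if sum(q["pos"] for q in followup) >= threshold:   # threshold reached
            suspected.append(p["microfocus"])              # flag suspected microfocus
    true_hits = {m for m in suspected if m is not None}
    pvp = len(true_hits) / len(suspected) if suspected else float("nan")
    return true_hits, pvp

# Toy population: background prevalence 1%, one microfocus (id 0) at 10x size 1,000
random.seed(1)
persons = [{"pos": random.random() < 0.01, "microfocus": None} for _ in range(9000)]
persons += [{"pos": random.random() < 0.10, "microfocus": 0} for _ in range(1000)]
print(run_protocol(persons, n_primary=2000, n_followup=20, threshold=2))
```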
Parameters that were varied in simulations included those related to the simulation area and those related to the sampling approach. Variables related to the simulation area included microfocus size (1, 2, or 3 km in radius), microfocus intensity (3x, 6x, or 10x greater prevalence of infection markers in the microfocus compared to the background infection marker prevalence), and the number of microfoci (5 or 10) included in each simulation, resulting in 18 possible combinations (Table 2). Primary Wb123 sampling only evaluated children aged <5 years, due to the potential for high prevalence of Wb123 antibodies among older age groups. The number of persons sampled in each cluster during CS was determined by the number of persons targeted for sampling, divided by 30 clusters. If the required sample size (1,800 or 7,200) for primary sampling was not reached in 30 clusters, all persons in the target age group in the 30 clusters were sampled. Additionally, a cluster sample comprising 1,548 6-7-year-olds was drawn to approximate a TAS. Detailed definitions of the model terminology and a full list of outputs are included in the supplementary materials. All simulations were run in R versions 3.1 and 3.2 [33] (code available in S1 File).

Each set of simulated surveillance activities was termed a 'protocol.' We evaluated each protocol's 'efficiency', defined as (in this order): maximizing the proportion of microfoci identified (which can also be thought of as the probability that any one microfocus is detected in each simulation), minimizing the number of total tests required to identify them, and maximizing the predictive value positive (PVP) of identification of microfoci. Protocols were sorted to optimize those three relevant outputs, with results reported as median proportions with 95% confidence intervals.

The true proportion positive in each microfocus was tested against the background proportion positive for the targeted age group and test method, to determine if a significantly higher proportion of positive persons was present in each microfocus. All comparisons were made with a two-sided Fisher's Exact test. The number of statistically significant associations was summed for each protocol, and the percentage of microfoci with statistically more infections than background prevalence was recorded. Negative binomial regression was used to estimate aggregate differences across all protocols in regression analyses. Results of regression analyses are reported as rate ratios with 95% confidence intervals and p-values.

Regression analyses

In total, 6,804 protocols were identified and evaluated in regression analyses, presented in Tables 3 and 4. Modifying nearly all variables included in the model significantly affected the three relevant outputs.

Table 2. Population proportion and infection marker test status in background population. Microfilaremia was estimated to be 10-fold lower than ICT in every age group. Wb123 prevalence was assumed to be four-fold higher than ICT prevalence among persons <20 years of age, 4.5 times ICT prevalence among persons 20-40 years of age, and 5-fold greater than ICT prevalence among persons >40 years of age [19,[23][24][25]. Because these are based on test prevalences derived from real data (rather than gold standard prevalences for each infection marker), test sensitivity and specificity were not employed.
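Two quantities above are easy to make concrete: the per-cluster allocation under 30-cluster sampling, and the Fisher's exact comparison of a microfocus against background. In the sketch below, only the test itself and the 1,800/7,200 targets come from the text; the counts are invented for illustration (a 3x microfocus over a 1% background).

```python
from scipy.stats import fisher_exact

# Per-cluster allocation for 30-cluster sampling (targets from the text)
for target in (1_800, 7_200):
    print(target, "->", target // 30, "persons per cluster")

# Two-sided Fisher's exact test: microfocus vs background positivity.
# Hypothetical counts: 6/200 positive inside the microfocus (~3%) vs
# 20/2000 positive in the background sample (~1%).
table = [[6, 194],
         [20, 1980]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```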
Table 3. Effect of varying the model variables on the median predictive value positive of identifying microfoci, the median proportion of the population requiring testing, and the median proportion of microfoci detected. In this analysis, TAS-like protocols are excluded. The threshold for all microfilaremia testing is set at 1. For ICT tests, the threshold for identification of a suspected microfocus is 1 positive when 20 follow-up tests are used, and 2 in all other cases. For Wb123 testing, the threshold is 50% of the persons followed up testing positive.

Maximizing the PVP is critical to protocol efficiency; protocols with low PVP waste resources by causing follow-up actions on suspected microfoci that are not truly microfoci. Increasing the number of microfoci primarily affected the PVP: having more microfoci in the model increased the likelihood that any microfocus investigated would be a 'true' microfocus. Similarly, increasing the radius of each microfocus increased the PVP, but had less effect on the median proportion of persons tested in each protocol. However, it did significantly increase the median proportion of microfoci detected.

Predictive value positive

Increasing the intensity of the microfocus caused a large and significant increase in PVP and the proportion of microfoci detected in each model, but had a smaller effect on the proportion of persons tested. Increasing the proportion of the population sampled in primary sampling increased the median total proportion of persons tested, but also increased the median proportion of microfoci detected, without affecting PVP (that is, more true microfoci were detected with increased primary sampling, but the number of false positive microfoci did not increase). All three metrics were significantly improved by using SRS instead of CS as the primary sampling methodology. In contrast, when using Wb123 or microfilaremia testing, compared with ICT, only PVP was increased; the proportion of persons tested declined, but the median proportion of microfoci detected also declined significantly. Increasing the number of follow-up tests generally caused a decrease in the PVP and had relatively small (though significant) effects on the proportion of microfoci detected. However, it did have a large and significant effect on the proportion of persons requiring testing (Table 3), with nearly five times more persons requiring testing when 500 persons were followed up instead of 20 persons.

To investigate the effect of testing different age groups on the same metrics, we additionally excluded Wb123 testing, which only included children <5 years of age. The results for all other variables (Table 4) were largely the same as described above (Table 3).

Table 4. Effect of varying the model variables on the median predictive value positive of identifying microfoci, the median proportion of the population requiring testing, and the median proportion of microfoci detected. In this analysis, both TAS-like protocols and Wb123 protocols are excluded. The threshold for all microfilaremia testing is set at 1. For ICT tests, the threshold for identification of a suspected microfocus is 1 positive when 20 follow-up tests are used, and 2 in all other cases.

Results for women of child-bearing age (WCBA) are largely the same as results for all adults: compared with testing children, testing adults generally increases the PVP minimally, and increases both the proportion of the population tested and the proportion of microfoci identified more substantially.
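The rate ratios in Tables 3 and 4 come from the negative binomial regressions described in the Methods. A sketch of how such rate ratios can be obtained, here with statsmodels and fabricated protocol summaries (the variable names are ours, not the paper's):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "n_detected": rng.poisson(2, 200),          # microfoci detected per protocol
    "srs":        rng.integers(0, 2, 200),      # 1 = SRS, 0 = CS
    "intensity":  rng.choice([3, 6, 10], 200),  # microfocus intensity
})
fit = smf.glm("n_detected ~ srs + intensity", data=df,
              family=sm.families.NegativeBinomial()).fit()
print(np.exp(fit.params))       # rate ratios
print(np.exp(fit.conf_int()))   # 95% confidence intervals on the same scale
```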
Based on the above results, and to represent likely post-MDA scenarios as simply as possible, for the remaining analyses we limited the number of microfoci to five (n = 3,402 protocols). All 3,402 protocols with relevant inputs and outputs are available in S1 Table.

Description of microfoci

Microfoci are described in Table 5. Although most simulations resulted in more infections in each microfocus compared with expected infections at background prevalence, we separated the simulations into those where >80% vs ≤80% of microfoci had statistically significantly more infections than would be expected (Table 5, shaded vs unshaded boxes). In total, 57 (70%) of the 81 simulation combinations shown had >80% of microfoci with statistically significantly more positives, as measured by the target infection marker in the target age group, than expected at background rates. Simulations with larger and more intense microfoci, those utilizing more sensitive tests, and those involving adults were more likely than others to have statistically significantly more infections in each microfocus than background.

Protocols which identify the most microfoci

Of the 3,402 protocols, 384 (11.3%) identified all five microfoci. The number of protocols identifying all five microfoci declined as size and intensity of the microfoci decreased (Table 6). The 384 protocols required testing a range of 0.73%-35.6% of the total population, and had a range of PVPs of 3.2-100.0%. All used SRS, and 378 (98.4%) only identified all five microfoci if they were 2-3 km in radius or high-intensity (6x or 10x) (Table 6). Of the 384, 374 (97.4%) required ICT or Wb123 testing to identify all five microfoci, and 281 (73.0%) required sampling adults or WCBA. The top 10 most efficient protocols (those which identified the most microfoci with the fewest tests at the highest PVP) are shown in Table 7.

Protocols which identify small, low-intensity microfoci

The most efficient protocols identified during this initial evaluation invariably involved models with high-intensity (6x or 10x) and large (radius 2-3 km) microfoci. However, most residual microfoci post-MDA are likely to be low-intensity and small. To address this, we limited our model to include only protocols with microfoci of size 1 km in radius and 3x intensity. Of these, eight (2.1%) protocols identified all five microfoci; nine (2.4%) identified four; 13 (3.4%) identified three; 18 (4.8%) identified two; 39 (10.3%) identified one; and 291 (77.0%) identified no microfoci. Among the eight identifying all five microfoci, each required testing of 31%-33% of the population. As the proportion of microfoci identified decreased, the proportion of the population requiring testing similarly decreased. In Table 8, we show a sample of the most efficient protocols when 100%, 80%, 60%, or 40% of microfoci are identified.

Because all of the 'most efficient' protocols in the preceding examples specified ICT and SRS, we explored the most efficient options using different test types and diagnostic methods with small, low-intensity microfoci. Table 9 shows the most efficient protocols using microfilaria testing, Wb123 testing, or cluster sampling. The most efficient protocols using CS, MF, or Wb123 fail to identify more than one or two of the five microfoci and are markedly less efficient than the ICT/SRS protocols shown in Table 8.
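Throughout these comparisons, 'most efficient' is the lexicographic ordering defined in the Methods, which reduces to a simple sort key. A sketch with hypothetical entries and field names of our choosing:

```python
# Rank protocols: most microfoci found, then fewest tests, then highest PVP.
protocols = [
    {"name": "ICT / SRS / adults",  "prop_found": 0.8, "prop_tested": 0.035, "pvp": 0.62},
    {"name": "MF / CS / 6-7 yr",    "prop_found": 0.2, "prop_tested": 0.055, "pvp": 0.40},
    {"name": "Wb123 / SRS / <5 yr", "prop_found": 0.4, "prop_tested": 0.020, "pvp": 0.71},
]
ranked = sorted(protocols, key=lambda p: (-p["prop_found"], p["prop_tested"], -p["pvp"]))
print([p["name"] for p in ranked])
```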
Children vs adults

Different settings will facilitate sampling of different populations with greater ease. For this reason, we compared the most efficient sampling protocols limited to 6-7 year olds, WCBA, and adults. To facilitate a fair comparison across population groups, we limited the radius of microfoci to 1 km, the intensity to 3x, and the threshold to 1. The results are shown in Table 10. Both WCBA and adult sampling outperform sampling of 6-7 year olds in terms of efficiency. Children <5 years were not included in this evaluation, as they were only evaluated with Wb123.

Cluster sampling protocols

Simple random sampling consistently resulted in higher efficiencies than cluster sampling. However, recognizing that simple random sampling might not always be feasible, we evaluated the peak efficiency of cluster sampling protocols at varying microfoci radii and intensities. In total, 1,458 protocols used cluster sampling (not including TAS-like sampling). The protocols identified 0%-40% of the five microfoci, and tested 0.9%-8.0% of the total population. Only 74 (5%) of the 1,458 protocols identified 40% of the microfoci; the remainder identified fewer. All involved large (3-km), high-intensity (10x) microfoci. The three most efficient protocols with cluster sampling are presented in Table 11. The targeted primary sample size was not achieved for 180 of 729 cluster sampling protocols specifying 2.0% primary sampling; all involved 6-7 year olds.

Table 8. Protocols which required the fewest total persons to be tested to identify 100%, 80%, 60%, or 40% of the microfoci of size 1 km in radius and 3x in intensity.

Sampling protocols that use microfilaremia testing

Microfilaria testing is used in many countries despite challenges with sensitivity, which decreases as the prevalence of infection declines. As demonstrated above, for small, low-intensity microfoci, child populations are much less likely than adults or WCBA to have statistically more microfilaremia-positive persons than background (Table 5), and thus detection of any microfocus using microfilaremia testing in children is unlikely (or due to chance). In total, 1,458 protocols used microfilaria testing; of these, 10 identified 100% of the microfoci by testing 3.0%-7.0% of the population. These primarily involved large, high-intensity microfoci. When the parameters were limited to microfoci of radius 1 km and 3x intensity, the highest proportion of microfoci that could be identified was 20%, by testing 5.5%-5.9% of the population. Table 12 shows the top two most efficient protocols when 100%, 80%, 60%, or 40% of the microfoci are identified. Table 13 shows the three most efficient protocols using microfilaremia testing when microfoci are limited to radius 1 km and 3x intensity. Notably, even though 81% of the 1 km, 3x-intensity microfoci have statistically greater numbers of infected adults than background as indicated by microfilaremia testing (Table 5), the most efficient protocol in adults detects only one in five microfoci (Table 13).

TAS-like sampling

To evaluate how well the TAS would perform at detecting microfoci of various sizes, we simulated a TAS (as described in the Methods). As with the other protocols, each of these did incorporate trigger-based follow-up testing around each positive and a threshold for determining a suspected microfocus; thus, each demonstrates how few people could be tested and how many microfoci could be found if follow-up testing around infected persons was carried out. There were 486 protocols that utilized TAS-like sampling.
The most efficient of these, using each diagnostic tool, are shown in Table 14. Although each required testing <1% of the population, none of the protocols identified more than one of the five microfoci in each simulation.

Discussion

Endemic foci that lead to stable or increasing numbers of infections threaten the success of infectious disease elimination or eradication programs. Current approaches to post-treatment surveillance for LF require that we assume the absence of infections in the areas between well-defined locations where infections are known to be absent. Due to the highly focal nature of LF endemicity, this approach may not be sufficient to confirm LF elimination.

The model presented here provides several important pieces of information about what can be expected from various types of surveillance in terms of identifying small endemic disease foci. First, we confirm the challenges in detecting small, low-intensity microfoci. While this in itself is unsurprising, this model demonstrates precisely how much additional effort is needed to identify microfoci (and which diagnostic test and follow-up testing combinations can do so most efficiently) as they become incrementally smaller and/or less intense. Programs may wish to have a specific level of confidence in their ability to detect microfoci of a specific size, as measured by a specific marker: this model enables them to select approaches that could yield that level of certainty. While many protocols enabled the detection of all five large (3-km radius), high-intensity (10x) microfoci while testing small proportions of the total population, the only protocols that identified all five small, low-intensity microfoci required testing of an impractically high number of persons (>30% of the population). Notably, there are some protocols (Table 8) which identify most of the microfoci (4 of 5) and require testing of only 3.5% of the population (12,600 persons in this model). While this may seem like a high number, the use of protocols such as this two years in a row might provide reasonable confidence in the absence of microfoci. These protocols include simple random sampling of adults or women of childbearing age using ICT, conducting follow-up testing of the 20 nearest persons to any identified positives, and using a threshold of just one additional infected person to identify areas needing additional programmatic attention. One way to achieve this in a country with high antenatal clinic attendance might be to test all women attending antenatal clinics until the sample size is reached, for two years in a row. Interestingly, follow-up sampling of 500 persons around each infected person in this model, as is done in Togo [34], did not appear to provide more confidence in detection of microfoci than follow-up sampling of 20 persons, and was carried out at a large cost to the predictive value positive.

The model also demonstrates that microfoci are exceedingly difficult to detect during PTS using a tool as insensitive as microfilaria smears. This is due to the low estimated prevalence of microfilaremia in a post-MDA population, and the declining sensitivity of microfilaria testing as the prevalence of infection (and thus the number of circulating microfilariae overall and per infected person) decreases [7]. In child populations, smaller, lower-intensity microfoci are essentially invisible, due to the small size of the target population and the very low prevalence of microfilaremia in the background child population.
In this model, microfilaria testing can identify large and intense microfoci, but even the best protocols only identified one of five microfoci when they were small and low-intensity, and all required testing of >5% of the total population. Combined with the need to sample persons in most areas between 10 pm and 2 am, microfilaria testing is unlikely to be a practical solution for monitoring the success of an elimination program. Using ICT identifies more, less intense, smaller endemic foci of infection while testing fewer persons.

We additionally show the challenges of cluster sampling, as compared with simple random sampling, in detecting microfoci. The primary advantages of cluster sampling are logistical, as fewer areas need to be visited during a survey than would be required for simple random sampling. However, even in settings of large, high-intensity microfoci, a maximum of only 40% of microfoci were identified in this model. In these protocols, relatively few persons are tested overall, due to the low proportion of persons tested during primary sampling and the follow-up of only 20 persons around each positive. When considering small, low-intensity microfoci, a maximum of one of five microfoci was detected by cluster sampling, at a total cost of testing approximately 3% of the population. Using the ostensibly more sensitive Wb123 test does not yield a meaningful improvement on this metric, although it does improve the predictive value positive: ICT testing of adults or WCBA is clearly the most efficient way of detecting small, low-intensity microfoci, regardless of whether cluster sampling or simple random sampling is used. Importantly, TAS-like sampling performed poorly in this model regardless of diagnostic test type used, microfocus size, or microfocus intensity; too few areas were covered during sampling to identify more than one of five microfoci in each simulation.

Microfoci, and not individually dispersed infections, pose the greatest risk for recrudescence of LF. However, the uncertainty surrounding what type of microfoci will spread without further intervention has been a stumbling block in setting LF elimination program targets. Conducting studies to determine which microfoci will spread is unethical: one cannot identify infected persons and deny treatment in order to evaluate the potential for infection propagation. Results from this model allow us to consider changing our approach to PTS entirely. Current methods focus on measuring average antigen prevalence, a metric that becomes less relevant as evaluation unit size and focal endemicity increase. Instead of targeting a maximum tolerable average antigen prevalence for PTS, a decision about the maximum tolerable size and intensity of residual microfoci, and the confidence we desire in their absence in a post-MDA setting, could be considered. For example, we may wish to be 80% confident in the absence of residual microfoci >2 km in diameter, with intensities of infection ≥3 times the overall antigen prevalence in the population. We may choose to use ICT, and know that we want to test adult outpatients to approximate SRS. Under this framework, different simulations could be examined to determine which provided at least 80% confidence in the absence of such microfoci.
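As rough arithmetic behind such a confidence target (our illustration, not the paper's): if a protocol detects a microfocus of the targeted size and intensity with probability q in one survey round, and rounds are assumed independent, the confidence that a persisting microfocus would have been found compounds across rounds.

```python
# Confidence that a targeted microfocus would have been detected after n
# independent survey rounds, given a per-round detection probability q
# (approximated here by the proportion of microfoci identified in simulation).
q = 0.8
for n in (1, 2, 3):
    print(n, "round(s):", round(1 - (1 - q) ** n, 3))   # 0.8, 0.96, 0.992
```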
There are several limitations to this model. First, while we tried to design a conservative landscape that captured plausible post-MDA settings, there are currently few data to inform the true post-MDA situation with regard to residual infections or infection markers, particularly Wb123, for which the meaning of a positive test remains unclear. Because Wb123 was simply simulated as a more sensitive test than microfilaremia or ICT, it may be appropriate to consider the results for this test from that perspective, rather than as representing the actual performance of Wb123. Related to this, a minority of simulations did not yield detectable microfoci; that is, due to the low prevalence of background infections, particularly for children being tested for less-prevalent markers such as microfilaremia, even larger or more intense microfoci rarely had statistically significantly more infection-marker-positive persons than background. Thus, in these simulations, identification of microfoci was more a function of chance than of a true difference in infection marker prevalence. While this is important in terms of understanding model results, it is equally important in considering where the limits of detection lie with different markers. Notably, as mentioned above, it is unclear what comprises a microfocus that would spread without further intervention, and thus we cannot separate residual foci of infection into 'important' and 'less important'. Beyond this, risk of LF is not homogeneously distributed across an area; however, precise prediction of risk from related factors is not available, and because of this we chose to treat all areas as though they were at equal risk. Improved determination of risk factors for infection 'hotspots' could facilitate better identification of priority areas for PTS, although this is unlikely to occur in time for most countries stopping MDA.

While this model was designed with LF in mind, it can also be applied to surveillance for other low-prevalence diseases, particularly those for which clinical signs cannot be used to estimate concurrent infection prevalence. Hepatitis B virus (HBV) is one such infection for which this might prove useful. Similar to LF, the signs and symptoms of HBV are not overt until many years after infection, and the elimination target involves reducing antigenemia to <2% in 5-year-old children (with the eventual goal of <1% antigenemia in the general population) [35]. Other diseases with nonspecific symptoms, such as malaria, could also benefit from similar modeling.

While this is a simulation, the parameters (particularly for small, non-intense microfoci) are not extraordinary. Large microfoci may be more easily detected during TAS or sentinel/spot-check sampling, although there is no guidance about follow-up testing or actions when persons are identified as positive. Data from this model provide two critical pieces of information: first, to be confident that microfoci, if they existed, would be detected, we need to carry out more robust surveillance than our current TAS requires. Second, microfilaremia testing is unlikely to be useful for PTS if confirmation of elimination is the goal. To summarize, we show here that our current efforts at post-treatment surveillance will not suffice to detect small, low-intensity microfoci that may remain after cessation of MDA for LF. The use of more sensitive tests and more thorough testing methods is obligatory if we are to have confidence in the long-term elimination of LF.
Determining both practical and useful methods of surveillance may require some creativity, and perhaps graded efforts over time which help identify areas of interest during the first year, followed by much smaller continued surveys in subsequent years. High-intensity sampling of TAS-eligible areas would provide important data to improve this type of modeling.
Renal Cell Carcinoma Mimicking Transitional Cell Carcinoma: A Case Report

Patient: Female, 76-year-old
Final Diagnosis: Renal cell carcinoma
Symptoms: Flank pain • haematuria
Medication: —
Clinical Procedure: Nephroureterectomy
Specialty: Radiology • Urology

Objective: Unusual clinical course

Background: Preoperative differentiation between renal cell carcinoma (RCC) and transitional cell carcinoma (TCC) is of utmost importance for determining surgical strategy, whether nephrectomy or nephroureterectomy, as well as the necessity for wider lymphadenectomy and subsequent intensive surveillance, as the latter is more prone to recurrence.

Case Report: A 76-year-old Chinese woman presented with flank pain and gross hematuria, and was found to have right-sided hydronephrosis. An obstructing tumor in the renal pelvis was shown on a computed tomography (CT) intravenous pyelogram. Although its enhancement pattern was suggestive of RCC, the location within the collecting system without any attachment to the renal parenchyma is very unusual. The mass was diagnosed histopathologically as RCC on both ureteroscopic biopsy and subsequent radical nephrectomy.

Conclusions: We present a rare case of RCC growing exclusively in the renal pelvis, mimicking a TCC. Hypotheses regarding this unusual presentation include direct invasion, continuous implantation, and intraluminal transit down the collecting system. The characteristics on imaging studies, including greater enhancement and a higher tumor-to-kidney attenuation ratio, may provide a clue for diagnosis, but ureteroscopy and histopathology are the criterion standards and should be considered as part of routine preoperative assessment. Amidst controversies and inconsistencies, more and more emerging evidence suggests that RCC with urinary collecting system invasion is associated with less favorable overall and recurrence-free survival, especially in localized disease.

Background

Preoperative differentiation between renal cell carcinoma (RCC) and transitional cell carcinoma (TCC) is of utmost importance for determining the type of surgery, nephrectomy or nephroureterectomy, and the necessity for more extensive lymphadenectomy. Intensive surveillance for metachronous tumors in the remnant urinary tract is often needed for patients with TCC. The imaging features and enhancement pattern on computed tomography (CT) may provide a clue for diagnosis, but ureteroscopy and histopathology results should be considered the criterion standards and part of routine preoperative assessment. An RCC growing exclusively in the renal pelvis, as presented in the current case report, is very rare. Only a few similar cases have been published in the literature, the most recent one being a fumarate hydratase-deficient RCC arising in a patient suffering from hereditary leiomyomatosis and renal cell carcinoma (HLRCC) syndrome.

Case Report

A 76-year-old Chinese woman with no underlying medical problem presented to the Emergency Department with a 1-day history of intermittent right flank pain and gross hematuria. She had no fever or dysuria, and renal punch was negative. Bedside ultrasound showed moderate right hydronephrosis. On CT intravenous pyelogram, a 1.6-cm lobulated mass was seen in the right renal pelvis. It was mildly hyperdense (38 HU) on pre-contrast images (Figure 1A) and heterogeneously enhanced on the nephrographic phase (78 HU), but less than normal parenchyma (121 HU), with an attenuation ratio of 0.64 (Figure 1B-1D). No malignant cells were found on urine cytology.
With a tentative diagnosis of a collecting system tumor, she underwent ureteroscopy, where a renal pelvic mass was confirmed and biopsied. Histologically, the lesion harbored carcinoma cells in nests and tubules, polygonal to round, with pleomorphic and hypochromatic nuclei. Expression of paired-box gene 8 (PAX8) was noted. Napsin A, 34βE12, and cytokeratin 7 (CK7) were positive. Tumor protein p63 and GATA-binding protein 3 (GATA3, markers for TCC) were negative, while alpha-methylacyl-CoA racemase (AMACR) was focally positive. On subsequent whole-body staging CT, the mass showed interval growth with progression of the hydronephrosis. There was no thoracoabdominal lymphadenopathy or evidence of osseous metastasis on bone scan. The tumor remained confined to the kidney (pT1a) when radical nephrectomy was performed. It had a growth pattern resembling TCC, with predominantly tubular and tubulopapillary architecture and a lack of demonstrable attachment to the renal parenchyma. It was, however, proven to be of renal origin by positive PAX8/napsin A and absence of p63/GATA3.

Discussion

TCC accounts for 90% of all cancers in the renal pelvis, so the discovery of a renal mass with its epicenter in the pelvicaliceal system is generally considered TCC. In histopathologic studies, up to 14% of RCCs demonstrate involvement of the collecting system [1]. RCC that exclusively grows within the collecting system, however, is exceedingly rare. A similar case has recently been reported by Ajjikuttira et al in a patient with hereditary leiomyomatosis and renal cell carcinoma (HLRCC) syndrome [2].

There are 3 hypotheses of how RCC manifests as a renal pelvic or juxtapelvic mass. First, invasion into a hollow structure is easier than invasion into solid parenchyma when a tumor arises from a marginal area. Second, implantation of RCC cells through invasion of the urothelial mucosa, followed by intraluminal expansive growth. Third, tumor cells metastasizing via intraluminal transit down the urinary tract [3,4]. Considering the absence of dysplasia in the contiguous urothelial cells surrounding the RCC in our case, it might have taken the third route.

Dynamic CT is valuable for characterization of renal pelvic lesions. A high-density mass on a pre-contrast scan may indicate calculus, blood clot, or neoplasm. Presence of enhancement (>15 HU) confirms a neoplastic etiology. Three-dimensional CT intravenous pyelogram delineates the precise location of the mass and its association with the renal parenchyma [3]. Various subtypes of RCC have minor differences, but it is generally agreed that RCC, being a hypervascular tumor, tends to enhance more than TCC [5]. Pal Bata et al observed a significant difference between the attenuation ratios of RCC and TCC relative to renal parenchyma in the corticomedullary (0.77 vs 0.50) and nephrographic (0.72 vs 0.45) phases [6], implying that RCC is likely to appear closer to isodense with the normal kidney. This phenomenon was observed in our case, where the tumor-to-kidney attenuation ratio was 0.64 on the nephrographic phase. In cases where features of RCC such as necrosis, cystic degeneration, hemorrhage, and calcification are absent, such an enhancement pattern may suggest an alternative diagnosis.

Preoperative differentiation between RCC and TCC is crucial in determining the management strategy: nephrectomy or nephroureterectomy. Often, TCC necessitates wider lymphadenectomy and intensive surveillance for a metachronous tumor in the remaining urinary tract.
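The two imaging calculations discussed above reduce to simple arithmetic (values from this case; the >15 HU enhancement rule as cited from [3]):

```python
# CT attenuation values from the case (Hounsfield units)
pre_contrast, nephrographic, parenchyma = 38, 78, 121

enhancement = nephrographic - pre_contrast   # 40 HU; >15 HU favours neoplasm
ratio = nephrographic / parenchyma           # tumor-to-kidney attenuation ratio
print(enhancement, round(ratio, 2))          # 40, 0.64
```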
Although there are some inconsistencies in the reports regarding oncological outcomes of patients with RCC invading the urinary collecting system, more and more evidence suggests that it has a negative impact on overall and recurrence-free survival, especially in localized disease.
An Approach for Estimating Lightning Current Parameters Using the Empirical Mode Decomposition Method

Lightning parameters are needed in different engineering applications. For the prediction of the severity of transient voltages in power systems, an accurate knowledge of the parameters of lightning currents is essential. All relevant standards and technical brochures recommend that lightning characteristics should be classified according to geographical regions instead of assuming that these characteristics are globally uniform. Many engineers and scientists suggest that better methods for lightning current measurements and analyses need to be developed. A system for direct lightning current measurements installed on Mount Lovćen is described in this paper. Observed data were analyzed, and statistical data on parameters that are of interest for engineering applications were obtained, as well as correlations between various lightning parameters. Furthermore, a novel approach for classifying and analyzing lightning data from direct measurements, based on empirical mode decomposition (EMD), is proposed. Matlab was used as a tool for signal processing and statistical analysis. The methodology implemented in this work opens possibilities for automated analysis of large data sets and for expressing lightning parameters in probabilistic terms from the data measured on site.

Introduction

With the continuous increase in the complexity of the electrical power system, growing exposure to the adverse effects of different environmental factors is further aggravating problems associated with the reliability and safe operation of the power system. Extreme weather phenomena, and lightning in particular, are among the most common causes of faults and power supply interruptions. Systematic reviews of numerous observations of lightning events all around the world, and the information thus obtained, provide valuable tools to reduce vulnerability and improve the overall performance of the power system. Damage to the power system is caused by both direct and indirect lightning strokes. In order to assess lightning effects and to design effective protection systems, accurate lightning current parameters must be used. Lightning current parameters are of great importance in the insulation coordination procedure; for example, if these parameters are not properly determined, the energy of a lightning discharge can exceed the energy-handling capability of power system components [1][2][3].

Three approaches can be used to obtain lightning data: direct measurements using instrumented towers, direct measurements using the technique of the artificial initiation of lightning, and lightning location systems [4,5]. Formatted lightning data from modern lightning location systems include: time and date of the lightning stroke, GPS coordinates (2D), lightning current amplitude, lightning type, height (for inter-cloud lightning), and 2D statistical error [6,7]. Lightning location systems do not provide data about the front and tail time of the lightning current waveform. It is well known that the front time has a high influence on insulation in power systems, while, for example, energy stresses of surge arresters strongly depend on the tail time of the overvoltage wave [8][9][10][11]. Therefore, when conducting simulations of power system transients, it is necessary to estimate these data, as is recommended in CIGRE Brochure 549 (2013) and the last IEEE review of parameters of lightning strokes (2005) [12,13].
In the field of lightning research for engineering applications, the most important data are obtained by the analysis of directly measured lightning current waveforms. When designing such measuring systems, the first important step is the selection of a location with high lightning activity; modern lightning location systems should be used to select such regions. Furthermore, it is important to correctly select the components of the measuring system, in accordance with the specific characteristics of the lightning current to be measured. In addition, it is necessary to develop tools for adequate and precise processing and analysis of the measured data.

If a record of the lightning current waveform is available, it is then possible, using an appropriate numerical technique, to determine different parameters associated with that specific waveform. In every measurement, however, the measuring device is affected by environmental disturbances, referred to as noise, that alter the characteristics of the output signal. The presence of noise can cause serious errors in measurement signal processing. In the case of lightning current measurement, parameter determination may in general be a difficult task, due to the fact that all measured lightning current data are contaminated by considerable levels of noise; additional processing steps must therefore be undertaken in order to minimize the effects of noise. The classification of recorded lightning current waveforms based on polarity and multiplicity is another important consideration in lightning studies. When dealing with a large amount of data from lightning monitoring systems, it is impractical to classify and analyze data manually. For such studies, it becomes necessary to develop methodologies for automated classification and extraction of waveform parameters from the recorded data.

In the recent literature on lightning research, processing techniques are insufficiently considered. Several established methods are frequently reported: the Fourier transform, the short-time Fourier transform (STFT) [14], and the wavelet transform (WT) [15,16]. In [17], a time-domain digital processing system for lightning current waveform parameter extraction is described; using this approach, a procedure for parameter extraction from negative lightning flashes with only one stroke was developed. In [18][19][20], the authors use Empirical Mode Decomposition (EMD) for discharge electric field pulse analyses, but in the recent literature this method has not been used for lightning current waveshape analyses. Therefore, this paper proposes a novel approach, based on EMD, for analyzing lightning current waveform parameters. Continuing the work in [17], the authors expanded the capabilities of the previously developed signal processing procedure by introducing a new algorithm, increasing the number of analyzed features, and including all typical types of discharges. In this study, data from a direct lightning current measurement system are analyzed using a novel signal processing and parameter estimation technique, and detailed statistics for a one-year observation period are presented. Correlations between various lightning parameters are established.
This paper includes the following contributions:

• a novel approach to lightning current waveform processing based on EMD, for more accurate automatic lightning classification and lightning parameter extraction, is introduced;
• statistical properties of lightning current parameters that are of great importance for engineering applications in the region of Mount Lovćen (Montenegro) are presented;
• empirical expressions for cumulative peak current distributions of first and subsequent strokes are determined.

The rest of the paper is organized as follows. Section 2 introduces the types of lightning discharges and lightning current parameters. A description of the observation site and the lightning monitoring system is given in Section 3. Section 4 describes the novel approach for lightning data analyses. Statistics of lightning current parameters and correlations between parameters are reported in Section 5, and the paper is concluded in Section 6.

Lightning Discharges and Lightning Parameters for Engineering Application

According to [4,21], cloud-to-ground lightning discharges are divided into four main types: upward or downward (by the direction of the motion of the initial leader) and positive or negative (by the sign of the charge deposited along the channel). The classification in [4] includes only "unipolar flashes" that transport charge of one polarity to ground. Lightning flashes that transport both negative and positive charges to ground, called "bipolar flashes", are not included in this classification. More than one lightning stroke can hit the same place on the ground in a short time interval; to identify the number of strokes in a single flash, the term multiplicity is introduced. Usually, first strokes have larger currents than subsequent strokes, which occur both in new and in previously formed channels. Further details on the lightning phenomenon can be found in [4].

According to the CIGRE publication [22], the typical lightning current waveshape shown in Figure 1 is described by a set of characteristic points and parameters. Based on these parameters, the time duration of the current front, t_f, is defined as the time interval from t_0 to t_p and is determined as shown in Figure 1. The time to half value, t_h, represents the time interval from t_0 to the 50% value point of the first peak (t_50). The energy in a lightning flash is assessed generally by its charge, Q, defined as

Q = ∫ i(t) dt,   (1)

and the specific energy E is defined as

E = ∫ i²(t) dt,   (2)

with both integrals taken over the duration of the flash.
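A minimal numerical illustration of these definitions, in Python (the paper's implementation is in MATLAB); the 10%/90% front convention and the rectangle-rule integration below are our simplifying choices, not necessarily the authors':

```python
import numpy as np

def waveform_parameters(i, fs):
    """Estimate peak current, front time, time to half value, charge Q
    (Eq. (1)) and specific energy E (Eq. (2)) from samples i at rate fs."""
    t = np.arange(len(i)) / fs
    k_peak = int(np.argmax(np.abs(i)))
    i_peak = i[k_peak]
    # front: first 10% and 90% crossings on the rising part
    k10 = int(np.argmax(np.abs(i) >= 0.1 * abs(i_peak)))
    k90 = int(np.argmax(np.abs(i) >= 0.9 * abs(i_peak)))
    t_front = t[k90] - t[k10]
    # time to half value: first 50% crossing after the peak, measured
    # from the front start (a proxy for t_0)
    k50 = k_peak + int(np.argmax(np.abs(i[k_peak:]) <= 0.5 * abs(i_peak)))
    t_half = t[k50] - t[k10]
    dt = 1.0 / fs
    charge = np.sum(i) * dt            # Q = integral of i dt (rectangle rule)
    energy = np.sum(i ** 2) * dt       # E = integral of i^2 dt
    return i_peak, t_front, t_half, charge, energy
```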
Location

Mount Lovćen, with a peak altitude of 1749 m, is located in southwestern Montenegro, near the Adriatic Sea. The geographical location of Mount Lovćen and the tower on which the measuring equipment is installed are shown in Figure 2. The lightning current measurement equipment was installed on the 88 m high broadcasting tower, one of the most important communication hubs in the region. The decision to install measurement equipment on this site was made based on previous reports and on data available from a lightning location system (LLS). LLS data have revealed that Lovćen, with 1063 strokes per square kilometer per year, has a stroke density more than 100 times the median value in this region. Another contributing factor when choosing this site was the 500 kA maximum lightning current amplitude recorded by the LLS, reported in [23].

Measuring System

The lightning measurement system was constructed from a sensor, a recording unit, a power supply unit, a central processing unit, and a user interface. The installed hardware is presented in Figure 3, while a detailed block diagram of the system is shown in Figure 4. The sensor unit, containing a current transformer, an electric field sensor, and an IP camera, was installed on the tower top. The lightning current sensor used was a current transformer with a 500 kA input range. Changes in the electric field are registered using the BOLTEK EFM-100 Atmospheric Electric Field Monitor. The IP camera (UFG1122 HD IP Camera), with 120 fps (frames per second), is equipped with an SD card and an infrared cut filter for day/night operations. The recording unit is based on an industrial computer that records data from the sensor unit. The acquisition unit is a four-channel card with an acquisition sampling rate of 8 MSa/s per channel and 15-bit vertical resolution. Accurate timing is provided by an integrated GPS receiver. A local ethernet connection is used for communication with the remote server. The recording and processing units were installed inside the broadcasting tower and are supplied from AC mains. A low-loss cable (RG218) was used to connect the output of the current transformer to the input of the acquisition unit. A voltage attenuator with a ratio of 10/1 was installed at the acquisition card input. Data are transferred in real time via the internet to the central server and stored in the integral information system. Detailed information on the system architecture is provided in [24].

Signal Processing and Parameter Estimation

Signals obtained directly from the measuring system contain a considerable amount of noise. Important lightning current parameters can be distinguished directly from the measured current shape without filtering, but it is not possible to determine their exact values. Therefore, in order to extract the values of important parameters, it is necessary to apply an appropriate signal processing technique. To improve the parameter extraction process, empirical mode decomposition was introduced for lightning current waveform denoising.

EMD Algorithm and Parameter Determination

Empirical mode decomposition is a method of breaking down a signal without leaving the time domain. This method is a powerful tool for analyzing natural signals, which are mostly non-linear and non-stationary. It is a signal processing technique based on an empirically, algorithmically defined method. EMD can adaptively decompose a complex signal into a set of complete, almost orthogonal components, known as Intrinsic Mode Functions (IMFs). EMD extracts IMFs without requiring any preliminary understanding of the nature and quantity of the IMF components in the data. The main advantage of EMD compared with the widely used wavelet-based technique is that EMD can be used to decompose a signal without specifying the basis functions in advance, and the degree of decomposition is adaptively determined in accordance with the nature of the signal to be decomposed [20]. Due to its performance, EMD has been widely used in many disciplines. EMD was first proposed by Huang et al. in [25], and this approach is used in computational neuroscience, biomedical signal processing, climate signal analysis, audio signal processing, image processing, and seismic signal and discharge electric field pulse analyses [26]. Details of the EMD algorithm and denoising principles can be found in the literature [25,[27][28][29]. This paper introduces the EMD algorithm into the analysis of lightning current waveforms for parameter determination.
This study presents the concept of EMD and its application to lightning current signal processing. Figure 5 shows the proposed EMD-based adaptive-thresholding lightning current enhancement concept. The basic steps of the proposed method are as follows:

• Step 1: Apply the EMD algorithm to the raw data (the noisy lightning current waveshape), which decomposes the input signal into IMFs.
• Step 2: Segment the IMFs into frames.
• Step 3: Classify the frames into noise-dominant and signal-dominant frames.
• Step 4: Apply adaptive thresholding.
• Step 5: Combine the denoised IMFs.
• Step 6: Determine the parameters from the enhanced signal.

The proposed lightning current signal processing and parameter determination method was implemented in MATLAB.
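A much-simplified version of Steps 1-5 can be sketched in Python with the PyEMD package (installed as EMD-signal). The frame segmentation and classification of Steps 2-3 are omitted here, and the universal hard threshold below is one common choice in the denoising literature, not necessarily the adaptive rule of this paper (whose implementation is in MATLAB).

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal

def emd_denoise(signal):
    """Decompose into IMFs, hard-threshold them, and recombine
    (Steps 1-5, simplified: no frame segmentation/classification)."""
    imfs = EMD().emd(signal)
    # noise scale estimated from the first (noise-dominated) IMF
    sigma = np.median(np.abs(imfs[0])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))   # universal threshold
    denoised = np.zeros_like(signal)
    for k, imf in enumerate(imfs):
        if k == 0:
            continue                       # discard the noisiest IMF
        if k == len(imfs) - 1:
            denoised += imf                # keep the residual trend intact
        else:
            denoised += np.where(np.abs(imf) >= thr, imf, 0.0)
    return denoised
```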
Evaluation of the Methodology

In order to evaluate the accuracy of the proposed lightning current parameter estimation procedure, several experiments were conducted. The proposed processing method was applied to a set of three types of synthetic signals. Three types of standard CIGRE concave lightning current waveforms, with parameters given in Table 1, were generated with a sampling rate of 8 MSa/s. It was assumed that the measured lightning current can be well represented by the CIGRE concave lightning current model. In addition, it was assumed that the noise observed in measured signals is additive white Gaussian noise. These assumptions are reasonable due to the fact that most recorded lightning strokes are very similar to the assumed CIGRE model [12,13]. A large number of synthetic signals was generated in Monte Carlo simulations used to evaluate the performance of the proposed method. For each type of standard lightning current waveshape in Table 1, 1000 synthetic signals with a signal-to-noise ratio (SNR) in the range of 0 to 25 dB were generated. These signals were then processed using the proposed method described in Section 4.1; enhanced signals were obtained and subjected to the classification and parameter estimation algorithms. The differences between the parameters estimated from the enhanced signals and the original signal parameters (Table 1) are listed in Table 2 and were used as the criterion for performance evaluation.

From Table 2, it is clear that the accuracy of some estimated parameters (peak current, tail time, total charge, and specific energy) is almost independent of noise, while parameters such as front time and steepness are very sensitive to the noise level. Estimated peak current values are almost constant over the entire range of simulated SNR. For peak currents greater than 10 kA, the average relative error was ±2.44% (from 0.81% to 6.65%, with a relative standard deviation below 7%). For lower peak currents (below 10 kA), the average estimation error is slightly higher (±8.53%, with the greatest error of 25.63% at 0 dB); for SNR greater than 5 dB, the average estimation error was ±5.11%. These results suggest that the proposed procedure can estimate peak current values with high accuracy over a wide range of SNR (from 5 dB to 25 dB). As expected, for low current amplitudes and for SNR below 5 dB, the accuracy decreases. As can be seen from Equations (1) and (2), the total charge and specific energy are functions of the lightning current, and due to this fact these parameters are also estimated with high accuracy within the entire region of simulated SNR, with average relative errors of ±2.84% and ±0.77%, respectively. The noise level does not significantly affect these parameters, since integration, in principle, represents a low-pass filter.

Estimation of the time duration and steepness parameters, however, is in general more variable and more sensitive to SNR. The tail time duration is estimated with an average relative error of ±3.55%. The waveform parameters front time and steepness, in the investigated range of values and noise levels, are estimated with higher average relative errors of 34.99% and 16.71%, respectively. The estimation of these parameters is significantly affected by the noise level. For fast-rising currents (t_f around 1 µs) in the extreme case (at SNR = 0 dB), estimation errors may be up to 200%, and in this case the estimation is not reliable. However, in the more common range of SNR values (>10 dB), the average relative error for the front time is ±15.04%, while for the steepness it is ±7.32%. Considering the standard tolerances given in [30,31] (±30% for front time and ±20% for tail time), the obtained results are within the acceptable range.

Results of Observation

During the observation period of one year, 163 lightning events were recorded. Using the developed approach for automated data analysis, different types of lightning discharges were identified. The total number of lightning flashes was 64. The analyzed events were classified as given in Table 3. Detailed statistics were performed only for negative strokes, due to the representative sample size.

Statistical Distribution of Lightning Parameters

It is generally agreed that the statistical distribution of lightning parameters follows the log-normal distribution, where the statistical variation of the logarithm of a random variable, x, follows the Gaussian distribution. The log-normal probability density function, p(x), is defined as in [13]:

p(x) = 1 / (√(2π) · σ_lnx · x) · exp(−(ln x − ln x_m)² / (2 σ_lnx²)),   (3)

where σ_lnx is the standard deviation of ln x, and x_m is the median value of x. Therefore, x_m and σ_lnx need to be known to estimate the statistical distribution of a lightning parameter. The cumulative probability, P_c(x), that the parameter will exceed x, is given by integrating Equation (3) between u_0 and ∞, resulting in:

P_c(x) = 1/√(2π) · ∫ from u_0 to ∞ exp(−u²/2) du,  with u_0 = (ln x − ln x_m)/σ_lnx.   (4)

For approximating the log-normal distribution P_c of the lightning current peak, a simplified equation given by Anderson in [22,32] is also used:

P_c(x) = 1 / (1 + (x/µ)^ρ),   (5)

where µ and ρ are calculated from empirical data. Various correlations among lightning parameters have been found [13]. Assuming log-normally distributed random variables x and y, the relationship between x and y can be expressed as:

y = a · x^d.   (6)

Negative Flashes

As can be noted from Table 3, 59 first negative strokes and 86 subsequent negative strokes were analyzed. For the purpose of this analysis, the novel proposed processing method was used. The statistical distribution of multicomponent lightning flashes recorded in this study, compared with that of Anderson and Eriksson (in [22]), is given in Figure 6. The frequency of occurrence of multicomponent flashes in this region is very similar to that of Anderson and Eriksson, which is widely accepted. Classification, analysis, and parameter determination are more challenging tasks for lightning flashes that consist of more than one stroke than for lightning flashes with a single stroke. Therefore, as an example, a multicomponent lightning flash consisting of four negative strokes is presented in Figure 7, which shows the originally measured signal and the enhanced signal. The determined parameters that are important for engineering applications are presented in Table 4. Statistical parameters resulting from the measurements of this study during the one-year observation period are given in Tables 5 and 6. First and subsequent negative currents were considered.
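Equations (3)-(5) are straightforward to evaluate numerically. A sketch follows; the peak current sample is fabricated, and µ = 31 kA with ρ = 2.6 in the Anderson form are the widely quoted global first-stroke values, not this paper's fitted constants.

```python
import numpy as np
from scipy.special import erfc

def lognormal_pc(x, x_m, sigma_ln):
    """P(parameter > x) for a log-normal with median x_m, Eqs. (3)-(4)."""
    u0 = np.log(x / x_m) / sigma_ln
    return 0.5 * erfc(u0 / np.sqrt(2.0))

def anderson_pc(i_peak, mu=31.0, rho=2.6):
    """Anderson's simplified form, Eq. (5)."""
    return 1.0 / (1.0 + (i_peak / mu) ** rho)

peaks_ka = np.array([12.0, 18.0, 25.0, 31.0, 40.0, 55.0])   # fabricated sample
x_m = np.exp(np.mean(np.log(peaks_ka)))       # median of the log-normal fit
sigma_ln = np.std(np.log(peaks_ka), ddof=1)
print(lognormal_pc(31.0, x_m, sigma_ln), anderson_pc(31.0))  # latter = 0.5
```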
After the log transformation, the Lilliefors test for normality, at the 95% level of significance, was applied to the complete data set. It proved significant for most parameters of first negative strokes, similarly to the results presented by Anderson and Eriksson in [22,33]. Cumulative statistical distributions of various parameters for the first and subsequent strokes are presented in the figures below (from Figure 8), as well as probability plots for the log-normal distribution. From the figures, it can be seen that the measured data for the first and subsequent negative strokes are in good agreement with the theoretical cumulative distribution function. As indicated by the p-values in Table 5, at a significance level of 95%, it can be concluded that most of the analyzed parameters of the first negative strokes are distributed according to the log-normal distribution. The total charge and maximum steepness for first negative strokes, as well as most parameters of the subsequent strokes, are similarly distributed but failed the Lilliefors test; therefore, the log-normal distribution of these parameters cannot be confirmed. Important formulas for the cumulative probability as a function of the peak current for the first and subsequent strokes were derived from these results; the approximate expressions for the cumulative probability of first and subsequent negative stroke currents are given by Equations (7) and (8). It is well known that these expressions have direct application in the assessment of the lightning performance of electrical systems, especially in insulation coordination studies. Therefore, it is of great importance to develop such formulas for different regions worldwide.

Correlations between Parameters for First Strokes in Negative Flashes

Correlations between various parameters of the recorded lightning current waveshapes were considered by using fitting curves given by Equation (6), which can also be represented as:

ln(y) = ln(a) + d · ln(x).   (9)

Correlation coefficients among the parameters are given in Table 7. The results indicate that correlations are observed between almost all parameters; the few correlations that are not significant are marked in the tables. Based on the p-values in Table 8, the statistical significance of the correlations is confirmed. The strongest correlations were observed between the peak current and all other parameters except the tail time. Table 9 presents the correlation expressions (described by the function given in (9)), along with the correlation coefficients r for such parameters, following logarithmic linear regression. Similarly to what is published in the literature, a correlation between peak current and front time was observed in this study. Additionally, a similar correlation exists between the peak current and the maximum steepness (see Figure 9), with correlation coefficients similar to those in the available literature. As expected, a very strong correlation was observed between the peak current and the total charge and specific energy. An interesting observation is that both the front and tail time are negatively correlated with steepness (see Figure 10).
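The log-linear fits behind Table 9 reduce to ordinary least squares on log-transformed data. A sketch, with made-up values standing in for, e.g., peak current versus maximum steepness:

```python
import numpy as np

x = np.array([10.0, 15.0, 22.0, 30.0, 45.0])   # e.g., peak current (kA)
y = np.array([8.0, 11.0, 14.0, 18.0, 26.0])    # e.g., steepness (kA/us)

d, ln_a = np.polyfit(np.log(x), np.log(y), 1)  # fit ln(y) = ln(a) + d*ln(x)
r = np.corrcoef(np.log(x), np.log(y))[0, 1]    # correlation coefficient
print(f"y = {np.exp(ln_a):.2f} * x^{d:.2f}, r = {r:.3f}")
```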
Bipolar Flashes

During the one-year observation period, one bipolar lightning flash was recorded, on 26 February 2016; it is presented in Figure 12. The parameters for the bipolar flash are given in Table 11. All parameters were determined for the positive and negative parts of the lightning current waveshape.

Conclusions

This paper presents the statistics of lightning current parameters obtained by processing the data collected by direct measurement at the broadcasting tower on Mount Lovćen. The analyzed data were collected during a one-year observation period. A new EMD-based approach for more accurate lightning classification and lightning parameter extraction was applied to the data analysis. The introduction of the EMD algorithm significantly improved the accuracy of determining lightning current parameters compared to the methods previously used by the authors. Unlike conventional filters, the algorithm in the proposed scheme almost eliminates the phase shift of the signal. Based on the cumulative distributions of the peak current of the first and subsequent strokes, formulas for determining the probability of occurrence of the peak current, as well as expressions relating the peak current to other important parameters, were generated. These expressions can be used directly in power system analyses. The statistical data for this region showed that most of the parameters for the first negative stroke are distributed according to the log-normal distribution and are very similar to their representation in the contemporary literature. This study also confirmed that most recorded events are negative lightning strokes, while positive and bipolar lightning strokes are rare. However, positive and bipolar lightning flashes are very dangerous, especially bipolar flashes (which transfer a large amount of energy to the ground), and should be taken into account when designing lightning protection. Many uncertainties regarding lightning events still exist, and therefore better methods for lightning current measurement and waveshape analysis should be developed. Efforts should continue to collect data for the formulation of lightning parameters according to geographical region, to develop formulas important for power system lightning protection, and to derive correlation expressions among lightning parameters. For the signal processing methodologies used, it was shown that the accuracy of determining the front time and steepness should be improved. For this improvement, better acquisition units should be installed in the measurement system, for example with a much higher sampling rate (50 MSa/s or 100 MSa/s), in order to better record the front time, which has a very short duration (from 1 to several microseconds). In the future, the dynamic characteristics of the measurement system should be taken into account when analyzing the signal-to-noise ratio, in order to further improve the signal processing.

Data Availability Statement: The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest: The authors declare no conflict of interest.
Fast periodic visual stimulation to highlight the relationship between human intracerebral recordings and scalp electroencephalography

Abstract Despite being of primary importance for fundamental research and clinical studies, the relationship between local neural population activity and scalp electroencephalography (EEG) in humans remains largely unknown. Here we report simultaneous scalp and intracerebral EEG responses to face stimuli in a unique epileptic patient implanted with 27 intracerebral recording contacts in the right occipitotemporal cortex. The patient was shown images of faces appearing at a frequency of 6 Hz, which elicits neural responses at this exact frequency. Response quantification at this frequency made it possible to objectively relate the neural activity measured inside and outside the brain. The patient exhibited typical 6 Hz responses on the scalp at the right occipitotemporal sites. Moreover, there was a clear spatial correspondence between these scalp responses and intracerebral signals in the right lateral inferior occipital gyrus, both in amplitude and in phase. Nevertheless, the signal measured on the scalp and inside the brain at nearby locations showed a 10-fold difference in amplitude due to electrical insulation from the head. To further quantify the relationship between the scalp and intracerebral recordings, we used an approach correlating time-varying signals at the stimulation frequency across scalp and intracerebral channels. This analysis revealed a focused and right-lateralized correspondence between the scalp and intracerebral recordings that was specific to the face stimulation, whereas the correspondence was more broadly distributed in various control situations. These results demonstrate the interest of a frequency-tagging approach for characterizing the electrical propagation from brain sources to scalp EEG sensors and for identifying the cortical sources of brain functions from these recordings.

| INTRODUCTION

Since its first report and validation in humans (Adrian & Matthews, 1934), scalp electroencephalography (EEG) has been widely used to study dynamic neurofunctional processes and their pathology in large-scale brain networks (Lopes da Silva, 2013; Nunez & Srinivasan, 2005; Regan, 1989). Given that EEG noninvasively provides information about the unfolding of brain processes at millisecond time resolution, understanding the relationship between scalp EEG signals and their source(s) at the cortical level is important for fundamental research. It is also of primary importance for clinical studies, in particular for the neurological study of epileptic patients, in order to define and localize the brain sources of epileptic seizures (Coito et al., 2019; Gavaret, Badier, Marquis, Bartolomei, & Chauvel, 2004; Koessler et al., 2010). Unfortunately, knowledge about the relationship between scalp EEG signals and their cortical source(s) remains severely limited, for several reasons. First, scalp EEG signals are attenuated by the electrical resistance of head tissues, which remains unknown in humans in vivo (especially the skull resistivity) and is very difficult to estimate noninvasively during in vivo measurements (Goncalves et al., 2003; Koessler et al., 2017; Malmivuo & Suihko, 2004).
Second, the distance from brain sources to scalp sensors reduces the amplitude of the EEG signal, making it difficult to capture and estimate brain sources at the deepest portions of sulci or in medial brain structures (Koessler et al., 2015; Seeber, Cantonas, Sesia, Visser-Vandewalle, & Michel, 2019; see also Pizzo et al., 2019 in MEG). Third, and perhaps most importantly, many brain sources (often co-activated with interlocked time-courses) contribute to the EEG recording. Electrical signals generated by these co-activated sources are mixed when measured on the scalp with EEG sensors, making it difficult to assign a specific source to a specific EEG signal characteristic (a nonlinear relationship; Kovach, Oya, & Kawasaki, 2018) and requiring the solution of an underdetermined inverse problem (Grech et al., 2008; Kaiboriboon, Luders, Hamaneh, Turnbull, & Lhatoo, 2012; Michel et al., 2004). The few studies which investigated the relationship between simultaneously recorded scalp EEG and SEEG signals relied on event-related potential (ERP) approaches, in which brain activity is recorded at the sudden occurrence of an event (external, or internal such as epileptic spikes) and then averaged in the time domain (Dubarry et al., 2014; Jacques et al., 2019; Koessler et al., 2015; Merlet et al., 1998; Rosburg et al., 2010). At least two main factors make it difficult to perform these studies and therefore seriously limit their availability. First, relating scalp to intracerebral EEG requires a high signal-to-noise ratio (SNR) on the scalp, which typically results in long-duration experiments to collect data from a large number of events (exogenous or endogenous; Luck, 2014). SNR is particularly an issue in these clinical settings, where recordings can take place over several days without the possibility to fix or replace noisy scalp electrodes. Second, the amplitude and shape of evoked responses in the time domain are difficult to relate across scalp and intracerebral EEG because of the diversity and the unknown spatiotemporal dynamics of the activated sources within the brain (Lopes da Silva, 2019). One approach to overcome these difficulties would be to use a stimulus presentation technique that provides high SNR and allows the signal to be compared across recordings to be characterized more objectively. A potentially powerful approach that fits these criteria is the frequency-tagging approach, where a stimulus is presented at a (relatively fast) fixed frequency rate, for instance a flickering light, eliciting a neural response exactly at this frequency rate, which can therefore be objectively tracked and quantified in simultaneously recorded scalp EEG and SEEG signals. This approach was discovered shortly after the first descriptions of EEG recordings in humans (Adrian & Matthews, 1934), that is, well before the first ERP recordings (Dawson, 1951; see Regan, 1972). It was already considered at the time as offering a powerful means to understand the nature and the source(s) of EEG recordings, as stated by Adrian (1944): "All the messages which reach the cortex will produce their own electrical accompaniment, and this can be recorded well enough if electrodes can be placed on the surface of the brain. But if we can get no nearer than the scalp, the potential changes generated in any group of nerve cells will usually be obscured by those of other groups nearby, and the record will then show us nothing… Fortunately this difficulty can be overcome, in part at least, by making all the cells work in unison.
This can be done, as far as vision is concerned, by making the field more or less uniform and lighting it with a flickering light. The nerve cells are then forced to work in unison at the frequency of the flicker, and we can record their electrical activity through the skull up to frequencies of about 30 a second. This gives us a method of tracing the visual messages in the brain, for by means of the flicker rhythm they can be made easy to recognize" (Adrian, 1944, p. 361). Thanks to the subsequent application of Fourier analysis to EEG recordings (Regan, 1966), these "frequency-tagged" neural responses can be investigated in the frequency domain in various sensory modalities in neurotypical adults, but also in developmental and clinical populations (Regan, 1989; see Norcia, Appelbaum, Ales, Cottereau, & Rossion, 2015 for a recent review in vision research). The main advantages of this approach are its objectivity (i.e., responses are identified at a frequency known by the experimenter) and high sensitivity (i.e., high SNR) (Norcia et al., 2015; Regan, 1989; Rossion, 2014). Moreover, by carefully manipulating the nature of the stimulus property that is periodically modulated, sensory processes, but also higher-level brain processes such as face or word categorization (e.g., Lochy, Van Belle, & Rossion, 2015; Rossion & Boremanse, 2011), can be selectively tracked in all modalities. Despite these advantages, to our knowledge, the frequency-tagging approach has never been applied to simultaneous EEG and intracerebral recordings in order to shed light on the relationship between the two types of signals.1 Here we report simultaneous frequency-tagged scalp and intracerebral EEG responses in a unique epileptic patient implanted with three intracerebral electrodes (27 recording contacts) in the right occipitotemporal (OT) cortex and simultaneously equipped with 27 scalp electrodes. Thanks to a fast periodic (6 Hz) visual stimulation with highly salient stimuli (faces), we objectively relate the quantified face-evoked responses observed inside and outside the brain. Specifically, we address the following questions: (a) How do objectively related signals recorded simultaneously inside the brain and on the scalp differ in terms of amplitude and SNR? and (b) Can frequency tagging significantly improve the precise identification of the sources of activity recorded on the scalp?

| MATERIALS AND METHODS

The epileptic patient, as well as the recording settings, are identical to those reported in Jacques et al. (2019). The patient has also been described in Jonas et al. (2014). Therefore, a shortened version of the methods is reported here.

| Case description

KV is a right-handed female suffering from refractory occipital epilepsy related to a focal cortical dysplasia involving the right lingual gyrus and posterior collateral sulcus. The patient was 32 years old at the time of testing. Her case was also reported as evidence of strong face identity repetition suppression effects in the lateral cortex of the right IOG using fast periodic visual stimulation (FPVS) with unfamiliar faces (Jonas et al., 2014).

| Simultaneous intracerebral-scalp EEG recordings

The patient underwent simultaneous intracerebral and scalp EEG recordings. The co-locations of these electrodes are shown in Figure 1.
A notable feature of the electrode placement is the relatively dense spatial coverage of the occipitotemporal cortex with both intracerebral (27 intracerebral contacts) and surface electrodes (including sO1, sOz, sO2, sPO7, sPO8, sP9, sP10, sP5, sP6).

| Intracerebral electrodes

The patient was stereotactically implanted with three intracerebral multicontact electrodes targeting the right ventral OT cortex, according to a well-defined and previously described procedure (Jonas et al., 2016; Salado et al., 2018). Each intracerebral electrode consists of a cylinder of 0.8 mm diameter and contains 8-11 independent recording contacts of 2 mm in length, separated by 1.5 mm from edge to edge and by 3.5 mm center to center (DIXI Medical, Besançon, France). Electrodes D and L (eight recording contacts each: D1-D8 and L1-L8) sampled the right inferior occipital gyrus and posterior collateral sulcus. Electrode F (11 contacts, F1-F11) was more anterior and went from the right inferior temporal gyrus to the lingual gyrus. All intracerebral contacts except D8 were in direct contact with the gray matter. The recording surface of contact D8 was located ~2 mm from the cortical surface, likely within the meninges (Figure 1c,d).

| Scalp electrodes

Simultaneous scalp EEG recordings were acquired with 28 Ag/AgCl electrodes of 10 mm diameter placed according to the 10-20 system.

| Recordings

Simultaneous SEEG-scalp EEG signals were recorded at a 1,024 Hz sampling rate with a 128-channel amplifier (SD LTM 128 Headbox; Micromed, Italy). The reference electrode was a prefrontal midline scalp electrode (sFPz). The recording of the periodic visual stimulation experiment reported here was performed 2 days after the scalp electrode placement. Due to the small diameter of the skull defect at the penetration points of the intracerebral electrodes (1.2 mm) and the low electrical conductivity of the guidance screw (titanium), no leakage of current was observed in the scalp EEG recordings.

| Rationale

The main aspects of the procedure for this experiment have been previously described in three different studies comparing the presentation of trains of different faces to identical faces at a fixed frequency rate (Jonas et al., 2014; Rossion & Boremanse, 2011; Rossion, Prieto, Boremanse, Kuefner, & Van Belle, 2012). From a methodological perspective, this FPVS approach, which leads to so-called steady-state visual evoked potentials (SSVEPs; Regan, 1966, 1989), has multiple advantages: objectivity of definition and quantification of the response of interest, high SNR, short duration of the experiment, and recording of the response of interest during a simple incidental task (Regan, 1989; Rossion, 2014), making it a tool of choice for the study of patients implanted with intracerebral electrodes. Here, faces were presented at a 6 Hz rate because this frequency provides the largest repetition suppression effect on the scalp over the right OT cortex (Alonso-Prieto, Van Belle, Liu-Shuang, Norcia, & Rossion, 2013), as well as in face-selective areas of the right inferior occipital gyrus and the middle section of the lateral fusiform gyrus (Gentile & Rossion, 2014).

| Stimuli

Full-front color face pictures of 18 unfamiliar individuals (7° × 10° of visual angle for the base face size), equalized for global luminance, were used.
These face stimuli were the same as used in previous studies (Alonso-Prieto et al., 2013; Rossion & Boremanse, 2011) and were taken from a well-known set of laser-scanned (Cyberware TM) human heads from the Tübingen Max Planck Institute (MPI) database. They were cropped to remove external features (hair and ears) but their overall shape was preserved.

| Procedure

In each condition, a face stimulus appeared and disappeared (sinusoidal contrast modulation) on the screen at a stimulation rate of six faces per second (one face every 166.66 ms; Figure 2; a schematic sketch of this stimulation scheme is given at the end of this section). A trigger was sent to the parallel port of the EEG recording computer at each minimal level of visual stimulation (gray background), using a photodiode placed on the upper left corner of a laptop monitor.

[Figure 1 caption, continued: ... intracerebral contacts in the right OT cortex. Electrode L was slightly superior to the D and F electrodes. All intracerebral contacts except D8 were in direct contact with the gray matter. Contact D8 was located ~2 mm from the cortical surface. (d) 3D ventral views of the posterior half of the right hemisphere of the patient, showing the anatomical location of the intracerebral contacts and scalp electrodes. The plots show the gray matter cortical surface (left) and the corresponding white matter surface (right; the gray matter surface is represented as a dotted gray outline). Since intracerebral contacts penetrate the brain tissue, contacts are only visible when stripping away the gray matter and keeping only the white matter surface. Acronyms: IOG: inferior occipital gyrus; OTS: occipitotemporal sulcus; (p)CoS: (posterior) collateral sulcus; FG: fusiform gyrus; LG: lingual gyrus.]

In the same face condition, a randomly selected face picture was presented repeatedly for the whole stimulation duration (70 s). In the different faces condition, the sequence started with the repeated presentation of a randomly selected face picture for the first 15 s, after which the face identity changed at every cycle for the remainder of the sequence (i.e., from 16 to 70 s; see Rossion et al., 2012). In this different faces condition, 18 individual faces of the same sex were used and presented in random order. The same face identity never appeared twice in a row, so that the face identity change rate was always 6 Hz. To minimize repetition suppression effects due to low-level visual cues, the face stimulus changed substantially in size at each presentation, that is, at a rate of 6 Hz, in all conditions (random face size between 82 and 118% of the base face size). The experiment consisted of four sequences of 70 s: each condition (same face or different faces) was repeated two times (face gender: male or female). The order of conditions was randomized. During each 70 s run, the patient was instructed to fixate a small black cross located centrally on the face, slightly below the bridge of the nose. The fixation cross briefly (200 ms) changed color (black to red) 6 to 8 times during each run, and the patient was instructed to report the color changes by pressing a response key.
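As announced above, the stimulation scheme can be summarized in a short sketch. This is only a schematic reconstruction: the monitor refresh rate and rendering details are not specified in the text and are assumptions here.

```python
import numpy as np

RATE_HZ = 6.0       # stimulation frequency: six faces per second
REFRESH_HZ = 60.0   # assumed monitor refresh rate (not stated in the text)

def contrast_envelope(n_seconds):
    """Sinusoidal contrast modulation: 0 (gray background) -> 1 (full contrast) -> 0,
    one cycle every 1/6 s (166.66 ms)."""
    t = np.arange(0, n_seconds, 1.0 / REFRESH_HZ)
    return t, 0.5 * (1.0 - np.cos(2.0 * np.pi * RATE_HZ * t))

def size_factors(n_cycles, rng=np.random.default_rng(0)):
    """Random face size at each cycle, 82%-118% of the base size."""
    return rng.uniform(0.82, 1.18, size=n_cycles)

t, c = contrast_envelope(70.0)            # one 70 s sequence
sizes = size_factors(int(70 * RATE_HZ))   # 420 stimulation cycles
```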
| Frequency domain analyses

All analyses were performed using Letswave 5 (Mouraux & Iannetti, 2008) and MATLAB v7.8 (The Mathworks, Inc.). Segments of 50 s of recording during visual stimulation (i.e., 300 face stimulation cycles at 6.0 Hz), from 17 to 67 s, were considered for analysis. These segments were cropped to contain an exact integer number of 6 Hz cycles. Segments were averaged in the time domain separately for each condition, and a Fast Fourier Transform (FFT) was applied to these averaged segments to compute the amplitude and phase spectra at a high spectral resolution of 1/50 = 0.02 Hz. SNR was computed from the amplitude spectra as the ratio between the amplitude at each frequency bin and the average amplitude of the corresponding 20 neighboring bins (up to 11 bins on each side, i.e., 22 bins, but excluding the 2 bins directly adjacent to the bin of interest, i.e., 20 bins; e.g., Rossion et al., 2012). Significant responses above noise level at the stimulation frequency at each channel were defined by computing Z-scores on the amplitude spectra, using the mean and SD of the 20 neighboring bins around the frequency of interest (e.g., Liu-Shuang, Norcia, & Rossion, 2014). Statistical comparisons between conditions were similarly made by computing Z-scores on the amplitude spectra obtained by subtracting the spectra measured in the same face condition from the spectra measured in the different faces condition.
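A compact sketch of this frequency-domain quantification is given below: FFT of a 50 s segment at 0.02 Hz resolution, SNR as the ratio of the target bin to the mean of the 20 neighbouring bins, and a Z-score from the same neighbourhood. The bin selection follows the description above; variable names and the synthetic test signal are ours.

```python
import numpy as np

FS = 1024.0   # sampling rate (Hz)
DUR = 50.0    # segment duration (s), giving 1/50 = 0.02 Hz resolution

def quantify_at(signal, target_hz=6.0):
    """Amplitude, SNR and Z-score at target_hz from a 50 s (s)EEG segment.
    Noise estimate: 20 neighbouring bins (11 per side minus the 2 adjacent bins)."""
    amp = np.abs(np.fft.rfft(signal)) * 2.0 / len(signal)   # single-sided amplitude
    k = int(round(target_hz * DUR))                         # index of the target bin
    noise = np.concatenate([amp[k - 11 : k - 1],            # bins k-11 .. k-2
                            amp[k + 2 : k + 12]])           # bins k+2 .. k+11
    snr = amp[k] / noise.mean()
    z = (amp[k] - noise.mean()) / noise.std(ddof=1)
    return amp[k], snr, z

# Example on synthetic data: a 0.5 uV 6 Hz component buried in noise
t = np.arange(0, DUR, 1.0 / FS)
eeg = 0.5 * np.sin(2 * np.pi * 6.0 * t) + np.random.randn(t.size)
print(quantify_at(eeg))
```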
| Time domain analyses

For this and the following analyses, we only used data from the different faces condition, which generated the largest responses in both scalp and intracerebral recordings. Each recording sequence, from 17 to 67 s relative to sequence onset, was divided into epochs of 1 s duration centered on the appearance of a face. Epochs containing blinks were rejected; the remaining epochs were averaged and the mean amplitude was centered on zero (dc correction).

| Correlation between intracerebral and scalp EEG signals

We examined the relationship between the visually-driven signal recorded at intracerebral contacts and at scalp electrodes by correlating the variations of the 6 Hz response amplitude over time across all scalp electrodes and intracerebral contacts (Figure 3; a schematic implementation is sketched at the end of this section). We first applied a Morlet wavelet transform to the raw signal to compute the time-varying amplitude envelope of the electrophysiological signal around 6 Hz. The parameters of the mother wavelet (central frequency: 6 Hz; full width at half maximum [FWHM] in the frequency domain = 0.8 Hz; FWHM in the time domain = 0.47 s) were chosen to provide a relatively high frequency resolution while preserving the dynamics of the amplitude variation over time. We kept the amplitude envelope from 15 to 69 s of each recording sequence and divided the envelope from each sequence into 9 segments of 6 s, resulting in 18 segments in total (2 sequences with 9 segments). Then, for each intracerebral contact, the signal in each segment was correlated (Pearson's coefficient) with the corresponding segment in each scalp electrode, and the correlations were averaged across the 18 segments, resulting in a 27 (intracerebral) × 27 (scalp) correlation matrix. We also computed the across-segments SD of the correlation coefficients.

[Figure 2 caption: Fast periodic visual stimulation procedure and experimental design. (a) Faces were presented in sequences of 70 s using a sinusoidal contrast modulation at a rate of 6 Hz. Here, the "different faces" condition is shown, with the face of a different individual presented at full contrast every 0.167 s (1/6 s). The size of the faces changed at every cycle. (b) The two conditions used in the study, in which either a different face was presented at every cycle throughout the duration of the FPVS sequence (top) or the same face was repeated for the whole sequence (bottom).]

We determined whether correlations were significantly different from zero using a randomization procedure in which, for each electrode, we randomly shuffled (5,000 times) the order of the 6 s segments prior to computing correlations and averaging the correlations across segments. For each intracerebral contact correlated with all scalp electrodes, we determined significance using a cluster-based correction (cluster-mass) for multiple comparisons (Maris & Oostenveld, 2007; Pernet, Latinus, Nichols, & Rousselet, 2015) with a cluster-forming threshold of p < .05. Note that the wavelet analysis disregards phase information, so that only coordinated temporal variations of amplitude across channels can result in meaningful correlation coefficients.

[Figure 3 caption: Procedure for correlating scalp and intracerebral 6 Hz signals during periodic face stimulation. (1) We apply a wavelet transform to the raw signal (top: 18 s of recording at two example intracerebral contacts, D6 and D8, and three scalp electrodes, sPO8, sO2, sFz) to extract the variation of signal amplitude over time at the 6 Hz stimulation frequency (bottom: raw (s)EEG signal band-pass filtered from 5.9 to 6.1 Hz for illustration, and the 6 Hz wavelet amplitude envelope). This amplitude envelope is divided into segments of 6 s duration. (2) The signal in each segment is Pearson-correlated across all channels. Here we show across-channels correlation matrices for the three segments displayed above. (3) Correlation coefficients computed for different segments are averaged and (4) correlations between individual intracerebral contacts and all scalp electrodes are visualized as scalp topographies. Last, statistics are performed to isolate significant correlations between intracerebral and scalp signals.]

We also used further benchmark tests for the control situations to evaluate the effect of notch filtering and of using a different analysis frequency on the correlation patterns. We reasoned that, if the difference in the correlation patterns obtained in the Ori@6 Hz versus the control situations were due to the notch filtering or to the use of a different analysis frequency rather than to FPVS, then we should observe similar differences when applying notch filtering or using a different frequency on signal which does not contain a visually-driven periodic response. The benchmark situations to evaluate the effect of notch filtering on the patterns of correlations were the following: (a) NotchOri@4 Hz: correlations using the amplitude envelope at 4 Hz when the original signal has been notch-filtered at 4 Hz ([3.9-4.1] Hz); (b) NotchRest@6 Hz: correlations using the amplitude envelope at 6 Hz when the signal during rest has been notch-filtered at 6 Hz ([5.9-6.1] Hz). The benchmark situation to evaluate the effect of using the amplitude envelope at a different frequency was the following: (c) Rest@4 Hz: correlations using the amplitude envelope at 4 Hz from the signal recorded during rest. The patterns of correlations in the original, control, and benchmark situations were summarized and statistically compared by computing an index of right-hemispheric lateralization: we subtracted the correlations averaged over left-hemispheric OT scalp electrodes from the correlations averaged over the corresponding electrodes in the right hemisphere (sO2, sPO8, sP4, and sP10). These indices were compared against zero and against each other using a permutation test (10,000 permutations).
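The correlation pipeline of Figure 3 can be sketched as follows. The wavelet is built from the stated time-domain FWHM (0.47 s); the exact wavelet construction used in Letswave may differ, so this is our approximation.

```python
import numpy as np

FS = 1024.0  # sampling rate (Hz)

def morlet_envelope(x, f0=6.0, fwhm_t=0.47):
    """Time-varying amplitude at f0 via convolution with a complex Morlet wavelet.
    fwhm_t: FWHM of the Gaussian envelope in the time domain (s), as in the text."""
    sigma_t = fwhm_t / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> Gaussian SD
    t = np.arange(-3 * fwhm_t, 3 * fwhm_t, 1.0 / FS)
    wavelet = np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.abs(wavelet).sum()
    return np.abs(np.convolve(x, wavelet, mode="same"))

def scalp_seeg_correlations(seeg, scalp, seg_s=6.0):
    """seeg: (n_contacts, n_samples); scalp: (n_electrodes, n_samples).
    Returns Pearson correlations of the 6 Hz envelopes, computed per 6 s segment
    and averaged across segments, shape (n_contacts, n_electrodes)."""
    env_i = np.array([morlet_envelope(ch) for ch in seeg])
    env_s = np.array([morlet_envelope(ch) for ch in scalp])
    seg_len = int(seg_s * FS)
    n_seg = env_i.shape[1] // seg_len
    corr = np.zeros((len(seeg), len(scalp)))
    for s in range(n_seg):
        sl = slice(s * seg_len, (s + 1) * seg_len)
        for i in range(len(seeg)):
            for j in range(len(scalp)):
                corr[i, j] += np.corrcoef(env_i[i, sl], env_s[j, sl])[0, 1]
    return corr / n_seg
```

The significance test described above would then shuffle the segment order 5,000 times before averaging, and apply a cluster-mass correction across scalp electrodes; those steps are omitted here for brevity.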
3.2 | Corresponding spatial location of maximal response amplitude to faces between scalp and intracerebral EEG recordings

First, as shown in Figure 5b, the scalp electrodes displaying the strongest scalp EEG responses were those closest to the intracerebral contacts in the lateral IOG (sPO8, sO2, sP6, sP4 scalp electrodes; Euclidean distance to D8 = 27, 26, 32, and 43 mm, respectively; Euclidean distance to L8 = 28, 35, 33, and 46 mm, respectively). Second, the adaptation effect in the intracerebral recordings was maximal at two separate cortical locations: around the lateral IOG (D5, D7, D8, L7, and L8 contacts) and more anteriorly in the OTS above the lateral fusiform gyrus (F7, F8 contacts). Interestingly, this pattern of intracerebral response was associated with a maximal adaptation effect on the scalp over slightly more anterior electrodes (e.g., sP10) compared to the response in the different faces condition: scalp electrode sPO8 was closest to the intracerebral contacts in the lateral IOG, and scalp electrode sP10 was closest to the intracerebral contacts in the lateral fusiform gyrus (Euclidean distances from sP10 = 35 mm to F8, 45 mm to L8, and 51 mm to D8).

| Strong amplitude and SNR attenuation in scalp compared to intracerebral EEG

While we observed a spatial correspondence of the largest responses in scalp and intracerebral recordings, the amplitude measured on the scalp was strongly attenuated relative to the intracerebral signal (Figure 5a, compare top and bottom plots). Indeed, relative to intracerebral contacts D8 and L8, where the amplitude at 6 Hz in the different faces condition was 6.3 and 5.2 μV respectively, the amplitudes at the closest scalp electrodes sO2 and sPO8 were 0.67 and 0.63 μV respectively, which is between 7.7 and 9.9 times smaller than the corresponding intracerebral signal. Interestingly, the SNR at the stimulation frequency (i.e., 6 Hz) was less attenuated than the absolute signal amplitude when comparing scalp to intracerebral recordings.

[Figure 5 caption: Intracerebral and scalp responses to faces in FPVS. (a) Amplitude spectrum (5-7 Hz) measured during FPVS at 6 Hz in the "different faces" condition at the three most external intracerebral contacts of electrodes F, D, and L (top), and at the three right OT scalp electrodes closest to the intracerebral contacts shown above (bottom). The plots are displayed at the same amplitude scale to visualize the difference in amplitude between intracerebral and scalp recordings of the same visual response. (b) Ventral view of the posterior half of patient KV's right hemisphere (white matter surface; the gray matter surface is shown as a dotted gray outline) together with intracerebral contacts (small circles) and selected surrounding scalp electrodes (large circles). Channels are colored as a function of the amplitude of the response at 6 Hz in the "different faces" condition. Note the difference in the color scale used for scalp and intracerebral data. (c) Same convention as for panel (b) but representing the effect of identity adaptation.]

The SNR was between 2 and 3.5 times lower in scalp (SNR = 8.3 and 8.1 for sO2 and sPO8, respectively) compared to intracerebral (SNR = 29 and 16.7 for D8 and L8, respectively) recordings. This is due to the signal amplitude being comparatively more attenuated from intracerebral to scalp recordings than the mean noise amplitude in the frequency bins around 6 Hz (mean noise at D8/L8 = 0.26 μV; mean noise at sO2/sPO8 = 0.08 μV; ratio of intracerebral to scalp noise = 3.4).
| Phase investigation

For the remainder of the analyses, we focus on the signal measured at the eight intracerebral contacts closest to the right OT scalp electrodes (D5-D8 and L5-L8; Figure 6). We highlight these eight intracerebral contacts since they are the closest to the right OT scalp electrodes, showed the highest 6 Hz response amplitude, and exhibited the closest correspondence with the signal measured at the right OT scalp electrodes.

[Figure 6 caption (partial): (a) Plots were obtained by cutting the recordings during FPVS sequences into segments of 1 s and averaging over these segments. The waveforms manifest a sudden phase shift from the most external contacts (D7-8, L7-8) to more internal contacts (D5-6, L5-6). (b) Left: Polar plot representations of the phase of the responses at the channels shown in panel (a). Right: Ventral white-matter surface view of the posterior half of patient KV's right hemisphere together with intracerebral contacts (small circles) and selected surrounding scalp electrodes (large circles). Channels are colored as a function of the phase of the response at 6 Hz in the "different faces" condition. Intracerebral contacts in gray (F9 to F11) showed no significant response to faces.]

The averaged waveforms show a sudden phase shift from the most external contacts (D7-D8, L7-L8) to the more medial contacts (D5-D6, L5-L6). This is also reflected in the phase of the frequency spectra (Figure 6b), where the phase values are similar between D8/L8 and the right OT scalp electrodes (sO2, sPO8, sP10; mean phase difference: 11°), but very dissimilar between the contiguous contacts D5-D6 and D7-D8 (mean phase difference: 158°) and between L5-L6 and L7-L8 (mean phase difference: 116°). The observation of only a partial phase opposition between these two groups of intracerebral contacts (i.e., rather than a phase difference close to 180°) suggests that the signal measured at these two groups arises from partly distinct generators. Only one of these neural generators, the generator contributing to the signal at the lateral IOG contacts (D7-D8, L7-L8), also contributes to the EEG signal measured at the OT scalp electrodes.

| Correlation investigation

To directly and formally quantify the relationship between scalp and intracerebral signals, we correlated the variations of the 6 Hz response amplitude over time across all scalp and intracerebral contacts (Figure 3; see Methods). As a reminder, this analysis disregards phase information, so that only coordinated temporal variation of amplitude across channels could result in meaningful correlation coefficients. This was done to avoid correlations (positive or negative) driven simply by phase coherence across channels triggered by a common visual stimulation.

| Focal and right-lateralized intracerebral-scalp correlations are specific to the FPVS signal

The observed correlations suggest that these scalp and intracerebral channels pick up electrophysiological responses coming from the same cortical region responding to the periodic visual stimulation. However, these correlations may also be driven by general electrophysiological activity (i.e., unrelated to the stimulation) coming from cortical territories to which both intracerebral contacts and scalp electrodes are sensitive, given their spatial proximity. If this is the case, we should observe the same correlation pattern with or without the presence of a visually-driven response in the recorded electrophysiological signal.
We therefore compared the patterns of scalp-intracerebral correlations obtained at the stimulation frequency during periodic visual stimulation (Ori@6 Hz) with a series of "control" situations: (a) NotchOri@6 Hz: the pattern of correlations when the visually-driven signal has been selectively filtered out; (b) Ori@4 Hz: the pattern of correlations at another frequency (4 Hz) than the stimulation frequency; (c) Rest@6 Hz: the pattern of correlations obtained using the 6 Hz signal recorded during a rest period. These analyses revealed that intracerebral-scalp correlations were more focal when a visually-driven periodic response was present in the electrophysiological signal. Specifically, in the three control conditions, while the patterns of scalp correlations measured for intracerebral contacts D7-D8 and L7-L8 were overall similar to the one observed at 6 Hz using the original signal, these patterns were all more widespread and included significant correlations on the scalp in both the right and the left hemisphere (Figure 8). This observation was reflected in the larger number of scalp electrodes significantly correlated with each of the intracerebral contacts D7, D8, L7, and L8 in the three control situations (mean number of significant scalp electrodes across D7, D8, L7, and L8 = 9.25 to 14.5; Figure 9a) compared to the original FPVS condition (Ori@6 Hz: 4.25 significant scalp electrodes).

[Figure 8 caption: Correlations between scalp and intracerebral signals during FPVS and control situations. Topographical maps of significant correlations (p < .05, cluster-based correction for multiple comparisons) between the signal at four intracerebral contacts in the lateral IOG and scalp electrodes. Maps of correlations are shown when using the original signal at around 6 Hz during FPVS (Ori@6 Hz, top row), when using signal in which the visually-driven signal has been filtered out (NotchOri@6 Hz, second row), when using another frequency (4 Hz) than the stimulation frequency (Ori@4 Hz, third row), and when using the 6 Hz signal recorded during rest (Rest@6 Hz, bottom row).]

In addition, the right-hemispheric dominance of the correlation patterns when using the original signal at 6 Hz was assessed and compared to the control conditions by subtracting the correlations averaged over left-hemispheric OT scalp channels from the correlations averaged over the corresponding electrodes in the right hemisphere (sO2, sPO8, sP4, sP10); a sketch of this index and its permutation test is given below. This revealed that, while scalp correlations were significantly stronger in the right than in the left hemisphere in all conditions (all p's < .02, one-tailed permutation test; Figure 9b), the right-hemispheric dominance was significantly stronger when using the signal at 6 Hz in the original recording, where the periodic visual stimulation is present (Ori@6 Hz: right-hemispheric dominance = 0.24 ± 0.16), compared to the three control situations (0.09 ± 0.14, 0.1 ± 0.1, 0.1 ± 0.13; all p's < .005, one-tailed permutation test). In contrast, there was no significant difference among any of the control and additional benchmark situations (p's > .5; Figure 9b; see Methods for the benchmark situations).
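A minimal sketch of the lateralization index and its one-tailed permutation test follows. The electrode grouping (sO2, sPO8, sP4, sP10 versus their left-hemisphere counterparts) follows the text, while the per-segment sign-flipping scheme is our reading of the described randomization procedure.

```python
import numpy as np

def lateralization_index(corr_right, corr_left):
    """corr_right / corr_left: (n_segments,) correlations averaged over the right /
    left OT electrode groups (and over contacts D7, D8, L7, L8).
    Returns the right-minus-left index."""
    return np.mean(corr_right - corr_left)

def permutation_pvalue(corr_right, corr_left, n_perm=10_000,
                       rng=np.random.default_rng(1)):
    """One-tailed test of index > 0 by randomly swapping the right/left labels
    within each segment (equivalent to flipping the sign of the difference)."""
    diff = corr_right - corr_left
    observed = diff.mean()
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diff.size))
    null = (signs * diff).mean(axis=1)
    return (np.sum(null >= observed) + 1) / (n_perm + 1)
```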
| DISCUSSION

This study shows that a few minutes of periodic visual stimulation suffice to generate a robust signal, objectively identifiable at the exact frequency of stimulation (and harmonics) in the frequency spectrum of both SEEG and EEG signals recorded simultaneously. Here we find, in the single epileptic patient tested, that the response peaks over the right occipitotemporal scalp region, as in neurotypical participants tested with this paradigm (Figure 4), suggesting the generalizability of the present observations to the normal population. By combining FPVS with correlation analyses, we thus provide an original approach to investigate the relationship between functional brain electrophysiological activity measured simultaneously inside the brain and on the scalp. Quantification of the 6 Hz response in the frequency domain is straightforward and reveals a tenfold decrease of amplitude at 6 Hz between the most external intracerebral contacts and the nearest scalp EEG electrodes (i.e., 25-30 mm), providing unique information about skull attenuation of electrophysiological activity (Oostendorp, Delbeke, & Stegeman, 2000; Wendel, Vaisanen, Seemann, Hyttinen, & Malmivuo, 2010). The choice of the 6 Hz stimulation frequency was dictated by previous studies showing robust responses at this frequency for face stimulation, in particular when different face identities are presented at every stimulation cycle (Alonso-Prieto et al., 2013). Note that this attenuation might even be underestimated, given that the intracranial sampling was limited and that larger responses might have been found at other nearby locations inside the brain. Nevertheless, the attenuation in SNR between intracerebral and surface EEG responses was "only" 2-3.5. This reduced ratio indicates that the electrophysiological noise, as computed in the present study, is significantly larger inside than outside the brain. A major factor contributing to this reduction of the ratio between amplitudes and SNRs is that, rather than being computed over a prestimulus baseline as in standard ERP studies, the electrophysiological noise is computed here within a small theta-range frequency window around the signal of interest, that is, 6 Hz (Meigen & Bach, 1999; Rossion et al., 2012; Srinivasan, Russell, Edelman, & Tononi, 1999). Hence, the EEG noise is "free" of alpha activity, environmental noise, eye and muscle artifacts, and so forth, which typically greatly contaminate scalp EEG signals (Luck, 2014). Additionally, the cortical surface to which an EEG scalp electrode is sensitive is likely larger than that of an SEEG electrode. Noise in scalp EEG might thus be smaller compared to SEEG by virtue of averaging uncorrelated electrophysiological "noise" over a larger surface.

[Figure 9 caption: Scalp-intracerebral correlations are more focal and right-lateralized during the periodic presentation of faces. (a) The bars represent the number of scalp electrodes significantly correlated with the intracerebral signal averaged over contacts D7, D8, L7, L8 in the original, control, and benchmark situations. A lower number means fewer scalp electrodes were significantly correlated with the intracerebral signal. (b) The bars represent the difference in scalp-intracerebral correlations between posterior right- and left-lateralized scalp electrodes in the original, control, and benchmark situations. Correlations between scalp electrodes and intracerebral contacts D7, D8, L7, L8 were first averaged across intracerebral contacts, then averaged separately for right and left posterior scalp electrodes, and finally the averaged correlations of the two groups were subtracted (right minus left). Error bars represent the SE of the mean across the 18 segments used to compute correlations. The correlations are higher in the right hemisphere for all comparisons, but the right-hemisphere dominance is significantly larger (all p's < .005) for the original FPVS condition. In contrast, the comparisons across control and corresponding benchmark situations are not significant.]

The narrow band of the 6 Hz signal allows objective and finer-grained tracking of the phase and identification of the intracerebral electrodes generating that signal on the scalp. Using FPVS together with our correlation approach, we found that activity recorded over right OT scalp regions directly relates to the activity measured at intracerebral contacts located in the inferior occipital gyrus, either in the cortex (D7, L7, L8) or in the meninges (D8). This latter intracerebral contact likely receives electrical field propagation from the nearby cortex (i.e., about 2 mm distance) (Zaveri, Duckrow, & Spencer, 2009). This strongly suggests that one major neural source of the 6 Hz FPVS signal measured on the scalp is located in the inferior occipital gyrus, at or near contacts D7-D8 and L7-L8. Other sources in neighboring locations likely also contribute to the measured scalp activity, but they could not be captured here due to the limited intracerebral sampling in this clinical case. Nevertheless, the main contribution of lateral brain sources to scalp EEG, by comparison to medial sources, is in line with previous studies in epilepsy, and especially in mesial temporal lobe epilepsy (Koessler et al., 2015; Merlet et al., 1998). The identification of the right OT scalp region is similar to our findings in a previous study measuring the relationship between the face-evoked N170 potential on the scalp and in the cortex in the same epileptic patient (Jacques et al., 2019). However, this relationship was relatively more widespread on the scalp in the previous ERP study. For instance, in the current study, the use of FPVS makes it possible to take into account the amplitude variations in a relatively narrow frequency range (i.e., around 6 Hz), which is less affected by ongoing broadband EEG noise (mostly in the theta and alpha range) than the ERP signal used in the previous study. Such broadband noise in the ERP signal may be correlated across a large cortical surface (e.g., Buzsáki, 2002; Klimesch, 1999; Miller, Foster, & Honey, 2012), and is therefore picked up simultaneously by intracerebral contacts in the IOG and by many posterior scalp electrodes, resulting in a broad correlation pattern on the scalp. Therefore, while the cortical regions involved in generating the N170 and the 6 Hz FPVS face signals are likely similar, given the overall similarity in the scalp topographies of the correlations (irrespective of their spatial spread) across the two studies, the FPVS approach reveals a more focused relationship between scalp and intracerebral recordings. In addition, we demonstrate that the relationship between scalp and intracerebral channels at 6 Hz is significantly more focal and right-lateralized during FPVS compared to resting EEG, or during stimulation but considering a frequency range outside the stimulation frequency (i.e., around 4 Hz). Hence, this relationship is specific to the stimulation frequency and cannot be attributed to general factors such as an increase of arousal during visual stimulation, for instance.
Moreover, other benchmark control situations (Figure 9) further indicate that the specificity of this relationship during FPVS is not driven by analysis or testing parameters. Note that, since we did not stimulate at other frequencies, whether 6 Hz provides the tightest correlation between scalp and intracerebral activity cannot be determined in the present study, although this is likely given the particularly large response that it generates in this paradigm (Alonso-Prieto et al., 2013). A peculiar observation is the broader pattern of correlations for the control situations, such as when using a different frequency of analysis (i.e., Ori@4 Hz), compared to when using FPVS. One might have expected no, or reduced, correlations in control situations when no visual response is present in the signal at the frequency of analysis. The observation of a correlation for the control situations likely stems from the close spatial proximity between the right OT scalp electrodes and the most lateral intracerebral contacts, which makes it likely that these recording channels are sensitive to a similar cortical territory. This would result in correlations across channels inside the brain and on the scalp stemming from the fact that they are sensitive to the same ongoing cortical EEG activity unrelated to a visual response (i.e., "noise"). As indicated above, this EEG "noise" in the theta and alpha range (e.g., Buzsáki, 2002; Klimesch, 1999; Miller et al., 2012) is more likely to correlate across a larger brain surface than neural signal related to the visual stimulation, and is therefore measured over a broader scalp surface. This would yield patterns of correlations that are more broadly distributed on the scalp over both hemispheres. This phenomenon is also likely to take place in the control situation where the visually-driven signal at 6 Hz was removed using a very narrow notch filter (i.e., NotchOri@6 Hz). In this particular situation, one possibility is that, when the visually-driven 6 Hz signal is filtered out, the variations of the wavelet amplitude envelope across time (which is used to obtain the correlations) are related to other frequencies around 6 Hz (i.e., in the theta range) that fall within the frequency bandwidth of the wavelet at 6 Hz (the full width at half maximum in the frequency domain at 6 Hz is 0.8 Hz), resulting in broad scalp correlation patterns. In contrast, when the visually-driven signal at 6 Hz is present (Ori@6 Hz), it completely dominates over surrounding frequencies in the amplitude envelope, revealing the more local neural activity specifically related to the periodic face stimulation. Several studies (Cosandier-Rimele, Merlet, Badier, Chauvel, & Wendling, 2008; Ebersole, 1997; Ramantani et al., 2014; Tao, Ray, Hawes-Ebersole, & Ebersole, 2005) have shown that a large cortical surface, from 6 to 30 cm², is required to generate detectable scalp EEG signals. According to a recent computational study (Cosandier-Rimele et al., 2008), a cortical surface of 26 cm² is required to obtain an SNR of 8 in scalp EEG, corresponding roughly to the SNR observed in our study. However, the FPVS approach is known to generate extremely high SNR responses on the scalp, so that more focal sources in the lateral section of the inferior occipital gyrus, corresponding to smaller cortical patches, may have been sufficient to generate this response.
While fMRI studies indicate that the (right) IOG plays a key role in (unfamiliar) face individuation responses in the human brain (e.g., Gauthier, Tarr, Moylan, Skudlarski, & Gore, 2000; Schiltz et al., 2006), these responses are not confined to this region but extend to more anterior regions of the lateral fusiform gyrus. Although these regions may also contribute to the signal recorded on the scalp, the cortical orientation of the lateral fusiform gyrus makes it unlikely that this region contributes substantially to the EEG measured over lateral OT scalp regions (Jacques et al., 2019). While simultaneous scalp and intracerebral studies very often rely on relatively complex signal processing methods (e.g., blind source separation) to extract biomarkers from scalp background activity (Koessler et al., 2015; Pizzo et al., 2019), the spontaneous visibility (i.e., even without averaging or baseline correction) of the scalp responses during FPVS highlights the interest of this approach for clarifying the scalp EEG correlates of focal cortical generators. Therefore, compared to signal analysis performed in the time domain and on spontaneous activity (such as epileptic spikes or seizures), FPVS in combination with our correlation approach may represent a major tool for understanding and characterizing the link between cortical source activity and scalp EEG signals. Moreover, our observation of robust intracerebral vs. scalp EEG correlations with just two sequences of stimulation (2 × 50 s of data) is a clear advantage over more conventional stimulation approaches in the clinical settings where these recordings take place. Given that we report data from a single patient implanted with three intracerebral electrode arrays, this approach should be further validated to ensure its applicability in different experimental research domains and settings. Nevertheless, the frequency-tagging approach is readily used to measure a wide range of responses in the visual domain (e.g., to luminance or contrast changes, but also to visual words, quantities, or objects; Norcia et al., 2015) and in other modalities (e.g., low-level and high-level auditory responses: Fujiki, Jousmaki, & Hari, 2002; Nozaradan, Peretz, Missal, & Mouraux, 2011; somatosensory responses: Colon, Legrain, Huang, & Mouraux, 2015), as well as their modulation by attentional factors (Chen, Seth, Gally, & Edelman, 2003; Colon et al., 2015; Morgan, Hansen, & Hillyard, 1996; Yan, Liu-Shuang, & Rossion, 2019). This suggests that the original approach introduced here could be extended to larger samples of individual brains and to other brain functions, pending appropriate adaptation of the stimulation and analysis parameters. Moreover, this approach could be extended to study functional connectivity between intracerebral contacts in close or remote regions, with a wider sampling across the brain.

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Corentin Jacques https://orcid.org/0000-0001-8917-4346

ENDNOTE

1 A recent study recorded 48-channel ECoG and 27-channel scalp-EEG data simultaneously during light flickering in a single patient (Wittevrongel et al., 2018). However, due to the insufficient quality of the simultaneously recorded scalp EEG (dried conductive gel, as well as the influence of scarred and swollen tissue associated with the particularly invasive ECoG procedure), the patient's scalp EEG was excluded from further analysis.
Improved CNN Method for Crop Pest Identification Based on Transfer Learning

Timely treatment and elimination of diseases and pests can effectively improve the yield and quality of crops, but current identification methods struggle to achieve efficient and accurate analysis of diseases and pests. To solve this problem, this study proposes a crop pest identification method based on a multilayer network model. First, the method provides a reliable sample dataset for the recognition model through image data enhancement and other operations; corresponding pest image recognition and analysis models are constructed based on VGG16 and Inception-ResNet-v2 transfer-learning networks to ensure the completeness of the recognition and analysis model; then, using the idea of ensemble algorithms, the two improved CNN-based pest image recognition and analysis models are effectively fused to improve the accuracy of the model for crop pest recognition and classification. A simulation analysis is carried out on the IDADP dataset. The experimental results show that the accuracy of the proposed method for pest identification is 97.71%, which improves on the poor identification performance of current methods.

Introduction

As one of the most important industries in China, agriculture is the foundation of the national economy. The quantity and quality of crop products directly affect people's daily living standards [1-3]. In the past ten years, the deteriorating ecological environment has made the ecological structure more fragile, resulting in large-scale outbreaks of crop diseases and pests [4]. The quantity and quality of agricultural products are closely related to diseases and insect pests; large-scale and frequent outbreaks of crop diseases and insect pests cause irreparable economic losses [5-7]. Therefore, it is very important to monitor and control crop diseases and insect pests. In the prevention and control of crop diseases and pests, the first and most important task is to accurately and quickly identify the diseases and pests that harm crops. In the field of pest identification, the traditional method of manual calculation and measurement is still used: technicians rely on experience to detect and identify the types of pests and diseases by eye, which leads to many shortcomings, such as cumbersome and repetitive work, low work efficiency, a shortage of trained identification personnel, and identification information that cannot be transmitted in real time [8,9]. With the rapid development of machine learning and artificial intelligence technology, the accuracy of pest detection based on deep learning in actual agricultural scenes has exceeded that of traditional agricultural experts [10,11], and its computational efficiency is high, which greatly widens the range of possible applications of deep-learning-based pest detection methods [12,13]. However, due to incomplete datasets and the influence of network structure, deep network models suffer from overfitting, resulting in low image recognition accuracy that cannot meet the needs of efficient analysis in real agricultural scenarios. To solve this problem, this study proposes a crop pest identification method based on an improved transfer-learning network.
In this study, image analysis models based on the VGG16 network and the Inception-ResNet-v2 network are constructed and fine-tuned to ensure the completeness of the image analysis model; at the same time, in order to improve the performance of the pest identification model, the VGG16 and Inception-ResNet-v2 networks are effectively integrated using an ensemble algorithm to alleviate the overfitting problem.

Related Work

China is a large agricultural production country, and it is very important to ensure the yield, quality, and safety of crops. However, diseases and pests have a great impact on crop production, resulting in yield reduction and economic losses. Therefore, research on the identification of crop diseases and pests is of great significance to crop production. Pest analysis requires the statistical analysis of a large amount of data to analyze the correlations of various factors, so as to obtain rules for prevention [14]. For a long time, research on crop diseases and insect pests relied on manual methods: a large number of agricultural experts and technicians analyze the categories of diseases and pests according to their own experience, through measurement, statistics, and calculation. There are many problems with traditional manual identification methods. On the one hand, the knowledge and experience of different staff differ, which leads to errors in the identification of plant diseases and pests, renders the whole effort unreliable, and causes losses in agricultural production; on the other hand, the manual identification method is only applicable to small-scale planting. When the crop planting area is large, a great number of technicians and much time are required to manually identify the types of diseases and pests, which is too costly and inefficient [15]. The development of big data and artificial intelligence technology provides new ideas for pest analysis and research [16-18]. Through the continuous training of a multilayer network model on a pest dataset, image features are effectively extracted, so that a reliable image recognition and analysis model can be built [19]. At present, researchers have carried out several studies on this problem. Reference [20] proposed a potato pest detection model based on Faster R-CNN, which introduced a residual convolution network and a feature pyramid network into the recognition network to realize effective potato pest detection; reference [21] classified and analyzed four vegetable pests, including whitefly, Plutella xylostella, and thrips, based on the bag-of-features model and support-vector machine (BOF-SVM), so as to provide reliable support for improving vegetable quality; reference [22] uses a normalized-cut segmentation algorithm based on spectral graph theory to segment pest images and uses a CNN model to identify the segmented pest dataset; reference [23] proposes a MobileNetv2-YOLOv3 lightweight network model. However, it should be noted that, due to the structural characteristics of deep networks, deep network structures can overfit during model training or testing, degrading image analysis accuracy; in addition, the image information of diseases and pests is complex, and current methods are not supported by reliable and complete datasets. To solve the above problems, this study proposes a pest image analysis method based on an improved transfer-learning network. This method integrates the VGG16 and Inception-ResNet-v2 networks to realize accurate and effective pest type recognition and analysis.

Dataset

The dataset used in this experiment comes from the Image Database for Agricultural Diseases and Pests Research (IDADP). IDADP contains a large number of image resources of crop diseases affecting rice, wheat, corn, and other crops. Each disease has hundreds or even thousands of images, and the original image resolution reaches 20 million pixels. It constitutes an image dataset for crop disease and pest identification research that can provide training and test samples for machine learning. In this study, rice and wheat were selected as the research objects. For each crop, three common pest categories were selected, giving six categories in total: rice bacterial blight, rice flax spot, rice blast, wheat powdery mildew, wheat scab, and wheat leaf rust, including 600 pictures of rice and 400 pictures of wheat. A total of 3,000 images were used for training and validation, and another 300 were used for testing. The resolution of each sample image is 3000 × 2000 × 3. Some sample examples are shown in Figure 1.

Dataset Preprocessing

The number of plant disease, insect pest, and crop leaf photographs provided on the IDADP dataset website is uneven. For some plants, there are only dozens of pictures of a given disease or pest, and there are no healthy samples as a control group, so they have no training value. Therefore, the dataset needs to be optimized and made into training and test sets suitable for reading [24]. The image preprocessing steps include the following (a minimal sketch of steps (2)-(4) is given after this list):

(1) Optimize the dataset: in this study, 38 subsets are selected from the image samples provided by the IDADP dataset as the experimental subset. Using classification problems with fewer categories and a smaller total number of samples, we test whether the selected convolutional neural network model is feasible, how to adjust the network structure, and how to select parameters, so as to reduce the time cost of the experimental process. Note that the operations in steps (2) to (4) below are applied to all subdatasets.

(2) Image transformation: several interpolation methods (bilinear, cubic, and nearest-neighbor) were used to resize the images to 224 × 224. Through comparison, it was found that the nearest-neighbor interpolation method has the best scaling effect on this image set. This scaling method retains the intuitive morphological features of the leaves, and the edges and disease textures are well preserved.

(3) Image standardization: the per-channel deviations of the red, green, and blue channels over all samples in the dataset are obtained, yielding standardized pictures. When the training sample is large enough, according to statistical laws, the channel means of the training set and the test set converge and are equal, which strengthens the classification characteristics and improves classification accuracy based on the deviation of a single sample.

(4) Data enhancement: in order to improve the accuracy of network classification, the images are randomly shuffled when read, to ensure that the statistical characteristics of the training set and the test set are similar; then, a circular queue with 16 pictures per batch is generated for multiple iterations during training.
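As announced above, here is a minimal sketch of preprocessing steps (2)-(4), written with tf.keras utilities. The file paths, channel-mean values, and pipeline details are illustrative assumptions, not the authors' exact implementation.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)
BATCH = 16  # batch size stated in step (4)

def load_and_resize(path):
    """Step (2): decode and resize with nearest-neighbor interpolation."""
    img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    return tf.image.resize(img, IMG_SIZE, method="nearest")

def standardize(img, channel_means):
    """Step (3): subtract per-channel means computed over the whole dataset."""
    return tf.cast(img, tf.float32) - channel_means

# Step (4): shuffled, repeating batches of 16 (paths/labels are placeholders;
# the channel means below are generic RGB values, not the IDADP statistics)
paths, labels = ["img_0001.jpg"], [0]   # hypothetical file list
ds = (tf.data.Dataset.from_tensor_slices((paths, labels))
        .shuffle(buffer_size=1024)
        .map(lambda p, y: (standardize(load_and_resize(p), [123.7, 116.8, 103.9]), y))
        .batch(BATCH)
        .repeat())   # circular queue over the training set
```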
This method integrates the VGG16 and Inception-ResNet-v2 networks to realize accurate and effective recognition and analysis of pest types. Dataset. The dataset used in this experiment comes from the Image Database for Agricultural Diseases and Pests Research (IDADP). IDADP contains a large number of image resources of crop diseases for rice, wheat, corn, and other crops. Each disease has hundreds or even thousands of images, and the original image resolution reaches 20 million pixels. It provides an image dataset for crop disease and pest identification research that can supply training and test samples for machine learning. In this study, rice and wheat are selected as the research objects. For each research object, three common pest categories are selected, for a total of six categories, namely rice bacterial blight, rice flax spot, rice blast, wheat powdery mildew, wheat scab, and wheat leaf rust, including 600 pictures of rice and 400 pictures of wheat. A total of 3,000 images were used for training and validation, and another 300 were used for testing. The resolution of each sample image is 3000 × 2000 × 3. Some sample examples are shown in Figure 1. Dataset Preprocessing. The number of photographs of plant diseases, insect pests, and crop leaves provided on the IDADP dataset website is uneven. For some plants, there are only dozens of pictures of one kind of disease or pest, and there are no healthy samples as a control group, which has no training value. Therefore, the dataset needs to be optimized and made into a training set and test set suitable for reading [24]. The image preprocessing steps, sketched in code below, include the following: (1) Optimize the dataset: in this study, 38 subsets are selected from the image samples provided by the IDADP dataset as the experimental subsets. Using classification problems with fewer categories and a smaller total number of samples, the feasibility of the selected convolutional neural network model, how to adjust the network structure, and how to select parameters are tested, so as to reduce the time cost of the experimental process. It should be noted that the operations in steps (2) to (4) below are applied to all subdatasets. (2) Image transformation: in this study, bilinear and quadratic trilinear interpolation methods are used to adjust the image size to 224 × 224. Through comparison, it is found that the nearest-neighbor interpolation method has the best scaling effect on this image set. This scaling method retains the intuitive morphological features of the leaves, and the edges and disease textures are well preserved. (3) Image standardization: the deviation maps of the red, green, and blue channels of all samples in the dataset are obtained, respectively, from which the standardized pictures can be produced. When the training sample is large enough, according to statistical law, the average values of the training set and the test set converge and are equal, which can strengthen the classification characteristics and improve the classification accuracy based on the deviation of a single sample. (4) Data enhancement: in order to improve the accuracy of network classification, the read pictures are randomly shuffled to ensure that some statistical characteristics of the training set and the test set are similar, and then a circular queue with 16 pictures as a batch is generated for multiple iterations during training.
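To make the preprocessing steps above concrete, the following is a minimal illustrative sketch in PyTorch/torchvision (matching the PyTorch environment described later in the experiments). The dataset path, normalization statistics, and specific augmentation operations are assumptions for illustration; the paper derives its channel statistics from per-channel deviation maps of the IDADP samples.

```python
# Hypothetical preprocessing sketch; "idadp/train" and the mean/std values
# are placeholders, not taken from the paper.
import torch
from torchvision import datasets, transforms

train_tf = transforms.Compose([
    transforms.Resize((224, 224)),             # step (2): rescale to 224 x 224
    transforms.RandomHorizontalFlip(),         # step (4): data enhancement
    transforms.ColorJitter(brightness=0.2),    # e.g., changing leaf brightness
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], # step (3): per-channel
                         std=[0.5, 0.5, 0.5]), # standardization
])

train_set = datasets.ImageFolder("idadp/train", transform=train_tf)
# Random shuffling with a batch size of 16 mirrors the circular queue of
# 16 pictures per batch described above.
train_loader = torch.utils.data.DataLoader(train_set, batch_size=16,
                                           shuffle=True)
```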
After completing the above steps, a dataset for training the model is obtained, and this dataset can be randomly divided into a training set and a test set; the proportion of the two parts is adjusted by setting the training ratio. Identification Process of Crop Diseases and Insect Pests. Using the concept of transfer learning, this study selects VGG16 and Inception-ResNet-v2 networks pretrained on the ImageNet dataset and then uses the enhanced pest dataset to fine-tune the networks. The flow of the two fine-tuned network models for the pest identification algorithm is shown in Figure 2. Based on the enhanced pest dataset, the pest identification models are obtained by pretraining and fine-tuning the VGG16 and Inception-ResNet-v2 networks. The specific steps are as follows (a minimal code sketch follows the VGG16 description below): (1) The initial network models are the VGG16 and Inception-ResNet-v2 networks pretrained on the ImageNet dataset. (2) Data enhancement methods such as changing leaf brightness are used to expand the pest dataset, yielding the enhanced dataset. (3) The enhanced training set and enhanced validation set are used to fine-tune the pretrained models. (4) After several iterations to bring the networks to convergence, the corresponding CNN-series pest identification models ft-VGG16 and ft-Inception-ResNet-v2 are obtained. (5) The pest images to be tested are input into the transferred convolutional neural networks, and the corresponding recognition results are output. Although the VGGNet network has many layers, its structure is clear: thirteen convolution layers and three fully connected layers form the core of VGGNet. The innovation of VGGNet is to replace a single larger convolution layer with several stacked identical 3 × 3 convolutions [25,26]. A 5 × 5 convolution layer covers the same receptive field as the superposition of two 3 × 3 convolution layers, while the stacked 3 × 3 layers add depth and nonlinearity. This replacement increases network depth, improves performance, reduces the number of network parameters, and lowers memory and compute consumption. The VGG16 network structure and parameters are shown in Table 1. The first step is to preprocess the image and normalize the pest image to 224 × 224 × 3. In Table 1, the first and second groups have the same convolution-group structure, each containing two convolution layers, but the number of output feature maps differs. After the first group of convolution operations, 64 feature maps are obtained; the convolution kernel size is 3 × 3 in every layer of every group. After the convolutions of the first group, the feature map has dimension 224 × 224 × 64; after 2 × 2 max pooling, a 112 × 112 × 64 feature map is obtained as the input of the next group. The second group produces 128 feature maps, with the other parameters the same as in the first group. The third group adds one convolution layer relative to each of the first two groups; the convolution kernels have the same size, and the number of feature maps is 256. Similarly, after max pooling, a 28 × 28 × 256 feature map is obtained as the input of the next group. The structure of the fourth and fifth convolution groups is similar to that of the third: each uses three convolution layers to extract image features, and finally 512 feature maps are obtained.
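As a companion to steps (1)–(4) above, the following is a minimal fine-tuning sketch, assuming PyTorch/torchvision (consistent with the PyTorch 1.1.0 environment reported in the experiments). The learning rate, momentum, and six-class output head are illustrative choices, not values taken from the paper.

```python
# Hypothetical sketch of building ft-VGG16; the optimizer settings are
# illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 6                        # six pest/disease categories
model = models.vgg16(pretrained=True)  # step (1): ImageNet-pretrained weights

# Replace the original 1000-way classifier with a 6-way layer, then fine-tune.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def fine_tune_step(images, labels):
    """One fine-tuning iteration on a batch of the enhanced pest dataset."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same loop, applied to a pretrained Inception-ResNet-v2 backbone, would produce the ft-Inception-ResNet-v2 model.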
The last three layers of the network are fully connected layers. Due to the nature of the fully connected structure, the parameters trained by these three layers account for most of those in the whole network. The output dimension of the softmax regression, 1,000, is the number of classification results in the original network. These three fully connected layers account for the vast majority of the network's trainable parameters, so the amount of data to be trained remains large. Improved Inception-ResNet-v2. In order to extract the features of the original input image more finely, a tighter connection structure is adopted in the inception module. Due to the characteristics of dense connection blocks, the feature reuse rate increases, and the network can learn the original data more comprehensively [27]. However, the increase in dense blocks also increases the amount of computation and affects computational efficiency. In order to solve this problem, the number of feature maps is halved, and depthwise convolution is used. The Inception-ResNet network structure is realized by a series of convolutions and residual connections [28]; the block and convolution operations can be written as u_a = f(u_{a-1} + o_b(u_{a-1})) and C_s^r = f(∑_j C_j^{r-1} ∗ V_{s,j}^r + e_s^r), where u_a represents the feature map output by the a-th Inception-ResNet block; o_b(·) represents the feature map extracted by the block; C_s^r is the s-th output feature map of the layer-r convolution network; f is the activation function; C_j^{r-1} is the j-th output feature map of the layer r−1 convolution network; V_{s,j}^r is the weight of the s-th convolution kernel of the layer-r convolution network; and e_s^r is the bias of the s-th feature map of the layer-r convolution network. The high-level features of diseases and pests extracted by the Inception-ResNet structure suffer from an increased variance of the estimated values caused by the limited neighborhood size. In order to reduce this error, average pooling is used to process the output feature maps of the convolution layer, yielding pooled feature maps D_{x,y}^t, where D_{x,y}^t represents the pest feature map obtained by the Inception-ResNet block operation, q represents the pooling window size, and the parameters x and y index the feature channels. The whole Inception-ResNet-v2 model is composed of inception modules with different functions. Figure 3 shows the improved Inception-ResNet module, which is used to change the width and height of the input data. Depthwise separable convolution splits one convolution process into two steps: a depthwise step and a pointwise step. For a standard convolution process, assume an N × H × W × C input and K convolution kernels of size 3 × 3; if pad = 1 and stride = 1 are set, the standard convolution output is N × H × W × K. For the depthwise separable convolution, the N × H × W × C input is divided into C groups in the depthwise stage, and a 3 × 3 convolution is applied to each group to extract the spatial features of each channel; in the pointwise stage, K ordinary 1 × 1 convolutions are applied to extract the features of each point of the picture. Compared with the ordinary convolution process, the same input yields a feature map of the same depth, while the number of parameters of the depthwise separable convolution is greatly reduced and the operation speed is improved. The overall network structure of the improved inception module is shown in Figure 4.
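The parameter saving of the depthwise separable convolution described above can be checked directly; below is a small sketch (PyTorch) comparing a standard 3 × 3 convolution with a depthwise stage (groups = C) followed by a 1 × 1 pointwise stage. The channel sizes 64 and 128 are illustrative.

```python
# Sketch of depthwise separable convolution: depthwise 3x3 with C groups,
# then K pointwise 1x1 convolutions; pad=1, stride=1 preserve H x W.
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, stride=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

std = nn.Conv2d(64, 128, kernel_size=3, padding=1)   # standard convolution
sep = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(sep))   # 73856 vs. 8960: far fewer parameters
```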
CNN Model Integration Algorithm. The VGG16 and Inception-ResNet-v2 CNN transfer network models have different structures, each with its own characteristics and advantages. The effective fusion of the two models can counteract the overfitting problem of a single CNN network model; therefore, fusing the two models is conducive to improving the recognition accuracy for crop diseases and pests. As shown in Figure 2, the VGG16 and Inception-ResNet-v2 networks pretrained on the ImageNet dataset are selected, and the enhanced pest dataset is then used for parameter transfer and fine-tuning of the networks, yielding two CNN models, ft-VGG16 and ft-Inception-ResNet-v2: (1) The mean method integrates the two models: the mean of the prediction results of the ft-VGG16 and ft-Inception-ResNet-v2 transfer learning models is calculated to obtain the final prediction. As in formula (4), P = (1/m) ∑_{i=1}^{m} P_i, where m represents the number of integrated models and P_i represents the predicted value of the i-th model. (2) The weighted method integrates the two models: considering that the models should carry different weights, the parameter λ_i is introduced to represent the weight of the i-th model, with ∑_{i=1}^{m} λ_i = 1. As in formula (5), P = ∑_{i=1}^{m} λ_i · P_i, and the weight of the model with the higher recognition rate is naturally larger. Experiment and Analysis The experiments in this study are carried out in the same platform environment. The experimental platform is the Ubuntu 16.04 operating system with Linux kernel 4.10.14 and the PyTorch 1.1.0 development environment; the hardware environment is an NVIDIA GeForce MX350 graphics card and an Intel Core i7-1185G7 processor with a main frequency of 3.0 GHz. First, a 128 × 128 sliding window is used for image segmentation with a fixed step size, and then the IDADP dataset is enhanced based on the method in Section 3.2. Model Optimization Analysis. In order to verify that the model proposed in this study has better image analysis and processing ability, this study analyzes the performance of the fine-tuned ft-VGG16 and ft-Inception-ResNet-v2 models separately; the image analysis accuracy of the models is shown in Figure 5. As shown in Figure 5, both models achieve good recognition accuracy on the IDADP dataset, with the ft-Inception-ResNet-v2 model performing better. Although its number of network layers is somewhat larger than that of ft-VGG16, the accuracy of ft-Inception-ResNet-v2 is slightly higher because of its densely connected blocks and high feature reuse rate. The optimization brought by the integration algorithm is analyzed further: the mean method and the weighted method are each used to integrate the two models. Table 2 shows the experimental results of integrating the two transfer models on the expanded pest dataset. As shown in Table 2, using the integration algorithm to fuse the two CNN models significantly improves the accuracy of identification and analysis on the IDADP dataset; at the same time, comparative analysis shows that the recognition accuracy of the model integrated by the mean method is 96.47%, while that of the model integrated by the weighted method is 97.71%. Therefore, this study uses the weighted method to integrate the ft-VGG16 and ft-Inception-ResNet-v2 models.
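Formulas (4) and (5) translate directly into code; the sketch below fuses the softmax outputs of the two fine-tuned models. The weights (0.4, 0.6) are purely illustrative: the paper only states that the model with the higher recognition rate receives the larger weight.

```python
# Hypothetical fusion sketch; p_vgg and p_incres are softmax outputs of
# shape (batch, 6) from ft-VGG16 and ft-Inception-ResNet-v2.
def mean_ensemble(p_vgg, p_incres):
    """Formula (4): P = (1/m) * sum_i P_i with m = 2 models."""
    return (p_vgg + p_incres) / 2

def weighted_ensemble(p_vgg, p_incres, lam=(0.4, 0.6)):
    """Formula (5): P = sum_i lambda_i * P_i with sum_i lambda_i = 1."""
    return lam[0] * p_vgg + lam[1] * p_incres

# The predicted class is the argmax of the fused probabilities, e.g.:
# pred = weighted_ensemble(p_vgg, p_incres).argmax(dim=1)
```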
Model Recognition Performance Analysis. In order to verify the effectiveness of this method, this study uses the methods of references [20,22] for experimental comparative analysis on crop pest identification. Identification Performance Evaluation Indicators. In order to measure the performance of the recognition method proposed in this study, general objective evaluation indexes are needed to ensure the fairness of the algorithm evaluation. Precision (Pre), recall (Re), and the F1 value (F1-measure) are commonly used in big-data image classification research and can be used to analyze the performance of the crop pest identification results in this study. The calculation formulas are given in equations (5) to (7). Precision Pre: precision represents the frequency of correct predictions among the examples predicted as positive, that is, how many of the samples predicted as positive are truly positive samples: Pre = TP / (TP + FP). Recall Re: recall indicates the frequency of correct predictions among the examples with a positive label, that is, how many positive examples in the sample are correctly predicted: Re = TP / (TP + FN). F1-measure: the F1-measure weighs precision and recall together and is the harmonic mean of precision Pre and recall Re: F1 = 2 · Pre · Re / (Pre + Re), where TP represents a positive sample predicted by the model as positive; TN represents a negative sample predicted by the model as negative; FP represents a negative sample predicted by the model as positive; and FN represents a positive sample predicted by the model as negative. Identification Performance Analysis. In this study, different methods are used to analyze the images of the IDADP dataset, and the pest identification results are shown in Figure 6. It can be seen from Figure 6 that, in the task of pest identification, the image analysis performance of reference [22] is the worst, with an average test accuracy of only 84.62%; reference [20] achieves an average accuracy of 89.13% but exhibits overfitting; in this method, ft-VGG16 and ft-Inception-ResNet-v2 are effectively integrated, overfitting is effectively alleviated, and the recognition accuracy reaches 97.71%. At the same time, Table 3 shows the image classification results under the different methods. It can be seen from Table 3 that the models differ markedly in their recognition of the different categories. The difference between the recognition accuracies of sheath blight and rice blast in reference [22] is as high as 5.03%, and the recognition performance on rice blast is the worst, showing an obvious recognition imbalance. The recognition accuracy of each category under the method in this study is uniform, with a maximum difference of 1.01%, and the recognition accuracy of each category is higher than that of reference [20]. This is because the VGG16 and Inception-ResNet-v2 image analysis models are integrated in this study to address the overfitting of a multilayer network model, so the recognition accuracy for crop diseases and pests is improved. The recall Re and the F1 value are also used to evaluate the performance of the models, and the evaluation indicators are shown in Figure 7. As can be seen from Figure 7, the recall Re of this method is 89.82%, giving it the strongest ability to find each pest category, and the F1 value of this method is 90.01%, giving it the best comprehensive model performance. In this study, the preprocessed pest dataset is used, the image analysis network models are constructed based on the VGG16 and Inception-ResNet-v2 networks, and the image analysis models are further refined to ensure the completeness of the image analysis model.
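The three indicators defined in equations (5) to (7) can be computed directly from the confusion-matrix counts; the sketch below uses hypothetical counts purely to illustrate the harmonic-mean relation between precision and recall.

```python
# Hypothetical counts: 90 TP, 10 FP, 12 FN for one class (one-vs-rest).
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1_measure(tp, fp, fn):
    pre, re = precision(tp, fp), recall(tp, fn)
    return 2 * pre * re / (pre + re)   # harmonic mean of Pre and Re

print(round(precision(90, 10), 3))       # 0.9
print(round(recall(90, 12), 3))          # 0.882
print(round(f1_measure(90, 10, 12), 3))  # 0.891
```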
The statistical results of the identification model indexes for diseases and pests show that the model in reference [20] cannot extract the characteristics of different types of diseases and pests well, and that reference [22], due to its structural limitations, exhibits slow convergence, overfitting, and other phenomena that result in low comprehensive indicators. The pest identification model based on this method can better extract the characteristics of different types of pests and obtains good identification results. Conclusions In view of the low performance of current pest identification methods, this study proposes a pest image recognition and analysis method based on a multilayer network model. In this study, the preprocessed pest dataset is used, the image analysis network models are constructed based on the VGG16 and Inception-ResNet-v2 networks, and the image analysis models are further refined to ensure the completeness of the image analysis model. Simulation results show that the proposed algorithm can accomplish the task of crop pest identification and classification with good network model performance. Future research will continue to explore the image analysis of diseases and pests, including calculating the effective area of crop disease and judging the severity of plant diseases and insect pests, so as to carry out orderly and effective treatment and prevent large-scale economic losses. Data Availability The data used to support the findings of this study are included within the article. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
2022-03-18T15:23:57.436Z
2022-03-16T00:00:00.000
{ "year": 2022, "sha1": "2217739f0fe8c74f3378f6100b40615203b601d9", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/cin/2022/9709648.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fab47244f49c76dc6d35eebd13190085b5b4a70f", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
232313347
pes2o/s2orc
v3-fos-license
Characteristics of Salmonella From Chinese Native Chicken Breeds Fed on Conventional or Antibiotic-Free Diets

Salmonella is a common food-borne Gram-negative pathogen with multiple serotypes. Pullorum disease, caused by Salmonella Pullorum, seriously threatens the poultry industry. Many previous studies focused on the epidemiological characteristics of Salmonella infections in poultry raised with conventional antibiotic use. However, little is known about Salmonella infections in chicken flocks fed on antibiotic-free diets. Herein, we investigated and compared Salmonella infections in three Chinese native breeders fed on antibiotic-free diets, the Luhua, Langya, and Qingjiaoma chickens, and one conventional breeder, the Bairi chicken, by analyzing 360 dead embryos in 2019. The results showed that the main Salmonella serotypes detected among a total of 155 isolates were S. Pullorum (82.6%) and S. Enteritidis (17.4%). Coinfection with two serotypes of Salmonella was found only in the Bairi chicken. The sequence types (STs) in S. Pullorum were ST92 (n = 96) and ST2151 (n = 32), whereas only ST11 (n = 27) was found in S. Enteritidis. The Salmonella isolates from the three breeder flocks fed on antibiotic-free diets exhibited phenotypic heterogeneity, with a wide variety of drug resistance spectra. Most of the isolates among the three chicken breeds fed on antibiotic-free diets, Luhua (64.9%, 50/77), Langya (60%, 12/20), and Qingjiaoma (58.3%, 7/12), were resistant to only one antibiotic (erythromycin), whereas the rate of resistance to one antibiotic among the conventional Bairi chicken isolates was only 4.3% (2/46). The multidrug-resistance rate of Salmonella isolates from layer flocks fed on antibiotic-free diets (20.2%, 22/109) was significantly (P < 0.0001) lower than that from chickens fed on conventional diets (93.5%, 43/46). However, high rates of resistance to erythromycin (97.4%~100%) and streptomycin (26%~41.7%) were also found among the three breeder flocks fed on antibiotic-free diets, indicating that resistance to these antibiotics likely spread before antibiotic-free feeding began in the poultry farms. The findings of this study supplement the epidemiological data on salmonellosis and provide an example of the characteristics of Salmonella in chicken flocks without direct antibiotic selective pressure.
Keywords: multilocus sequence typing, serotype, antibiotic resistance, chicken, Salmonella, antibiotic-free INTRODUCTION Salmonella is a clinically common food-borne Gram-negative pathogen with over 2,600 serotypes (1). It has been demonstrated that Salmonella is predominantly found in poultry, eggs, and dairy products (2). Salmonella species are considered intracellular pathogens and carry a number of virulence factors for entry into and survival in the intracellular environment, including Salmonella pathogenicity islands (SPIs) and Salmonella virulence plasmids (3). Salmonella can spread not only horizontally but also vertically through eggs (chicken embryos) (4). When Salmonella colonizes the fallopian tubes, it can settle in the reproductive tract of poultry and contaminate fresh eggs, and contaminated chicken embryos may die due to the pathogenicity of Salmonella (5). Chicken embryos that do not die will still carry Salmonella after hatching, causing healthy chicks to become infected. For example, Salmonella enterica serovar Gallinarum biovar Pullorum (S. Pullorum), the causative agent of pullorum disease (PD) in chickens, results in a high mortality rate among embryos and chicks, as well as weakness and white diarrhea (6). Therefore, improper treatment of Salmonella infection may greatly increase the cost of disease management and flock breeding (7,8). However, strains of Salmonella spp. with antibiotic resistance are now widespread in both developed and developing countries (9). The emergence of antimicrobial-resistant Salmonella is mainly promoted by the use of antibiotics in animal feed to promote the growth of food animals and in veterinary medicine to treat bacterial infections in those animals (2). This poses a high risk of zoonotic disease through the transmission of multidrug-resistant Salmonella strains from animals to humans via the ingestion of contaminated food or water (10,11). To limit these negative impacts, the European Union Commission, the U.S., China, and many other countries banned the use of antibiotics for growth promotion in livestock in 2006, 2017, and 2020, respectively (12)(13)(14). Recent studies have shown that antibiotic resistance patterns from agricultural settings can be indistinguishable, and a better understanding of the background data is required for effective agricultural management (15). A few studies (16,17) have investigated the characteristics and antibiotic resistance profiles of Salmonella from antibiotic-free poultry or chicken meat.
However, little is known about the characteristics of Salmonella in Chinese native chicken flocks reared on an antibiotic-free diet. There are a variety of indigenous layer breeds in China, including the Luhua, Langya, Qingjiaoma, and Bairi chickens. The Luhua chicken has a unique black-and-white feather color and produces high-nutrition eggs. The Langya chicken has a small body size and high egg production. The Qingjiaoma chicken has cyan feet and black spots on its body and feathers. The Bairi chicken has a small body size and a U-shaped back. No antibiotics were used during the entire feeding process for the Luhua, Langya, and Qingjiaoma chickens for at least 4 years. Our previous research found that the detection rate of Salmonella in dead embryos can be used to evaluate the Salmonella infection rate in chicken flocks (18,19). In the current study, we mainly investigated the serotypes and antibiotic resistance profiles of Salmonella from dead embryos of Chinese native breeders fed on antibiotic-free or conventional diets in 2019. This study will help to supplement the epidemiological data on Salmonella infection in Chinese chicken flocks fed on antibiotic-free diets. Samples and Salmonella Isolation A total of 360 dead chicken embryos (18 days of incubation) were used to isolate Salmonella from three Chinese native layer breeders fed on antibiotic-free diets and one conventional native breeder, with 90 dead embryos from each farm, in 2019. In 2020, dead embryos, cloacal swabs, feed samples, and waterline drip samples (nipple drinkers) were collected for Salmonella isolation. The Luhua breeder had not been fed antibiotics for 6 years, and its flock size is 200,000. The Langya and Qingjiaoma chickens were fed on antibiotic-free diets for 4 years in the same breeder farm, with flock sizes of 10,000 and 50,000, respectively. The three chicken flocks fed on antibiotic-free diets had used antibiotics to treat bacterial diseases before antibiotic-free feeding, and they were 1 day or about 18 weeks old when the antibiotic-free diet was started. The conventional Bairi breeder farm used antibiotics to promote growth intermittently prior to this study, with a flock size of 50,000. These native breeder flocks were generally maintained for 1.0∼1.5 years, and therefore the chicken embryos from the flocks fed on antibiotic-free diets were probably in the 4th−6th generation. All of these chicken farms are located in eastern China. The Bairi chicken farm is 42 km away from the Luhua layer farm and 271 km away from the Langya and Qingjiaoma chicken farm (Figure 1A). The liver, spleen, and large intestine were taken from the dead chicken embryos with sterile forceps and placed in sterile microcentrifuge tubes (20). Discolored embryos, engorged blood vessels, or liver necrosis were usually observed in these dead embryo samples. Salmonella strains were isolated from these samples using the Chinese National Standard method (GB 4789.4-2010) with some modifications. Briefly, each embryo sample was added to 4.5 mL of buffered peptone water (BPW, Land Bridge Technology, Beijing, China), and the BPW mixture was incubated at 37 °C for 14 h for pre-enrichment. Approximately 0.5 mL of each pre-enriched culture was inoculated into 4.5 mL of tetrathionate broth base (TTB, Qingdao Hope Bio-technology Co., Ltd.).
After 20 h of incubation at 37 °C for selective enrichment, one loopful of each TTB broth culture was streaked onto Xylose-Lysine-Tergitol 4 (XLT4) agar (Qingdao Hope Bio-technology Co., China) plates and incubated at 37 °C for 48 h (21). About 3∼5 suspected Salmonella colonies per sample were identified by polymerase chain reaction (PCR) assays with primers designed for Salmonella invA (product of 331 bp) and S. Pullorum iPAJ (740 bp) (19). Only one colony with the same morphology per sample was picked and confirmed by MALDI-TOF mass spectrometry. PCR reactions were conducted using annealing at 55 °C for invA and 58 °C for iPAJ. The standard strains of S. Enteritidis (CVCC3377) and S. Pullorum (CVCC535), purchased from the China Veterinary Culture Collection Center (Beijing, China), were used as control strains. Salmonella Serotyping All isolates used in this study were serotyped by slide agglutination using a commercial Salmonella antisera kit (Tianrun Bio-Pharmaceutical, Ningbo, China) according to the manufacturer's instructions. The kit contained Vi antiserum and monovalent and polyvalent H and O antisera with a total of 60 factors. A single Salmonella colony from the nutrient agar plate was mixed with polyvalent O antisera first and then with the specific monovalent antisera, checking for agglutination within 60 s. Once the O and H antigens are identified, the serotype can be determined according to the Kauffmann-White scheme (22,23). Multilocus Sequence Typing Seven housekeeping genes (aroC, dnaN, hemD, hisD, purE, sucA, and thrA) were used for multilocus sequence typing (MLST) of the Salmonella isolates according to the instructions from the University of Warwick (http://mlst.warwick.ac.uk/mlst/). The primer pairs for the PCR amplification of internal fragments of these genes were used according to the protocols on the EnteroBase website (https://enterobase.readthedocs.io/en/latest/mlst/mlst-legacy-info-senterica.html). All PCR reactions were conducted using an annealing temperature of 55 °C. Gene products were sequenced (Sangon Biotech, Shanghai, China), and the allele number of the corresponding sequence for each of the seven housekeeping genes was obtained by sequence alignment with BioEdit software based on the Salmonella enterica MLST database. The sequence type (ST) was assigned according to the Achtman seven-gene MLST scheme as described online (http://mlst.warwick.ac.uk/mlst/dbs/Senterica) (20,24). Antimicrobial Susceptibility Testing According to the Kirby-Bauer method recommended by the World Health Organization and the manual of the Clinical and Laboratory Standards Institute (CLSI, 2017), antimicrobial susceptibility testing of the Salmonella isolates obtained in this study was performed with a total of 14 antibiotics (Hangzhou Binhe Microorganism Reagent Co., Ltd., China), including ampicillin (AMP; 10 µg), cefoxitin (FOX; 30 µg), and ceftazidime, among others. Multiple drug resistance (MDR) was defined as resistance to one or more antibiotics in three or more antibiotic classes. The MDR rates of Salmonella isolates in these chicken breeds were calculated by dividing the number of MDR isolates by the number of screened isolates. The total S. Pullorum and S.
Enteritidis isolates from the three antibiotic-free chicken breeds were aggregated by antibiotic, and the relative antibiotic resistance rates, presented as percentages, were compared with those from the conventional Bairi chicken breed. Data Analysis The Chi-squared test or Fisher's exact test was used for analyzing the data (26). Serotypes of Salmonella In this study, a total of 155 Salmonella isolates were recovered from 360 dead chicken embryos from three Chinese native breeder flocks fed on antibiotic-free diets and one conventional layer breeder, the Bairi chicken, in 2019 (Table 1). In order to evaluate the Salmonella infection in breeder flocks after implementing the Salmonella eradication project and strengthening feeding management, various samples were collected from the Luhua and Langya breeder flocks in 2020. Compared with the high isolation rate of Salmonella (85.6%) in the Luhua breeder flock in 2019, the infection rate of Salmonella in Luhua chicken was remarkably reduced to 2.08% (5/240) in 2020, as determined by examining 140 dead embryos and 100 cloacal swabs (P < 0.0001) (Table 2). However, for the Langya chicken flock, where the Salmonella eradication project was not implemented, the isolation rate of Salmonella spp. (13.91%, 32/230) was significantly lower than the 2019 rate in the Luhua chicken flock (P < 0.0001). The 32 Salmonella-positive samples included 29 of 100 dead embryos (29%), 2 of 15 feed samples (13.33%), 1 of 100 cloacal swabs (1%), and 0 of 15 waterline drip samples (nipple drinkers) (Table 2). FIGURE 2 | Comparison of the resistance rates of Salmonella isolates between three Chinese native breeder flocks fed on antibiotic-free diets and one breeder flock fed on conventional diets. The total Salmonella isolates from three Chinese breeder flocks fed on antibiotic-free diets were aggregated and presented as a percentage. The difference was analyzed by chi-squared test. ***P < 0.0005; ****P < 0.0001. Moreover, the rates of resistance of the antibiotic-free-fed chicken and conventional chicken isolates to FOX (4.6% and 0, respectively) and SXT (1.8% and 0, respectively) were low (Figure 2). Most of the isolates among the three chicken breeds Luhua (64.9%, 50/77), Langya (60%, 12/20), and Qingjiaoma (58.3%, 7/12) fed on antibiotic-free diets were resistant to only one antibiotic (ERY), whereas the rate of resistance to one antibiotic among the conventional Bairi chicken isolates was only 4.3% (2/46) (Figure 3). The largest proportion among the conventional Bairi chicken isolates was occupied by MDR isolates, up to 93.5% (43/46). By contrast, the MDR rate among the Luhua, Langya, and Qingjiaoma chicken isolates was 15.6% (12/77), 30% (6/20), and 33.3% (4/12), respectively (Figure 3). The total MDR rate in isolates from chickens fed on antibiotic-free diets (20.2%, 22/109) was significantly (P < 0.0001) lower than that from chickens fed on conventional diets (93.5%, 43/46) (Figure 4A). One isolate (1/77) from the Luhua chicken was susceptible to all antibiotics tested in this study (Figure 3). Moreover, the MDR profiles of both S. Pullorum and S. Enteritidis isolated from chickens fed on antibiotic-free diets exhibited a diverse drug resistance spectrum (Supplementary Table 1). By comparing the S. Pullorum isolates from chicken flocks fed on conventional and antibiotic-free diets, we showed that the MDR rate of the conventional breeder chicken isolates (100%, 25/25) was much higher than that from the three breeder flocks fed on antibiotic-free diets (21.4%, 22/103) (P < 0.0001) (Figure 4B).
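The MDR-rate comparison above (22 of 109 antibiotic-free-diet isolates vs. 43 of 46 conventional isolates) can be reproduced with a standard chi-squared test. A minimal sketch using SciPy follows; the choice of SciPy is an assumption, as the paper does not name its statistical software beyond the tests themselves.

```python
# Contingency table built from the counts reported above.
from scipy.stats import chi2_contingency

#              MDR   non-MDR
table = [[22, 109 - 22],   # flocks fed on antibiotic-free diets
         [43,  46 - 43]]   # flock fed on conventional diets
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")   # p << 0.0001, as reported
```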
Approximately 62.1% of the S. Pullorum isolates from the three breeder flocks fed on antibiotic-free diets were resistant to only one antibiotic (ERY), followed by 15.5% of S. Pullorum isolates that were resistant to two of the antibiotics tested in this study (Figure 4B). For S. Enteritidis, the MDR rate in the conventional Bairi chicken was up to 85.7% (18/21), whereas no MDR isolate was found among the 6 isolates from the three chicken breeds fed on antibiotic-free diets (Figure 4B). DISCUSSION Among the three chicken breeds (Luhua, Langya, and Qingjiaoma) fed on antibiotic-free diets and the conventional Bairi chicken, different serotypes and ST types of Salmonella were identified. However, the dominant serotype among these breeder flocks was S. Pullorum, which differed markedly from the prevalent isolates (100% S. Enteritidis) from fecal swabs and chicken embryos of large-scale breeder farms in China (19). Zhao et al. (27) investigated the prevalence and characteristics of Salmonella in free-range chickens in China and showed that a total of 38 Salmonella isolates (38/300, 12.7%) were recovered and that the most common serotype was S. Enteritidis (81.6%). Certainly, some Salmonella species may have been missed in these flocks due to the choice of methods and media used for the isolation of Salmonella. We were not able to differentiate the susceptibility of these chicken breeds to Salmonella infection due to a lack of data in this study. However, the Salmonella infection rates in embryos were higher than those in samples from cloacal swabs and farm environments. These farms are located in different regions of eastern China, and the different native breeder flocks had different diets with or without antibiotics added, so the infection rate of Salmonella may be significantly associated with geographical distribution and the level of feeding management in China (18). The prevalence of Salmonella associated with chick mortality at hatching was investigated in three hatcheries in Jos, central Nigeria: 45 (9%) of the 500 samples were positive for Salmonella, and the prevalent serotypes were S. Kentucky (75.6%) and S. Hadar (24.4%) (28). Bailey et al. (29) tracked the serotypes of Salmonella through integrated broiler chicken operations in the US; the rate of Salmonella-positive samples from the hatchery in 1999-2000 was the highest, the predominant serotype found in hatchery samples was S. Senftenberg, and an association between the serotypes found in the hatchery and those found on the final processed carcasses was observed. PD caused by S. Pullorum is strongly associated with vertical transmission, directly from contamination of the egg in the genital tract or indirectly from chick-to-chick contact in the hatchery (30). It has been demonstrated that S. Pullorum colonizes both the ovary and the oviduct of hens and leads to 6% of laid eggs being infected by S. Pullorum via more than one mechanism of egg infection (31). S. Pullorum is not excreted extensively in the feces, unlike many other Salmonella serotypes that are more frequently associated with human food poisoning (32). In the current study, the S. Pullorum isolates from dead embryos in these chicken flocks exhibited limited genetic diversity in ST, and only ST92 and ST2151 were determined among the total of 128 isolates. Hu (33) conducted whole-genome sequencing of a panel of 97 S. Pullorum isolates collected between 1962 and 2014 from four countries across three continents and also found that most of the strains belonged to ST92. However, the S. Pullorum isolates in the chicken flocks fed on antibiotic-free diets exhibit phenotypic heterogeneity with relatively low antibiotic resistance rates, providing an example of Salmonella characteristics for a chicken production system without direct selective pressure. The most effective means of controlling pullorum disease is a combination of stringent management procedures and eradication by serological testing (34). In the United States, PD was brought under control after the implementation of the National Poultry Improvement Plan and the vaccination of flocks (35). The European Union also established a regulation focused on preventing, monitoring, and eradicating Salmonella in poultry, and the incidence of salmonellosis has decreased since 2003 (36). However, related control programs for S. Pullorum eradication and available vaccines are still absent in China. The positive rate of Salmonella isolates from the Luhua chicken in 2019 was the highest; however, Salmonella serological tests have been regularly done since 2019 to eliminate positively infected chickens promptly. FIGURE 4 | The S. Pullorum and S. Enteritidis isolates from the three antibiotic-free-fed breeder flocks to different kinds of antibiotics were aggregated together, respectively, and the difference in resistance rates to antibiotics between isolates from breeder flocks fed on antibiotic-free or conventional diets was analyzed with the chi-squared test. ****P < 0.0001. With the strengthening of daily feeding management, the infection rate of Salmonella in the Luhua chicken was dramatically reduced to 2.08% in the survey of 2020, lower than the 13.91% in the Langya chicken. S. Enteritidis is the serovar most frequently associated with egg infection due to its unique long-term ability to colonize the ovary and the oviduct of laying hens and its spread and persistence in the parental breeder flock population (37). The frequency of egg contamination by S. Enteritidis depends on the level of contamination of the flock, and eggs are more likely to become internally contaminated around the onset of lay (38,39). The isolation rate of S. Enteritidis in the present study was 17.4% (27/155), and all of these isolates were resistant to erythromycin. This is quite different from a survey of rectal swabs collected from three Chinese large-scale conventional chicken farms, which showed 80.8% (63/78) MDR isolates, with the most common serovar being S. Enteritidis (88.5%) (21). Unlike paratyphoid Salmonella serovars that infect only humans by causing enteric fever, S. Enteritidis is a zoonotic pathogen of substantial concern to global human and animal health (40). Many studies using whole-genome sequencing that link epidemiology, phylogeny, and virulotyping are performed with Salmonella isolates from the harmonized monitoring of poultry and from human disease to facilitate attribution studies and identify trends associated with virulence and stress-response genes (41,42). Among the 14 antibiotics used in this study, the resistance rates of the Salmonella isolates from conventional chicken to 11 antibiotics were higher than those from chickens fed on antibiotic-free diets. The average MDR rate (20.2%, 22/109) of Salmonella isolates from chickens fed on antibiotic-free diets was significantly lower than the rate of 100% among 63 isolates examined by Yang et al.
(19) from conventional farms in a similar geographical region, and also lower than that from other poultry farms in China (43). These data indicate that the use of antibiotics may promote the development of MDR Salmonella (44). Liu et al. (45) found high abundances of aminoglycoside, sulfonamide, and tetracycline resistance genes on an antibiotic-free layer farm without direct antibiotic selective pressure. Similarly, a high rate of resistance to streptomycin, an aminoglycoside antibiotic, was found in Salmonella isolates from the antibiotic-free-fed layer farms in this study. In addition, a high rate of resistance to erythromycin was seen among the three antibiotic-free-fed layer flocks. The mechanisms of erythromycin resistance in Salmonella include modification of the ribosomal target of macrolides and hydrolysis of the macrolide lactone ring catalyzed by erythromycin esterases (such as ereA and ereB) (46). Modification of the ribosomal target of macrolides is a common mechanism and confers broad cross-resistance to macrolide-lincosamide-streptogramin antibiotics. This modification can occur by mutation and by methylases encoded by erm (erythromycin ribosome methylase) genes (47). Antibiotics were used to treat bacterial diseases in the three layer breeder flocks in this study 4-6 years ago, but resistant bacteria and/or resistance genes may still circulate within the living environment of the flocks. Management practices and contaminated eggs, feces, or wastewater have been implicated in the spread and persistence of antibiotic-resistant Salmonella in the environment (2,44). Together, these data provide an example of the Salmonella antibiotic resistance profiles in chicken flocks fed on antibiotic-free diets. To the best of our knowledge, no previous studies have investigated the characteristics of Salmonella infection in such native layer flocks fed on antibiotic-free diets in China. In summary, the current study showed that the majority of Salmonella isolates from three Chinese native breeder flocks fed on antibiotic-free diets were ST92 and ST2151 S. Pullorum and ST11 S. Enteritidis. The antibiotic resistance rates and MDR rates in the three chicken breeds fed on antibiotic-free diets were significantly lower than those from a conventional Bairi chicken farm. Moreover, the Salmonella isolates in the chicken flocks fed on antibiotic-free diets exhibit phenotypic heterogeneity with a diverse drug resistance spectrum, providing an example of the occurrence of antibiotic resistance in a chicken production system without direct selective pressure. DATA AVAILABILITY STATEMENT The original contributions generated for this study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s. ETHICS STATEMENT The animal study was reviewed and approved by the Animal Care and Use Committee of Shandong Agricultural University (SDAUA-2018-027). Written informed consent was obtained from the owners for the participation of their animals in this study.
2021-03-23T13:30:15.320Z
2021-03-23T00:00:00.000
{ "year": 2021, "sha1": "e6f39b49aaf896743a56a631da983080443a54fe", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fvets.2021.607491/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e6f39b49aaf896743a56a631da983080443a54fe", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
12611743
pes2o/s2orc
v3-fos-license
Compositional Reasoning for Interval Markov Decision Processes

Model checking of probabilistic CTL properties of Markov decision processes with convex uncertainties has recently been investigated by Puggelli et al. Such model checking algorithms typically suffer from state space explosion. In this paper, we address probabilistic bisimulation as a means to reduce the size of such an MDP while preserving the probabilistic CTL properties it satisfies. In particular, we discuss the key ingredients needed to build up operations of parallel composition for composing interval MDP components at run-time. More precisely, we investigate how the parallel composition operator for interval MDPs can be defined so as to arrive at a congruence closure. As a result, we show that probabilistic bisimulation for interval MDPs is a congruence with respect to two facets of parallelism, namely synchronous product and interleaving. Introduction Probability, nondeterminism, and uncertainty are three core aspects of real systems. Probability arises when a system, performing an action, is able to reach more than one state and we can estimate the proportion between reaching each of such states: probability can model both specific system choices (such as flipping a coin, commonly used in randomized distributed algorithms) and general system properties (such as message loss probabilities when sending a message over a wireless medium). Nondeterminism represents behaviors that we cannot, or do not want to, attach a precise (possibly probabilistic) outcome to. This might reflect the concurrent execution of several components at unknown (relative) speeds, or behaviors we keep undetermined to simplify the system or to allow for different implementations. Uncertainty relates to the fact that not all system parameters may be known exactly, including exact probability values. Probabilistic automata (PAs) [20] extend classical concurrency models in a simple yet conservative fashion. In probabilistic automata, concurrent processes may perform probabilistic experiments inside a transition. PAs are akin to Markov decision processes (MDPs); their fundamental beauty can be paired with powerful model checking techniques, as implemented for instance in the PRISM tool [18]. In PAs and MDPs, probability values need to be specified precisely. This is often an impediment to their applicability to real systems. Instead, it appears more viable to specify ranges of probabilities, so as to reflect the uncertainty in these values. This leads to a model where intervals of probability values replace precise probabilities; this is the model studied in this paper, which we call interval Markov decision processes (IMDPs). In standard concurrency theory, bisimulation plays a central role as the undisputed reference for distinguishing the behaviour of systems. Besides distinguishing systems, bisimulation relations conceptually allow us to reduce the size of a behaviour representation without changing its properties (i.e., with respect to the logic formulae the representation satisfies). This is particularly useful to alleviate the state explosion problem notoriously encountered in model checking. If the bisimulation is a congruence with respect to a parallel composition operator used to build up the model out of smaller ones, this can give rise to a compositional strategy to associate a small model with a large system without intermediate state space explosion. In several related settings, this strategy has been proven very effective [5,11].
Markov chains are known to be closed under interleaving parallelism (in the continuous-time setting) and under synchronous parallelism, also called synchronous product (in the discrete-time setting). The more general concept of asynchronous parallelism with synchronisation (as in CCS or CSP) is known to require nondeterminism so as to arrive at closure properties (yielding PAs for discrete time and interactive MCs [10] for continuous time). These observations are conceptually echoed in the setting considered in the present paper, albeit for very different reasons. While nondeterminism is a genuine asset of IMDPs, a closure property cannot be established for asynchronous parallelism with synchronisation. The possibility of establishing asynchronous parallelism with synchronisation for IMDP models has recently been investigated in [9]. However, the underlying construction is problematic, since it does not correctly manage spurious distributions. More precisely, for a pair of IMDP components, the equality of the emerging sets of spurious distributions in the parallel composition must be guaranteed in order to establish the congruence result; this is not treated precisely in the setting of [9] for the asynchronous parallelism with synchronisation defined there. In this work, instead, IMDPs are shown to be closed under interleaving parallelism, as well as under synchronous parallelism. This enables us to develop compositionality results with respect to bisimulation for these two facets of parallelism. Related work. Compositional specification of uncertain stochastic systems has been explored in various works before. Interval MCs [13,17] and Abstract PAs [6] serve as specification theories for MCs and PAs, featuring a satisfaction relation and various refinement relations. In order to be closed under parallel composition, Abstract PAs allow general polynomial constraints on probabilities instead of interval bounds. Since for Interval MCs it is not possible to explicitly construct the parallel composition, the problem of whether there is a common implementation of a set of Interval MCs is addressed instead [7]. On the contrary, interval bounds on the rates of outgoing transitions work well with parallel composition in the continuous-time setting of Abstract Interactive MCs [16]; the reason is that, unlike probabilities, rates do not need to sum up to 1. The authors of [24] successfully define parallel composition for interval models by separating synchronizing transitions from the transitions with uncertain probabilities. Organization of the paper. We start with the necessary preliminaries in Section 2. In Section 3, we give the definition of probabilistic bisimulation for IMDPs and discuss the main results of [8]. Furthermore, we show that probabilistic bisimulation over IMDPs is compositional and transitive. Finally, in Section 6 we conclude the paper. Preliminaries Given n ∈ N, we denote by 1 ∈ R^n the unit vector and by 1^T its transpose. In the sequel, comparison between vectors is element-wise, and all vectors are column vectors unless otherwise stated. For a given set P ⊆ R^n, we denote by CH(P) the convex hull of P and by Ext(P) the set of extreme points of P. If P is a polytope in R^n, then for each i ∈ {1, . . . , n}, the projection proj_{e_i} P of P is defined as the interval [min_i P, max_i P], where min_i P = min{ x_i | (x_1, . . . , x_i, . . . , x_n) ∈ P } and max_i P = max{ x_i | (x_1, . . . , x_i, . . . , x_n) ∈ P }.
We denote by I the set of closed subintervals of [0, 1] and, for a given [a, b] ∈ I, we let inf[a, b] = a and sup[a, b] = b. Given a set X, we denote by I_X the identity equivalence relation I_X = { (x, x) | x ∈ X }. We may drop the subscript X from I_X when the set X is clear from the context. Given two relations R ⊆ X × Y and S ⊆ U × V, we denote by R × S the relation { ((x, u), (y, v)) | (x, y) ∈ R and (u, v) ∈ S }. If R is an equivalence relation on X and S an equivalence relation on Y, then R × S is an equivalence relation on X × Y. For a given set X, we denote by ∆(X) the set of discrete probability distributions over X and by δ_x ∈ ∆(X) the Dirac distribution on x, that is, the distribution such that for each y ∈ X, δ_x(y) = 1 if y = x and 0 otherwise. Given two sets X and Y and two distributions ρ_X ∈ ∆(X) and ρ_Y ∈ ∆(Y), we denote by ρ_X × ρ_Y the product distribution, defined by (ρ_X × ρ_Y)(x, y) = ρ_X(x) · ρ_Y(y). Given a finite set of indexes I, a multiset of distributions { ρ_i ∈ ∆(X) | i ∈ I }, and a multiset of real values { p_i ∈ R_{≥0} | i ∈ I }, we say that ρ is the convex combination of { ρ_i ∈ ∆(X) | i ∈ I } according to { p_i ∈ R_{≥0} | i ∈ I }, denoted by ρ = ∑_{i∈I} p_i · ρ_i, if ∑_{i∈I} p_i = 1 and for each x ∈ X, ρ(x) = ∑_{i∈I} p_i · ρ_i(x). For an equivalence relation R on X and ρ_1, ρ_2 ∈ ∆(X), we write ρ_1 L(R) ρ_2 if for each C ∈ X/R, it holds that ρ_1(C) = ρ_2(C). By abuse of notation, we extend L(R) to distributions over X/R, i.e., for ρ_1, ρ_2 ∈ ∆(X/R), we write ρ_1 L(R) ρ_2 if for each C ∈ X/R, it holds that ρ_1(C) = ρ_2(C). Interval Markov Decision Processes Let us formally define interval Markov decision processes. We denote by A(s) the set of actions that are enabled from state s, i.e., A(s) = { a ∈ A | ∃s' ∈ S : I(s, a, s') ≠ [0, 0] }. Furthermore, for each state s and action a ∈ A(s), we let s −a→ µ_s mean that µ_s ∈ ∆(S) is a feasible distribution, i.e., for each state s' we have µ_s(s') ∈ I(s, a, s'). We require that the set P_{s,a} = { µ_s | s −a→ µ_s } is non-empty for each state s and action a ∈ A(s). An IMDP is initiated in some state s_1 and then moves in discrete steps from state to state, forming an infinite path s_1 s_2 s_3 . . . . One step, say from state s_i, is performed as follows. First, an action a ∈ A(s_i) is chosen nondeterministically by the scheduler. Then, nature resolves the uncertainty and chooses nondeterministically one corresponding feasible distribution µ_{s_i} ∈ P_{s_i,a}. Finally, the next state s_{i+1} is chosen randomly according to the distribution µ_{s_i}. For a more formal treatment of the IMDP semantics, we refer the reader to [8,9]. Observe that the scheduler does not choose an action but a distribution over actions. It is well-known [20] that such randomization brings more power in the context of bisimulations. Note that for nature this is not the case: since P_{s,a} is closed under convex combinations, nature can already choose all such distributions.
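A distribution µ_s is feasible for a state-action pair (s, a) exactly when it sums to 1 and respects every interval I(s, a, s'). The following is a small illustrative check in Python; the states and interval bounds are hypothetical.

```python
# Membership test for the polytope P_{s,a} of an IMDP transition.
def is_feasible(mu, intervals, tol=1e-9):
    """mu: dict state -> probability; intervals: dict state -> (lo, hi)."""
    if abs(sum(mu.values()) - 1.0) > tol:
        return False
    return all(lo - tol <= mu.get(s, 0.0) <= hi + tol
               for s, (lo, hi) in intervals.items())

I_sa = {"x": (0.1, 0.7), "y": (0.0, 0.6), "z": (0.1, 0.4)}
print(is_feasible({"x": 0.5, "y": 0.2, "z": 0.3}, I_sa))  # True
print(is_feasible({"x": 0.8, "y": 0.1, "z": 0.1}, I_sa))  # False: 0.8 > 0.7
```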
An action agnostic probabilistic automaton (PA) is a tuple P = (S, s̄, AP, L, T), where S is a finite set of states, s̄ ∈ S is the start state, AP is a finite set of atomic propositions, L: S → 2^AP is a labelling function, and T ⊆ S × ∆(S) is a finite probabilistic transition relation. We denote by [P] the class of all finite-state, finite-transition probabilistic automata and we assume that each state in S is reachable from s̄. We may drop "action agnostic" since this is the only type of probabilistic automata we consider. The start state is also called the initial state; we let s, t, u, v, and their variants with indices range over S. We denote the generic elements of a probabilistic automaton P by S, s̄, AP, L, T, and we propagate primes and indices when necessary. Thus, for example, the probabilistic automaton P_i has states S_i, start state s̄_i, and transition relation T_i. A transition tr = (s, µ) ∈ T, also written s −→ µ, is said to leave from state s and to lead to the measure µ. We denote by src(tr) the source state s and by trg(tr) the target measure µ, also denoted by µ_tr. We also say that s enables the transition (s, µ) and that (s, µ) is enabled from s.

Example 1. An example of a PA is the one shown in Figure 1: the set of states is S = {s̄, r, y, g, , , }, the start state is s̄, the set of atomic propositions is AP = S, the labelling function L is such that for each s ∈ S, L(s) = s, and the transition relation T contains the following transitions: s̄ −→ ρ with ρ = {(r, 0.3), (y, 0.1), (g, 0.6)}, r −→ δ , y −→ δ , g −→ δ , r −→ δ_s̄, and g −→ δ_s̄.

Synchronous Product

The following definition of synchronous product is a variation of the definition of parallel composition provided in [20,21], where the synchronization occurs for each pair of enabled transitions. This corresponds to the original definition of parallel composition for probabilistic automata having all transitions labelled by the same external action: the product of two PAs P_1 and P_2 has state space S_1 × S_2, start state (s̄_1, s̄_2), atomic propositions AP_1 ∪ AP_2, labelling L(s_1, s_2) = L_1(s_1) ∪ L_2(s_2), and a transition ((s_1, s_2), µ_1 × µ_2) for each pair of transitions (s_1, µ_1) ∈ T_1 and (s_2, µ_2) ∈ T_2. For two PAs P_1 and P_2 and their synchronous product P_1 ⊗ P_2, we refer to P_1 and P_2 as the component automata and to P_1 ⊗ P_2 as the product automaton.

Probabilistic Bisimulation

As for the definition of synchronous product, the following definition of (strong) probabilistic bisimulation is a variation of the definition provided in [21], where all actions are treated as being the same external action. We first introduce the definition of combined transition.

Definition 4. Given a PA P and a state s, we say that s enables a combined transition reaching the distribution µ, denoted by s −→_c µ, if there exist a finite set of indexes I, a multiset of transitions { (s, µ_i) ∈ T | i ∈ I }, and a multiset of real values { p_i ∈ R_{≥0} | i ∈ I } such that Σ_{i∈I} p_i = 1 and µ = Σ_{i∈I} p_i · µ_i.

Definition 5. Given a PA P, an equivalence relation R ⊆ S × S is a (strong) (action agnostic) probabilistic bisimulation on P if, for each (s, t) ∈ R, L(s) = L(t) and for each s −→ µ_s there exists a combined transition t −→_c µ_t such that µ_s L(R) µ_t. Given two states s and t, we say that s and t are probabilistically bisimilar, denoted by s ∼p_aa t, if there exists a probabilistic bisimulation R on P such that (s, t) ∈ R. Given two PAs P_1 and P_2, we say that P_1 and P_2 are probabilistically bisimilar, denoted by P_1 ∼p_aa P_2, if there exists a probabilistic bisimulation R on the disjoint union of P_1 and P_2 such that (s̄_1, s̄_2) ∈ R.
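A combined transition as in Definition 4 can be recognized mechanically: µ must be a convex combination of the measures enabled from s, which is a linear feasibility problem. The Python sketch below is our own illustration (the function names and the use of scipy are assumptions, not part of the paper):

# Sketch of Definition 4: s enables a combined transition to mu iff mu
# is a convex combination of the measures of the transitions leaving s.
import numpy as np
from scipy.optimize import linprog

def enables_combined(enabled, mu, states):
    # enabled: list of dicts, the measures mu_i; mu: dict; states: list.
    M = np.array([[d.get(x, 0.0) for d in enabled] for x in states])
    target = np.array([mu.get(x, 0.0) for x in states])
    # Find p >= 0 with M p = target and sum(p) = 1 (any feasible point).
    A_eq = np.vstack([M, np.ones(len(enabled))])
    b_eq = np.append(target, 1.0)
    res = linprog(np.zeros(len(enabled)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(enabled))
    return res.success

states = ["u", "v"]
ts = [{"u": 1.0}, {"v": 1.0}]                               # transitions from s
print(enables_combined(ts, {"u": 0.4, "v": 0.6}, states))   # True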
Proposition 1. Given three PAs P_1, P_2, and P_3, if P_1 ∼p_aa P_2, then P_1 ⊗ P_3 ∼p_aa P_2 ⊗ P_3.

The proof is a minor adaptation of the corresponding proof (cf. [20]) for the original definition of probabilistic bisimulation and parallel composition of PAs. In the following, we use the subscript "j, 3" with j ∈ {1, 2} to refer to the components of the PA P_{j,3} = P_j ⊗ P_3.

Proof. Let R be the probabilistic bisimulation justifying P_1 ∼p_aa P_2 and let R′ = R × I_{S_3}; we claim that R′ is a probabilistic bisimulation between P_1 ⊗ P_3 and P_2 ⊗ P_3. The fact that R′ is an equivalence relation follows trivially from its definition and the fact that R is an equivalence relation. The fact that ((s̄_1, s̄_3), (s̄_2, s̄_3)) ∈ R′ follows immediately from the hypothesis that (s̄_1, s̄_2) ∈ R and (s̄_3, s̄_3) ∈ I_{S_3}.

IMDPs vs. PAs

A cornerstone towards establishing compositional reasoning for IMDPs essentially relies on transformations from IMDPs to PAs and vice versa. To this aim, we define two mappings, namely unfolding, which unfolds a given IMDP as a PA, and folding, which transforms a given PA into an IMDP. It is worth noting that the unfolding mapping might transform an IMDP into a PA with an exponentially larger size. This is in fact due to the exponential blow-up in the number of transitions in the resultant PA, which in turn depends on the number of extreme points of the polytope constructed for each state and action in the given IMDP. An example of unfolding is given in Figure 2. In order to transform a given PA into an instance of IMDPs, we use the folding mapping defined as follows: F maps a PA P to an IMDP with the same states, start state, atomic propositions and labelling, a single action f, and, for each pair of states s and s′, the interval I(s, f, s′) obtained as the projection proj_{e_{s′}} of the convex hull of the measures enabled from s. An example of the folding mapping is shown in Figure 3. The PA P has three transitions from t with label a; in particular, it is worthwhile to note that for all these transitions the probability of reaching y is larger than the probability of reaching z, so this has to happen for every combined transition leaving t. According to Def. 7, the folding of P is the IMDP I. It is immediate to see that the unfolding mapping is not surjective, as there may be some probabilistic transitions in the generated IMDP specification which cannot be mapped to a probability distribution in the given PA. In fact, one such distribution is µ_o with µ_o(x) = 2/5, µ_o(y) = 1/5, and µ_o(z) = 2/5, which clearly violates the condition µ_o(y) > µ_o(z). This is better recognizable by comparing the corresponding polytopes in a graphical way. Figure 4 shows the three polytopes involved in I: the purplish large triangular polytope is the standard 2-simplex in the three-dimensional space; the reddish small triangular and the bluish parallelogram-like polytopes represent the convex hull of {(7/10, 1/5, 1/10), (1/2, 2/5, 1/10), (0, 3/5, 2/5)} and the polytope P_{t,f}, respectively, both being sub-polytopes of the 2-simplex. Clearly there are points in P_{t,f} that do not belong to the reddish polytope, such as the black dot corresponding to µ_o.

Figure 4: Comparison of the polytopes resulting from the folding mapping F.
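The spurious distribution µ_o can be checked numerically against the paper's own figures: every extreme point of the reddish polytope satisfies y > z, so every convex combination does too, while µ_o does not. A short illustrative Python check:

# Numerical companion to the Figure 4 discussion (values from the text):
# the extreme points of the reddish polytope all satisfy y > z, hence so
# does every convex combination, while mu_o violates that invariant.
vertices = [(7/10, 1/5, 1/10), (1/2, 2/5, 1/10), (0, 3/5, 2/5)]
mu_o = (2/5, 1/5, 2/5)  # the spurious distribution admitted by the folding

assert all(abs(sum(v) - 1) < 1e-12 for v in vertices)  # all are distributions
print(all(y > z for (_, y, z) in vertices))  # True: invariant of the hull
print(mu_o[1] > mu_o[2])                     # False: mu_o lies outside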
(Table 1: the relationship between UF(F(P)) and P, and between F(UF(I)) and I.)

As we will discuss later, the general incompleteness property of the folding mapping does not influence the generality of our compositional reasoning for IMDP specifications. We will dive into this point later in Section 4.

Probabilistic Bisimulation for Interval MDPs

We now recall the main results on probabilistic bisimulation for IMDPs, as developed in [8].
In this work, we consider the notion of probabilistic bisimulation for the cooperative resolution of nondeterminism. This semantics is very natural in the context of verification of parallel systems with uncertain transition probabilities, in which we assume that scheduler and nature are resolved cooperatively in the most adversarial way. Moreover, the resolution of a feasible probability distribution respecting the interval constraints can be done either statically [13], i.e., at the beginning once and for all, or dynamically [12,22], i.e., independently for each computation step. In this paper, we focus on the dynamic approach in resolving the stochastic nondeterminism, which is easier to work with algorithmically and can be seen as a relaxation of the static approach that is often intractable [1,3]. Let s −→ µ_s denote that a transition from s to µ_s can be taken cooperatively, i.e., that there is a scheduler σ ∈ Σ and a nature π ∈ Π such that µ_s = Σ_{a∈A(s)} σ(s)(a) · π(s, a). In other words, s −→ µ_s if µ_s ∈ CH(∪_{a∈A(s)} P_{s,a}).

Definition 8 (cf. [8]). Given an IMDP I, let R ⊆ S × S be an equivalence relation. We say that R is a probabilistic bisimulation if for each (s, t) ∈ R we have that L(s) = L(t) and for each s −→ µ_s there exists t −→ µ_t such that µ_s L(R) µ_t. Furthermore, we write s ∼c t if there is a probabilistic bisimulation R such that (s, t) ∈ R.

Intuitively, each (cooperative) step of scheduler and nature from state s needs to be matched by a (cooperative) step of scheduler and nature from state t; symmetrically, s also needs to match t. In order to support the compositional reasoning, ∼c needs to be an equivalence relation. It is not difficult to see that ∼c is reflexive and symmetric. What remains is to show that it is also transitive. This is indeed a property of ∼c, as stated by the following theorem:

Theorem 1. Given three IMDPs I_1, I_2, and I_3, if I_1 ∼c I_2 and I_2 ∼c I_3, then I_1 ∼c I_3.

Proof. Let R_12 and R_23 be the equivalence relations underlying I_1 ∼c I_2 and I_2 ∼c I_3, respectively. Let R_13 be the symmetric and transitive closure of the set { (s_1, s_3) ∈ S_1 × S_3 | there exists s_2 ∈ S_2 such that s_1 R_12 s_2 and s_2 R_23 s_3 }. We claim that R_13 is a probabilistic bisimulation justifying I_1 ∼c I_3. The fact that s̄_1 R_13 s̄_3 is trivial since by hypothesis we have that s̄_1 R_12 s̄_2 and s̄_2 R_23 s̄_3, so (s̄_1, s̄_3) ∈ R_13 by construction. In the following, assume that s_1 ∈ S_1 and s_3 ∈ S_3; the other cases are similar. The labelling is respected: for each s_1 R_13 s_3, we have that there exists s_2 such that s_1 R_12 s_2 and s_2 R_23 s_3; this implies that L_1(s_1) = L_2(s_2) and L_2(s_2) = L_3(s_3), thus L_1(s_1) = L_3(s_3), as required. To complete the proof, consider s_1 R_13 s_3 and s_1 −→ µ_1. By hypothesis, there exists s_2 such that s_1 R_12 s_2 and s_2 R_23 s_3; moreover, by R_12 being a probabilistic bisimulation, we know that there exists s_2 −→ µ_2 such that µ_1 L(R_12) µ_2. Since R_23 is a probabilistic bisimulation, we have that there exists s_3 −→ µ_3 such that µ_2 L(R_23) µ_3. By construction of R_13 and the properties of lifting, it follows that µ_1 L(R_13) µ_3, as required.

It is shown in [8] that ∼c is sound with respect to PCTL properties. Furthermore, probabilistic bisimulation for IMDPs is computed using the standard partition refinement approach [14,19], in which the core part is to verify violations of the bisimulation definition; this can in turn be done by checking the inclusion of polytopes defined as follows.
For s ∈ S and an action a ∈ A, recall that P_{s,a} denotes the polytope of feasible successor distributions over states with respect to taking the action a in the state s. By P^{s,a}_R, we denote the polytope of feasible successor distributions over equivalence classes of R with respect to taking the action a in the state s. Formally, for µ ∈ ∆(S/R) we set µ ∈ P^{s,a}_R if there exists µ′ ∈ P_{s,a} such that for each C ∈ S/R, µ(C) = Σ_{s′∈C} µ′(s′). Furthermore, we define P^s_R = CH(∪_{a∈A(s)} P^{s,a}_R), the set of feasible successor distributions over S/R with respect to taking an arbitrary distribution over enabled actions in state s. As specified in [8], checking the violation for a given pair of states amounts to checking the equality of the corresponding constructed polytopes for the two states.

Compositional Reasoning for IMDPs

Compositional reasoning is a widely used technique (see, e.g., [4,11,15]) that permits dealing with large systems. In particular, a large system is decomposed into multiple components running in parallel; such components are then minimized by replacing each of them with a bisimilar but smaller one, so that the overall behaviour remains unchanged. In order to apply this technique, bisimulation has first to be extended to pairs of components and then to be shown to be transitive and preserved by the synchronous product operator. The extension to a pair of components is trivial and commonly done (see, e.g., [2,20]):

Definition 9. Given two IMDPs I_1 and I_2, we say that they are probabilistically bisimilar, denoted by I_1 ∼c I_2, if there exists a probabilistic bisimulation on the disjoint union of I_1 and I_2 such that s̄_1 ∼c s̄_2.

The next step is to define the synchronous product for IMDPs:

Definition 10. Given two IMDPs I_1 and I_2, we define the synchronous product of I_1 and I_2 as I_1 ⊗ I_2 := F(UF(I_1) ⊗ UF(I_2)).

A schematic representation of constructing the synchronous product of two IMDPs I_1 and I_2 is given in Figure 5. As discussed earlier, the folding mapping from PAs to IMDPs, i.e., the red arrow, is not complete and, in principle, this transformation may add additional behaviours to the resultant system. For each state and action in the resultant IMDP, these extra behaviours are essentially a set of probability distributions that do not belong to the convex hull of the enabled probability distributions for that state in the original PA. At first sight, these extra behaviours generated by the folding mapping might be seen as an impediment towards showing that ∼c is a congruence for the synchronous product. Fortunately, as shown below, these extra probability distributions are in fact spurious and do not affect the congruence result. To this aim, and in order to pave the way for establishing the congruence result, we first prove two intermediate results stating that the folding and unfolding mappings preserve bisimilarity on the corresponding codomains.

Lemma 2. Given two IMDPs I_1 and I_2, if I_1 ∼c I_2, then UF(I_1) ∼p_aa UF(I_2).

Proof. Let R be the probabilistic bisimulation justifying I_1 ∼c I_2; we claim that R is also a PA probabilistic bisimulation for UF(I_1) and UF(I_2), that is, it justifies UF(I_1) ∼p_aa UF(I_2). In the following we assume without loss of generality that s_1 ∈ S_1 and s_2 ∈ S_2; the other cases are similar. The fact that R is an equivalence relation and that for each (s_1, s_2) ∈ R, L_1(s_1) = L_2(s_2) follows directly from the definition of ∼c. Let (s_1, µ_1) ∈ T_1: by definition of UF, it follows that µ_1 ∈ Ext(P_{s_1,a_1}) for some a_1 ∈ A(s_1), thus in particular µ_1 ∈ P_{s_1,a_1}, hence µ_1 ∈ CH(∪_{a∈A(s_1)} P_{s_1,a}).
By hypothesis, we have that there exists µ_2 ∈ CH(∪_{a_2∈A(s_2)} P_{s_2,a_2}) such that µ_1 L(R) µ_2. Since µ_2 ∈ CH(∪_{a_2∈A(s_2)} P_{s_2,a_2}), it follows that there exist a multiset of real values { p_{a_2} ∈ R_{≥0} | a_2 ∈ A(s_2) } and a multiset of distributions { µ_{a_2} ∈ P_{s_2,a_2} | a_2 ∈ A(s_2) } such that Σ_{a_2∈A(s_2)} p_{a_2} = 1 and µ_2 = Σ_{a_2∈A(s_2)} p_{a_2} · µ_{a_2}. For each a_2 ∈ A(s_2), since µ_{a_2} ∈ P_{s_2,a_2}, it follows that there exist a finite set of indexes I_{a_2}, a multiset of real values { p_{a_2,i} ∈ R_{≥0} | i ∈ I_{a_2} } and a multiset of distributions { µ_{a_2,i} ∈ Ext(P_{s_2,a_2}) | i ∈ I_{a_2} } such that Σ_{i∈I_{a_2}} p_{a_2,i} = 1 and µ_{a_2} = Σ_{i∈I_{a_2}} p_{a_2,i} · µ_{a_2,i}. This means that µ_2 = Σ_{a_2∈A(s_2)} p_{a_2} · Σ_{i∈I_{a_2}} p_{a_2,i} · µ_{a_2,i} = Σ_{a_2∈A(s_2)} Σ_{i∈I_{a_2}} p_{a_2} · p_{a_2,i} · µ_{a_2,i}. Since for each a_2 ∈ A(s_2) and i ∈ I_{a_2} we have that µ_{a_2,i} ∈ Ext(P_{s_2,a_2}), it follows that (s_2, µ_{a_2,i}) ∈ T_2, thus we have the combined transition s_2 −→_c µ_2 obtained by taking as set of indexes I = { (a_2, i) | a_2 ∈ A(s_2), i ∈ I_{a_2} }, as multiset of real values { q_{a_2,i} ∈ R_{≥0} | (a_2, i) ∈ I, q_{a_2,i} = p_{a_2} · p_{a_2,i} }, and as multiset of transitions { (s_2, µ_{a_2,i}) ∈ T_2 | (a_2, i) ∈ I }: in fact, it is immediate to see that Σ_{(a_2,i)∈I} q_{a_2,i} = Σ_{(a_2,i)∈I} p_{a_2} · p_{a_2,i} = Σ_{a_2∈A(s_2)} Σ_{i∈I_{a_2}} p_{a_2} · p_{a_2,i} = Σ_{a_2∈A(s_2)} p_{a_2} · Σ_{i∈I_{a_2}} p_{a_2,i} = Σ_{a_2∈A(s_2)} p_{a_2} · 1 = 1 and that Σ_{(a_2,i)∈I} q_{a_2,i} · µ_{a_2,i} = µ_2. Moreover, by hypothesis, we have µ_1 L(R) µ_2, as required.

As for the computation of probabilistic bisimulation for IMDPs, we use the standard partition refinement approach as the ground procedure to compute ∼p_aa for PAs. Still, the core part of the approach is to decide bisimilarity of a pair of states. For each state in the given PA, we construct a convex-hull polytope which encodes all possible behaviours that can be taken by a scheduler. Hence, for a given pair of states, we show that verifying whether the two states are bisimilar can be reduced to the comparison of their corresponding convex polytopes with respect to set inclusion. Strictly speaking, for an equivalence relation R on S and s ∈ S, we denote by P^s_R the polytope of feasible successor distributions over equivalence classes of R with respect to taking a transition in the state s. Formally, P^s_R = CH({ [µ]_R | (s, µ) ∈ T }), where, for a given µ ∈ ∆(S), [µ]_R ∈ ∆(S/R) is the probability distribution such that for each C ∈ S/R, [µ]_R(C) = Σ_{s′∈C} µ(s′).

Lemma 3 (cf. [2, Thm. 1]). Given a PA P, there exists an equivalence relation R on S such that for each pair of states s, t ∈ S, it holds that s ∼p_aa t if and only if s R t, L(s) = L(t), and P^s_R = P^t_R.

To simplify the presentation of the proof, we first introduce some notation. Given an equivalence relation R on S, for each distribution µ ∈ ∆(S), let µ̂ ∈ ∆(S/R) denote the corresponding distribution µ̂ = [µ]_R, i.e., µ̂ is such that µ̂(C) = Σ_{s′∈C} µ(s′) for each C ∈ S/R.

Proof. We show the two implications separately. For the implication from left to right, suppose that s ∼p_aa t; this implies that there exists a probabilistic bisimulation R such that s R t and L(s) = L(t). We want to show that P^s_R = P^t_R holds. To this aim, let η ∈ P^s_R. By definition of P^s_R, it follows that there exist a finite set of indexes I_η, a multiset of real values { p_{η,i} | i ∈ I_η } and a multiset of distributions { η_i ∈ P^s_R | i ∈ I_η, ∃(s, µ_{s,i}) ∈ T : η_i = [µ_{s,i}]_R } such that Σ_{i∈I_η} p_{η,i} = 1 and Σ_{i∈I_η} p_{η,i} · η_i = η.
Since s R t and R is a probabilistic bisimulation, it follows that for each i ∈ I_η there exists a combined transition t −→_c µ_t such that µ_{s,i} L(R) µ_t. By definition of combined transition, it follows that there exist a finite set of indexes I_t, a set of transitions { (t, µ_{t,i}) ∈ T | i ∈ I_t } and a multiset of real values { p_{t,i} ∈ R_{≥0} | i ∈ I_t } such that Σ_{i∈I_t} p_{t,i} = 1 and µ_t = Σ_{i∈I_t} p_{t,i} · µ_{t,i}. This implies that for each i ∈ I_t, µ̂_{t,i} ∈ P^t_R. Moreover, since by definition of lifting we have that for each C ∈ S/R, µ_s(C) = µ_t(C), it follows immediately that µ̂_t = µ̂_s, thus we have that η = µ̂_s = µ̂_t ∈ P^t_R, hence P^s_R ⊆ P^t_R. By swapping the roles of s and t, we can show in the same way that P^t_R ⊆ P^s_R, hence P^s_R = P^t_R, as required.

For the implication from right to left, fix an equivalence relation R on S such that for each (s, t) ∈ R it holds that L(s) = L(t) and P^s_R = P^t_R; we want to show that R is a probabilistic bisimulation, i.e., whenever s R t and s −→ µ_s then there exists t −→_c µ_t such that µ_s L(R) µ_t. Let (s, t) ∈ R; if P^s_R = ∅, then the step condition of the probabilistic bisimulation is trivially verified, since there is no transition s −→ µ_s from s that needs to be matched by t. Suppose now that P^s_R ≠ ∅ and consider a transition s −→ µ_s, so that µ̂_s ∈ P^s_R. By hypothesis, µ̂_s ∈ P^s_R = P^t_R, thus there exist a finite set of indexes I, a multiset of distributions { µ_i ∈ P^t_R | i ∈ I } and a multiset of real values { p_i ∈ R_{≥0} | i ∈ I } such that Σ_{i∈I} p_i = 1 and Σ_{i∈I} p_i · µ_i = µ̂_s. This implies, for each i ∈ I, that there exist a finite set of indexes J_i, a multiset of real values { p_{i,j} ∈ R_{≥0} | j ∈ J_i }, and a multiset of distributions { µ_{i,j} ∈ P^t_R | j ∈ J_i } such that Σ_{j∈J_i} p_{i,j} = 1, Σ_{j∈J_i} p_{i,j} · µ_{i,j} = µ_i, and for each j ∈ J_i, µ_{i,j} = µ̂_{t,i,j} where (t, µ_{t,i,j}) ∈ T. Consider now the combined transition t −→_c µ_t obtained by taking as set of indexes J = { (i, j) | i ∈ I, j ∈ J_i }, as multiset of real values { q_{i,j} ∈ R_{≥0} | (i, j) ∈ J, q_{i,j} = p_i · p_{i,j} }, and as set of transitions { (t, µ_{t,i,j}) ∈ T | (i, j) ∈ J }. To complete the proof, we have to show that µ_s L(R) µ_t, that is, for each C ∈ S/R, µ_s(C) = µ_t(C). Let C ∈ S/R: we have that µ_s(C) = µ̂_s(C) = Σ_{(i,j)∈J} q_{i,j} · µ̂_{t,i,j}(C) = Σ_{(i,j)∈J} q_{i,j} · µ_{t,i,j}(C) = µ_t(C), as required.

Lemma 4. Given a PA P and an equivalence relation R on S, for n = |S/R|, it holds that for each (s, t) ∈ R, if P^s_R = P^t_R then (×_{C∈S/R} proj_{e_C} P^s_R) ∩ ∆_n = (×_{C∈S/R} proj_{e_C} P^t_R) ∩ ∆_n.

Proof. The proof is trivial, since from P^s_R = P^t_R it follows that for each C ∈ S/R, proj_{e_C} P^s_R = proj_{e_C} P^t_R. This implies that ×_{C∈S/R} proj_{e_C} P^s_R = ×_{C∈S/R} proj_{e_C} P^t_R, thus (×_{C∈S/R} proj_{e_C} P^s_R) ∩ ∆_n = (×_{C∈S/R} proj_{e_C} P^t_R) ∩ ∆_n, as required.

Lemma 5. Given two PAs P_1 and P_2, if P_1 ∼p_aa P_2, then F(P_1) ∼c F(P_2).

Proof. Let R be the equivalence relation justifying P_1 ∼p_aa P_2; we claim that R is also an IMDP probabilistic bisimulation for F(P_1) and F(P_2), that is, it justifies F(P_1) ∼c F(P_2). In the following we assume without loss of generality that s_1 ∈ S_1 and s_2 ∈ S_2; the other cases are similar. The fact that R is an equivalence relation and that for each (s_1, s_2) ∈ R, L_1(s_1) = L_2(s_2) follows directly from the definition of ∼p_aa. Since P_1 ∼p_aa P_2, it follows from Lemma 3 that P^{s_1}_R = P^{s_2}_R. Additionally, it is not difficult to see that for s_j ∈ {s_1, s_2}, P^{s_j}_R is exactly the polytope of feasible successor distributions over S/R enabled from s_j in the folded IMDP F(P_j). Hence, given s_1 −→ µ_1 in F(P_1), we have [µ_1]_R ∈ P^{s_1}_R = P^{s_2}_R, so there exists s_2 −→ µ_2 in F(P_2) with [µ_2]_R = [µ_1]_R; [µ_1]_R = [µ_2]_R implies that for each C ∈ S/R, Σ_{s∈C} µ_1(s) = Σ_{s∈C} µ_2(s), i.e., µ_1 L(R) µ_2. This means that we have found s_2 −→ µ_2 with µ_1 L(R) µ_2, as required.
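The polytope P^s_R at the heart of Lemmas 3-5 is easy to manipulate numerically: its generators are the R-quotients of the enabled measures, and equality of two such polytopes reduces to mutual membership of generators, i.e., one linear program per generator. The following Python sketch is our own illustration (the helper names and the use of scipy are assumptions, not part of the paper):

# Sketch of P^s_R: the convex hull of the R-quotients [mu]_R of the
# measures enabled from s; equality of two polytopes is checked by
# mutual membership of their generators.
import numpy as np
from scipy.optimize import linprog

def quotients(trans, cls, n_classes):
    # trans: list of dicts, the measures mu with (s, mu) in T.
    out = []
    for mu in trans:
        v = np.zeros(n_classes)
        for x, p in mu.items():
            v[cls(x)] += p
        out.append(v)
    return out

def in_hull(point, gens):
    A_eq = np.vstack([np.array(gens).T, np.ones(len(gens))])
    b_eq = np.append(point, 1.0)
    return linprog(np.zeros(len(gens)), A_eq=A_eq, b_eq=b_eq,
                   bounds=[(0, None)] * len(gens)).success

def same_polytope(gs, gt):
    return all(in_hull(g, gt) for g in gs) and all(in_hull(g, gs) for g in gt)

cls = {"u": 0, "v": 0, "w": 1}.get          # classes {u, v} and {w}
Ps = quotients([{"u": 0.5, "w": 0.5}], cls, 2)
Pt = quotients([{"v": 0.5, "w": 0.5}], cls, 2)
print(same_polytope(Ps, Pt))                 # True: s and t agree on classes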
By using Lemmas 2 and 5 and Proposition 1, we can now show that ∼c is preserved by the synchronous product operator introduced in Definition 10: given three IMDPs I_1, I_2, and I_3, if I_1 ∼c I_2, then I_1 ⊗ I_3 ∼c I_2 ⊗ I_3.

Interleaved approach

In the previous sections, we have considered parallel composition via the synchronous product, which, through the folding mapping, collapses all labels into a single one. Here we consider the other extreme of parallel composition: interleaving only.

Definition 11. Given two IMDPs I_l and I_r, we define the interleaved composition of I_l and I_r, denoted by I_l ∥ I_r, as the IMDP I = (S, s̄, A, AP, L, I) where S = S_l × S_r; s̄ = (s̄_l, s̄_r); A = (A_l × {l}) ∪ (A_r × {r}); AP = AP_l ∪ AP_r; for each (s_l, s_r) ∈ S, L(s_l, s_r) = L_l(s_l) ∪ L_r(s_r); and
I((s_l, s_r), (a, i), (t_l, t_r)) =
  I_l(s_l, a, t_l)  if i = l and t_r = s_r,
  I_r(s_r, a, t_r)  if i = r and t_l = s_l,
  [0, 0]            otherwise.

Probabilistic bisimulation is preserved by the interleaved composition as well: given three IMDPs I_1, I_2, and I_3, if I_1 ∼c I_2, then I_1 ∥ I_3 ∼c I_2 ∥ I_3.

Proof. Let R be the probabilistic bisimulation justifying I_1 ∼c I_2 and define R′ = R × I_{S_3}; we claim that R′ is a probabilistic bisimulation between I_1 ∥ I_3 and I_2 ∥ I_3. The fact that R′ is an equivalence relation follows trivially from its definition and the fact that R is an equivalence relation. The fact that ((s̄_1, s̄_3), (s̄_2, s̄_3)) ∈ R′ follows immediately from the hypothesis that (s̄_1, s̄_2) ∈ R and (s̄_3, s̄_3) ∈ I_{S_3}.

Concluding Remarks

In this paper, we have studied the probabilistic bisimulation problem for interval MDPs in order to speed up the run time of model checking algorithms, which often suffer from the state space explosion. Interval MDPs include two sources of nondeterminism, for which we have considered the cooperative resolution in a dynamic setting. We have revised and extended the compositional reasoning of [9] by further exploring the possibility of defining parallel operators for IMDP models that preserve our notion of probabilistic bisimulation.
A Sociological study on Status of Quality of Life among citizens of Hubli-Dharwad city

Today, population is increasing at such an alarming rate that material resources have failed to keep pace with it. There is less land but more people. There are more mouths to feed but less food. There are more schools and yet more illiterates. Unemployment is increasing. There is very slow improvement in our living conditions. The expression "quality of life" denotes a relatively recent idea that has grown more complex over time, yet it is perhaps one of the most potent factors in determining the character and extent of development and progress of any country. The present study focuses attention especially on the quality of life of Indian urban people. It was conducted in the Hubli-Dharwad city, Karnataka, and seeks to understand the status of urban quality of life. From the findings of the study it is understood that basic facilities are not at all good in the Hubli-Dharwad city. It is therefore suggested that the authorities of the concerned departments and the municipal corporation take immediate measures to facilitate such needy ones in the context of a sociologically healthy environment.

Introduction:

Today, population is increasing at such an alarming rate that material resources have failed to keep pace with it. There is less land but more people. There are more mouths to feed but less food. There are more schools and yet more illiterates. Unemployment is increasing. There is very slow improvement in our living conditions. Even with the best intentions and planning, it may not be possible to solve the problems associated with the expansion of education, unemployment, poverty, and the shortage and inadequacy of civic amenities. Unless these problems are tackled in the context of the total population problem of the country, any planning to develop material or human resources is bound to fail without a concurrent reduction in the incidence of births.

The expression "quality of life" denotes a relatively recent idea that has grown more complex over time, but it is perhaps one of the most potent factors in determining the character and extent of development and progress of any country. Many development activities also affect the environment in a way that typically affects the quality of life of urban people. Environmental pollution is one of the serious problems faced by people, especially in urban areas, which not only experience a rapid growth of population due to high fertility and increasing rural-urban migration but also industrialization, which is accompanied by a growing number of vehicles.

Quality of Life:

The term "quality of life" is extremely complex; it is affected by a number of factors and is interpreted in different ways in the literature. It should be noted that the history of the term itself depends on the work of economists and sociologists including John Kenneth Galbraith, David Riesman and Ronald Freedman, who were associated with the criticism of the consumer lifestyle in the USA. They criticized the orientation of American society towards consumption, arguing that its emphasis on the quantity of produced and consumed goods negatively affects quality of life. Moreover, in such a lifestyle they saw wasted resources and a danger to humanity. Quality of life should not be confused with the concept of standard of living, which is based primarily on income.
Review of Literature:

Nor Rashidah Zainal et al., in their study on "Housing Conditions and Quality of Life of the Urban Poor in Malaysia", measured the quality of life by four dimensions: health status, personal safety, existing social support and involvement in social activities. They witnessed in their study a high number of respondents (52%) claiming to have chronic illnesses, but only 13 per cent were seeking hospital treatment for their illnesses. Respondents also reported feeling vulnerable and stressed. On a poverty scale of 1 to 10, where 1 stands for "very poor" and 10 stands for "not poor", almost 60 per cent ranked themselves as below. However, 50 per cent also reported an increase in their living standards over the past two years. Further, the findings provide empirical evidence of the relationship between poverty, housing conditions, and quality of life. Housing is not only physical shelter but also plays a significant role in a person's physical, mental, and emotional health with regard to the qualitative dimensions provided by the housing condition and the surrounding environment of the housing area. Unfortunately, the housing conditions of the urban poor in Malaysia are lacking in all these aspects and have failed to provide these important dimensions. Due to the strong significant relationship with the quality of life, they highly suggest that housing condition be seriously considered as a socio-economic indicator in the assessment or measurement of urban poverty. A study should also be done on finding the optimal housing conditions of the urban poor in Malaysia in terms of the physical aspects of the house (design, size, materials used) and the surrounding areas (location, landscape, availability of public amenities and services). Failure to address the housing issues of the urban poor might cause the group to be continuously marginalized in society and deprived of a quality of life.

Jha found that the quality of life of slum dwellers is low and that it differs from slum to slum.

In their study on "Measuring the Quality of Urban Life and Neighbourhood Satisfaction: Findings from Gazimagusa (Famagusta) Area Study", Derya Oktay and Ahmet Rustemli revealed that, compared to satisfaction with an individual's dwelling and the immediate neighborhood and its attributes, satisfaction with the overall quality of urban life in Famagusta is lower. While almost two-thirds (66 per cent) of the overall sample were satisfied with their neighborhood, just 40 per cent were satisfied with the quality of urban life. In general, people in Famagusta are more likely to be dissatisfied than satisfied with recreational facilities, greenery, maintenance of streets, and traffic in their city. However, an important point needs to be attended to when one interprets the mean values.
Considering the limited range of responses, the standard deviations are high. This means that there were high degrees of difference among the city dwellers with respect to the satisfaction domains of the city and urban life, and a preliminary study by the authors has proved the existence of these differences (Oktay, Rustemli, and Marans, 2009).

Statement of the Problem:

The present study intends to assess and analyse the quality of life in an urban setting. There is a widely prevalent notion that urban quality of life is good as compared to rural. But because of the government's negligence and the rapid urbanization and industrialization processes, the quality of life in urban settings of India has gone down heavily. In this context it is necessary to assess and understand the real picture of urban quality of life in India. Services such as water supply, sanitation, drainage of storm water, treatment and disposal of waste water, management of solid and hazardous wastes, and the supply of safe food, water and housing are all unable to keep pace with urban growth. Also, the unplanned location of industries in urban and suburban areas, followed by traffic congestion, poor housing, poor drainage and garbage accumulation, causes serious problems.

Objectives of the Study:

The present study has been undertaken with the following objectives:
1) To assess the indicators of quality of life in the selected study area.
2) To analyse the impact of urban quality of life on the well-being of the family in particular and society in general.
3) To analyse the sociological implications of the findings of the study and to put forth suggestions towards the improvement of quality of life.

Importance of the Study:

The present study focuses attention especially on the quality of life of Indian urban people. It was conducted in the Hubli-Dharwad city, Karnataka, and seeks to understand the status of urban quality of life. The significance of the present study is that it analyses the sociological implications of the findings, that is, how urban quality of life influences family well-being in particular and societal well-being in general. So far, no study has been conducted exclusively in a sociological context; the few studies available deal exclusively with the environmental perspective rather than sociological aspects.

Data Analysis and Interpretation:

The city of Hubli-Dharwad has a population of 10,00,000, from which an eligible universe of 1,00,000 residents who are able to read and write was identified. While identifying the universe or population for the study, the following criteria were considered in order to fulfil the objectives of the study:
1. The respondent should be above the age of 30 years.
2. Respondents should be married.
3. Respondents should have undergone education.
Based on the above requirements, the eligible population in the study area was 1,00,000, and out of this total universe, 384 respondents were selected for the present study with the help of random sampling using the lottery method.

The table showing the status of public toilets in the Hubli-Dharwad city indicates that almost 71 per cent and 19.27 per cent of the respondents in the study were "not at all satisfied" and "not very satisfied", respectively, with the status of public toilets. Only 3 per cent were of the opinion that they were fairly satisfied, and 7.04 per cent stated "don't know / not applicable".
Table 2 shows that 57.55 per cent and 22.92 per cent of the respondents were "not very satisfied" and "not at all satisfied", respectively, with the cleanliness of the cities. Further, only 3.9 per cent were fairly satisfied and 2.34 per cent were very satisfied with the cleanliness of both cities; 13.29 per cent were of the opinion "don't know / not applicable".

The next table reveals that almost 46 per cent of the respondents in the study were "not at all satisfied" and 25.26 per cent were "not very satisfied" with the quality of the air prevailing in the Hubli-Dharwad city. Only 7.03 per cent and 8.08 per cent were very satisfied and fairly satisfied, respectively, with the quality of air; 14.32 per cent responded "don't know / not applicable".

Furthermore, almost 72 per cent of the respondents were very dissatisfied and 21.36 per cent were dissatisfied with the status of the roads in the Hubli-Dharwad city. Only 2.08 per cent and 1.04 per cent were of the opinion that they were satisfied and very satisfied, respectively. Further, 4.16 per cent of the respondents opined that they were neither satisfied nor dissatisfied with the status of roads in the Hubli-Dharwad city.

It is also revealed that 59.13 per cent of the respondents in the study were very dissatisfied with the availability, quantity and quality of water, which is a basic need for any individual in particular and society in general. Further, 25.52 per cent of the respondents were of the opinion that they were also dissatisfied with the water facility available at their residence. Only 8.07 per cent of the respondents opined that they were satisfied, and 1.56 per cent stated that they were very satisfied. Further, 5.72 per cent of the respondents expressed that they were neither satisfied nor dissatisfied with the water facility at their residence.

On the respondents' opinion of overall satisfaction with their life, 71.09 per cent of the respondents opined that they were not at all satisfied. Further, only 10.93 per cent of the respondents were of the opinion that they were satisfied with their overall life; 5.72 per cent and 4.68 per cent stated "very satisfied" and "fairly satisfied", respectively.

Findings of the Study:
1. The majority, i.e., 71 per cent and 19.27 per cent of the respondents in the study, were not at all satisfied and not very satisfied, respectively, with the status of public toilets.
… & Tripathi, in their study on "Quality of Life in Slums of Varanasi City: A Comparative Study" in the year 2014, witnessed different results from different parameters. They witnessed that 70 per cent of sample households in the slums used electricity, while 30 per cent of households used kerosene as a source of lighting; it is notable that the majority of households had no legal connection. Cooking LPG is used by 36 per cent of the sample households; though very few people have got an LPG connection, most of them use the small cylinders of 2 kg and 4 kg. Further, 32 per cent of slum residents use illegal electric connections for their heaters, and the rest use kerosene, coal and cow-dung cakes for cooking. 64 per cent of residents of the slum areas use water from hand pumps, while 36 per cent use municipal tap water for drinking. Further, out of 150 houses, 128 houses had only one room, with poor sewerage and no adequate arrangement for the dumping of domestic wastes, as a result of which only one third of households used the place fixed by the municipality for the dumping of domestic wastes. 48 per cent of slum dwellers used private clinics and 35 per cent used government hospitals, while 17 per cent used traditional medical practitioners. The area has a poor literacy condition in the slums of Varanasi, where female literacy is very poor, which indicates the bad condition of women; overall, the living conditions in these slums are poor.

Mojdeh Nikoofam and Abdollah Mobaraki, in their study on "Assessment of Quality of Life in the Urban Environment; Case Study: Famagusta, N. Cyprus", found that Famagusta, as one of the most important cities in North Cyprus, is evaluated according to these indicators. Although people are not pleased or satisfied with the maintenance and management of the trails, the safe urban environment and the sense of place attachment enhance individual well-being, the level of satisfaction, and the quality of life in the city. Furthermore, mixed land uses, a familiar or friendly environment, diversity, easy access to different types of housing, and the cultural aspects of the city help increase the level of satisfaction.

In their study on "A Perception Survey for the Evaluation of Urban Quality of Life in Kocaeli and a Comparison of the Life Satisfaction with the European Cities", Nihal Senlier, Reyhan Yildiz and E. Digdem Aktas found that, in the industrial city of Kocaeli, the industrial and residential areas overlapped over time and developed in an unhealthy and unplanned manner. The shifting of development areas towards agricultural and forest areas as a result of the rapid increase in population, and the continual decrease in environmental quality, are emerging as very important threats to the sustainable development of Kocaeli city. The main environmental problems are the pollution of the Gulf as a result of domestic and industrial wastes, and the air, water and noise pollution resulting from industry. The dense accommodation structure of the city and the inadequacy of green areas in the urban areas are important factors in the decrease in air quality. On the other hand, the D-100 highway and the railway passing through the center of the city cause noise pollution.
Spatial Evolution of Phosphorus Fractionation in the Sediments of Rhumel River in the Northeast Algeria

The objective of the present study is the characterization of the spatial evolution of phosphorus forms in the sediments of Rhumel River, located in northeast Algeria, during winter conditions. Sediment samples were collected along the river in Constantine city during the year 2012. The samples were subjected to physicochemical characterization and metals analysis. Phosphorus was fractionated by a sequential extraction procedure into exchangeable, oxyhydroxide-bound, calcium-bound, organic and residual fractions. The distribution of the different forms of phosphorus in the sediments appears to be influenced by the physicochemical characteristics, which depend on the sampling location. Phosphorus speciation along the river is characterized by the predominance of inorganic phosphorus forms. The exchangeable fraction is the lowest; the phosphorus concentration in this fraction does not exceed 20 mg/kg. The fraction bound to calcium is the most important in retaining inorganic phosphorus, with concentrations varying from 328 to 490 mg/kg. Phosphorus bound to oxyhydroxides represents an average of 172 mg/kg. Along the river, the contribution of the different fractions to the phosphorus retention follows the order: exchangeable < bound to oxyhydroxides ~ organic < bound to calcium < residual. As estimated by the sum of the exchangeable, bound-to-oxyhydroxides and bound-to-organic-matter fractions, an average of about 28% of the total phosphorus can become bioavailable. The predominant fraction in the Rhumel sediments changes from residual upstream of Constantine city to bound-to-calcium downstream from it.

Introduction

Phosphorus is an essential element in the functioning of aquatic ecosystems; it is considered one of the major nutrients required by primary producers (Liu et al., 2012). However, it is also identified as a key nutrient responsible for the eutrophication of aquatic environments, which has become a serious environmental problem. Phosphorus is naturally present in the aquatic environment. It has various natural sources, including leaching from rocks, drainage of forests and soil erosion. During the last century, the amount of phosphorus in freshwater has been greatly increased and amplified by human influence through industrial, agricultural and domestic activities. Sediments play an important role in the phosphorus cycle; they can adsorb large quantities of it and can also release it into the overlying water column when the concentration in water decreases and/or under conditions of strong water dynamics or a change of redox potential (Yang & He, 2010). The nature of the chemical and physical links of phosphorus with sediments is the most important factor that governs its release. The mechanisms involved can be of a chemical or biological nature or a combination of both (Slomp, Van Raaphorst, Malschaert, Kok, & Sandee, 1993). Generally, phosphorus in sediments can be adsorbed by Fe, Al and Mn oxyhydroxides, bound in organic substances and bound to calcium (Balzer, 1986). The mobilization of phosphorus can be affected by many factors such as temperature, dissolved oxygen, pH and the nature of the sediments (Hasnaoui et al., 2001). Under anoxic conditions, the release of the chemically bound phosphorus is due to the reduction of iron oxides (Sallade & Sims, 1997), the mineralization of organic matter (Golterman, 1995) and the acidification of sediments (Golterman, 1998).
The main objective of the present work is the evaluation of phosphorus mobility in the sediments of Rhumel River, which traverses Constantine city in eastern Algeria. To our knowledge, no such study has been undertaken in the area.

Study Site

Rhumel River is located in northeastern Algeria (Figure 1). It originates from the northwestern Bellaa in Setif. It traverses the high plains of Constantine with a southwest-northeast orientation until Constantine city. Then it suddenly changes direction, turns to the right and flows obliquely towards the northwest (Mébarki, 1984); it joins the Oued Endja around Sidi Merouene in Mila town. The main tributary of the river is Oued Boumerzoug, which drains industrial and urban zones. The climate of the area is of a semiarid type, characterized by wet winters and dry, hot summers. The quality of the river water is characterized by a neutral to alkaline pH and high electrical conductivity.

Samples Collection and Pretreatment

The studied sediments were collected at five stations along the river (Figure 1, Table 1) in January 2012. Samples were placed in plastic bags and transported to the laboratory, where they were dried at 40 °C, then ground and sieved using a 0.215 mm sieve and conserved in polyethylene bottles until use.

Physicochemical Characterization

Measurements of pH and electrical conductivity were performed in suspensions formed with distilled water. Organic matter was determined by loss on ignition at 550 °C. The total phosphorus was extracted with HCl (3.5 M) after calcination. Phosphorus was measured in the extracts by UV-visible spectrophotometry using the method of Murphy and Riley (1962). In this method, orthophosphate ions react with molybdate to form a yellow phosphomolybdic complex; ascorbic acid specifically reduces the phosphomolybdic complex to give a blue color. The absorbance was measured at 700 nm with a Shimadzu UV-1650PC spectrophotometer. The metals were determined after calcination and acid digestion by flame atomic absorption using a Varian AA140 atomic absorption spectrometer.

Fractionation of Phosphorus in Sediments

The different forms of phosphorus were extracted using the fractionation procedure described by Hieltjes and Lijklema (1980). The target phases and the reagents used are illustrated in Table 2. Phosphorus in all extracts was determined by the method described above. All results are average values of triplicate determinations.

Physicochemical Characterization

The physicochemical results are presented in Table 3. The sampled sediments have an alkaline pH, reflecting the dominance of limestone and clay and the buffering capacity associated with these sedimentary materials (Nassali, Ben bouih, & Srhiri, 2002). At the first sampling station (R1), the lowest pH and the highest electrical conductivity are observed, showing the effect of the industrial zone located upstream. Generally, high values of electrical conductivity of the sediments are due to enrichment by monovalent and divalent ions (Nassali et al., 2002). The low water contents reflect a low fluidity of these sediments (Abdallaoui, Derraz, Bhenabdallah, & Lek, 1998). The substantial organic matter contents, ranging from 4% to 6%, are probably due to the degradation of dead cells of the fauna and flora within the river and to the leaching of surrounding soils (Abdallaoui, 1998). Calcium is the most abundant element in the studied sediments. The measured concentrations of this metal ranged from 134.48 g/kg to 182.35 g/kg.
The Rhumel sediments are quite rich in iron and aluminum. The concentrations of the two metals vary between 15-20 g/kg and 13-18 g/kg, respectively. Generally, the metal concentrations in the Rhumel sediments follow the order Mn < Al < Fe < Ca. Along the river, only calcium and manganese show a linear correlation in their spatial evolution (R: 0.86). Concentrations of total phosphorus vary from one site to another. The highest contents are observed at the two stations R1 and R4, located downstream of the industrial zone and of Constantine city, respectively. Along the river, phosphorus is correlated with organic matter. The spatial evolution of its total concentration shows a decrease downstream. Phosphorus concentrations found in this study are similar to those measured in Oued D'Kor (1287 mg/kg) and Oued Beht (1343 mg/kg) in Morocco (Abdallaoui, 1998).

Fractionation of Phosphorus in Sediments

The sequential fractionation scheme used (Hieltjes & Lijklema, 1980) allowed us to distinguish five fractions: soluble phosphorus; phosphorus bound to iron, aluminum and manganese oxyhydroxides; phosphorus bound to calcium; organic phosphorus; and the residual fraction. The last fraction is calculated as the difference between total phosphorus and the sum of the four other fractions.

Spatial Evolution of the Fraction Bound to Oxyhydroxides

In sediments, phosphorus is frequently associated with Fe, Al and Mn oxides and hydroxides (Pardo, Lopez-Sanchez, & Rauret, 2003). This fraction plays an important role in phosphorus exchange at the sediment-water interface (Kemmou et al., 2006). It is easily mobilized and is responsible for an increase of eutrophication (Zhou, Gibson, & Zhu, 2001). In the Rhumel sediments, concentrations of phosphorus extracted by NaOH ranged from 130 mg/kg to 221 mg/kg (Figure 3). The spatial evolution of this fraction shows that phosphorus is more closely related to Fe than to Al and Mn. Consequently, anoxic conditions mediated by bacteria result in the release of sorbed phosphorus from iron oxyhydroxides. The Fe/P ratio of 2 has been regarded as a threshold of phosphorus saturation in soils or sediments (Blomqvist, Gunnars, & Elmgren, 2004). Elsewhere, the molar ratio P/(Fe+Al) has been considered a better indicator of the potential availability of phosphorus in river sediments (Nair, Portier, Graetz, & Walker, 2004). In the present study, Fe/P ratios are above 2; the highest calculated value concerns the sediments collected downstream from Constantine city. The calculated P/(Fe+Al) molar ratios vary around 0.05, implying the importance of phosphorus immobilization along the river.
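As a back-of-the-envelope illustration of the two indicators just discussed, the following short Python computation uses assumed mid-range values (Fe ~ 17 g/kg, Al ~ 15 g/kg and total P ~ 1.3 g/kg, in line with the Moroccan concentrations the text cites as comparable; the actual Table 3 values are not reproduced here):

# Illustrative check of the Fe/P threshold and the P/(Fe+Al) molar ratio,
# with assumed mid-range concentrations (g/kg) and approximate molar masses.
molar_mass = {"P": 30.97, "Fe": 55.85, "Al": 26.98}   # g/mol
mass = {"P": 1.3, "Fe": 17.0, "Al": 15.0}             # g/kg, assumed values

print(mass["Fe"] / mass["P"])                          # Fe/P ~ 13, well above 2
moles = {k: mass[k] / molar_mass[k] for k in mass}     # mol/kg
print(moles["P"] / (moles["Fe"] + moles["Al"]))        # ~ 0.05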
Spatial Evolution of the Fraction Bound to Calcium

This fraction is sensitive to low pH. It is assumed to be composed mainly of calcium-bound phosphorus, such as apatite, e.g. Ca5(PO4)3(OH, F, Cl), and phosphorus bound to calcium carbonate. The apatite component is highly insoluble and redox-insensitive; it can only be attacked by strong acids (Smolders et al., 2006). The Rhumel sediments are characterized by high concentrations of phosphorus bound to calcium, varying from 328.35 mg/kg to 490.79 mg/kg (Figure 4). This is related to the significant calcium contents (Table 2). According to Golterman (1995), when sediments are acidified, a part of the phosphorus bound to calcium carbonate might be solubilized. In the present study, pH does not vary significantly; consequently, this fraction is the most uniform along the river. Generally, phosphorus bound to calcium is considered the main route of permanent storage of phosphorus in sediments and soils (Gonsiorczyk, Casper, & Koschel, 1998). This fraction is released from sediments with difficulty; consequently, it is not easily used by algae (Kozerski & Kleeberg, 1998; Kaiserli, Voutsa, & Samara, 2002).

Spatial Evolution of the Organic Fraction

In the studied sediments, concentrations of organic phosphorus are lower than those of inorganic phosphorus (Figure 5). The spatial distribution of this fraction is generally similar to that of the organic matter. At the sampling stations located in the urban area (R2, R3, R4), a marked increase in organic phosphorus is observed. An increase in the organic matter content of the sediment leads to an increase in the amount of associated phosphorus. It has been suggested that organic phosphorus in sediments is predominantly associated with humic material by complexation and chelation reactions involving metallic cations (Garcia & de Iorio, 2003). The complexes of organic matter with iron can also adsorb phosphorus (Kemmou et al., 2006). Under anoxic conditions, the organic fraction can become bioavailable after sediment mineralization.

The speciation of sedimentary phosphorus in Rhumel River (Figure 6) shows that it is mostly in inorganic forms. Along the river, the exchangeable form is the lowest compared to the other fractions. Phosphorus availability in the Rhumel sediments appears to be related to phosphorus sorption by oxyhydroxides and complexation with organic material. Upstream of the confluence with the Boumerzoug tributary (R1, R2), the contribution of the organic fraction is more important than that of the oxyhydroxides; downstream, however, the two fractions are closer. The fraction related to calcium is the most important part of the inorganic phosphorus. It has been suggested that the high phosphorus contents of this fraction could also be explained by the fact that a part of the phosphorus extracted with NaOH is readsorbed on calcium (De Groot & Golterman, 1990). In addition, a part of the organic fraction can be solubilized by the acid extraction, resulting in an overestimation of the phosphorus amount extracted during this step. The contribution of the residual fraction decreases along the river. In this fraction, phosphorus can be associated with crystalline iron oxides, silicates (Buffle, de Vitre, Perret, & Leppard, 1989) and crystalline aluminum-silicate species (Jonsson, 1997). The predominant phosphorus fraction in the Rhumel sediments changes from residual upstream of Constantine city to bound-to-calcium downstream from it.

Figure 6. Spatial evolution of phosphorus distribution in Rhumel sediments

Conclusion

Sedimentary phosphorus in Rhumel River is mainly inorganic. The fraction directly available is the lowest. The two fractions, residual and bound to calcium, considered permanent, are the most important. As estimated by the sum of the exchangeable, bound-to-oxyhydroxides and bound-to-organic-matter fractions, about 28% of the total phosphorus in Rhumel sediments can become bioavailable.
Anomalous refraction of airborne sound through ultrathin metasurfaces Similar to their optic counterparts, acoustic components are anticipated to flexibly tailor the propagation of sound. However, practical applications, e.g. for audible sound with large wavelengths, are frequently hampered by the issue of device thickness. Here we present an effective design of metasurface structures that can deflect transmitted airborne sound in an anomalous way. This flat lens, made of spatially varied coiling-slit subunits, has a deep-subwavelength thickness. By elaborately optimizing its microstructures, the proposed lens exhibits high performance in steering sound wavefronts. Good agreement has been demonstrated experimentally by a sample around the frequency 2.55 kHz, incident with a Gaussian beam at normal or oblique incidence. This study may open new avenues for numerous daily life applications, such as controlling indoor sound effects by decorating rooms with light metasurface walls.

It is well known that optic components (OCs) can be flexibly designed to control light propagation by gradually tailoring phase fronts, which leads to a great number of practical applications. Comparatively, acoustic components (ACs) have received much less attention, although they are expected to provide similar functionalities in acoustics. This dilemma mainly originates from two critical factors that limit the performance of conventional ACs, especially for airborne sound in the audible regime, which is closely related to our daily life. The first major limitation stems from the acoustic opaqueness of natural solids for airborne sound, due to their extreme impedance contrast with respect to air. The opaqueness strongly suppresses the transmission efficiency of ACs. This drawback is considerably relaxed by recent progress on air-based artificial structures. In 2002, Cervera et al. reported [1] that sonic crystals can be used to design various acoustically transparent refractive devices at the frequency of the first band. For the higher frequency bands, sonic crystals have also been proposed for fabricating planar lenses based on the fascinating negative refraction effect [2][3][4]. Intuitively, the excellent transparency of sonic crystals stems from the substantially improved impedance matching due to the existence of air channels for direct sound propagation. Unwanted reflections from such devices can be further reduced by attaching carefully designed anti-reflection layers [5].
Transparency can also be realized in acoustic metamaterials [6][7][8][9][10][11][12][13][14][15][16][17][18][19][20], which often connects closely with resonant acoustic responses [6][7][8][9][10] or strong anisotropies [11][12][13][15][16][17][18][19][20] of subwavelength units. The unnatural sound responses endow the metamaterials with unprecedented capabilities in tailoring sound, such as subwavelength imaging [11][12][13][14][15][16] and cloaking [18][19][20]. Note that the transparent artificial structures are not necessarily periodic. For example, highly efficient sound focusing [21] and demultiplexing [22] devices have been designed using irregular arrays of acoustic scatterers. The second barrier to the high performance of ACs is their notable thickness. Similar to conventional OCs, the thicknesses of ACs are often much larger than the wavelength of operation. This severely restricts the miniaturization of ACs, especially at low frequency (e.g. the audio range). Nor can this issue be solved by artificial materials based on bulk effects. Recently, the two-dimensional (2D) equivalent of the metamaterial, i.e. the so-called metasurface structure (MS), has attracted tremendous interest in the optics community [23][24][25][26][27][28][29]. Yu and coworkers [23] demonstrated an unusual manipulation of light wavefronts through an ultrathin MS, where the reflected or refracted waves are redirected and follow the so-called generalized Snell's law (GSL). The anomalous wavefront redirection is accomplished by designing a constant gradient of the phase accumulation over a flat layer decorated with spatially varying plasmonic units. In terms of physics, the momentum mismatch between the incident wave and the deflected wave is compensated by the MS-induced transverse momentum. Based on a similar principle, the flat MS can even reshape light wavefronts in nearly arbitrary ways provided that appropriate 2D spatial phase profiles are molded [23][24][25][26][27]. Compared with conventional OCs, the ultrathin property enables the MSs to be more compatible with on-chip nano-photonic devices, which is of significant importance for future applications. Based on the surface equivalence principle [28] or the optical nanocircuit concept [29], design routes have been further proposed to improve the coupling efficiency to the desired transmitted beams through the implementation of matched impedance. The concept of the gradient MS can also be introduced into acoustics to circumvent the thickness restriction imposed on conventional ACs. Recently, by using ultrathin MSs designed with transversally gradient phase [30] or impedance [31,32] profiles, novel sound manipulations of reflected wavefronts have been theoretically investigated. Here we focus on an acoustic MS that demonstrates anomalous refraction (AR) behavior for airborne sound in the kilohertz regime. The flat MS is elaborately designed by arranging spatially varied subunits with coiling slits, where the elongated sound paths enable substantial phase delays. The proposed design manifests high coupling efficiency into the desired transmitted beam through a MS with a deep subwavelength thickness (~1/6.7 of the operational wavelength), which simultaneously overcomes both limitations inherent in conventional ACs. The redirected sound wavefronts have been successfully validated by experimental field patterns. To the best of our knowledge, so far this is the first design and experimental demonstration of the GSL-based AR phenomenon in acoustics.
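For reference, the generalized Snell's law that governs the anomalous refraction discussed here relates the transmitted and incident angles to the transverse phase gradient imposed by the metasurface; a standard statement (the explicit formula is not reproduced in the extracted text) is

$$\sin\theta_t - \sin\theta_i = \frac{\lambda_0}{2\pi}\,\frac{d\phi}{dx},$$

where $\lambda_0$ is the wavelength in the background medium and $d\phi/dx$ the phase gradient along the interface. For a supercell of period $a$ covering a full $2\pi$ span, $d\phi/dx = 2\pi/a$, so a normally incident beam is deflected to $\theta_t = \arcsin(\lambda_0/a)$.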
(Note that during the review stage of the paper, we found another two experimental works related to the wavefront shaping of sound waves by thin metasurfaces [33,34].) The present design strategy can be flexibly extended to modulate transmitted wavefronts and realize a wide variety of functionalities unattainable with conventional ACs.

Results

Design of the transmitted MS for airborne sound. To efficiently steer the transmitted beam, it is necessary to introduce highly controllable and position-dependent phase shifts over the whole 2π range. In optic systems, the desired phase coverage can be readily obtained by anisotropic resonators through the cross coupling between polarizations. Unfortunately, this scheme cannot be extended to airborne sound, which is essentially a scalar wave. By resorting to building blocks made of coiling slits, Ref. 30 recently realized the required phase profile in an ultrathin sample (and consequently demonstrated a high-quality manipulation of anomalous reflection wavefronts in full-wave simulations). A similar route is employed here. In a microscopic view, the coiling structure forces the sound to travel along a zigzag path and thus effectively elongates the propagation distance of sound. Note that the transmitted phase delay cannot simply be predicted from the total slit length, since it is determined self-consistently by the interference among the waves traveling back and forth (due to the unavoidable impedance mismatch at the slit exits). However, the elongated path indeed offers the possibility of achieving wide phase coverage over a deep subwavelength thickness, as shown in Fig. 1b. In fact, zigzag channels have been extensively employed to modulate sound in applied acoustics [35]. Such folded structures are presently attracting new interest in designing metamaterials with fascinating properties 36-41, e.g. negative refraction and zero indices. Our design strategy is described as follows. As depicted in Fig. 1a, each basic building block is assembled from a pair of vertical bars and several horizontal bars, where the air space forms zigzag slits. Specifically, each horizontal bar always starts from the top-left so as to form an outlet at the top-right of the subunit. This treatment provides nearly equal distances between neighboring outlets when different subunits are assembled together, which facilitates the practical sample design. In principle, there are many structural parameters that can be tailored to attain the required amplitude and phase responses. However, a comprehensive analysis of all parameters is cumbersome and beyond the scope here. In Fig. 1a all dimensions are fixed except the length l of the horizontal bars. As exhibited later, this variable plus the total number n of horizontal bars can already provide a wide range of local amplitude and phase responses. The accuracy of the design is safely guaranteed by full-wave simulations based on the finite-element method, where the solid bars are modeled as acoustically rigid with respect to air (see Methods). Dissipative losses are not included in the simulations. We first calculated the sound field distribution for a periodic array of identical subunits (specified by l and n), excited by a plane wave at normal incidence. From the transmitted far field the amplitude and phase shift can be extracted. As an example, in Fig. 1b we present a set of phase and amplitude spectra for a typical configuration, where the amplitude is normalized by that of the incident wave.
It is observed that the phase accumulation grows rapidly near the resonances and indeed covers a wide range of values over a thickness of 2 cm. Here we focus on a specific frequency of 2.55 kHz (corresponding to an air wavelength λ ≈ 13.3 cm), which is selected after full consideration of the multi-scale nature of the practical sample (see Methods). For this prefixed frequency, repeating this process for numerous different configurations gives eight optimized basic building blocks, as labeled in Fig. 1c, with geometry details listed in Methods. In Fig. 1c we present the corresponding transmitted phase shift (red) and amplitude (blue) responses. It demonstrates that the eight discrete phase shifts cover the entire phase range and increase with a step of ~π/4 between nearest neighbors. The corresponding transmitted amplitudes are considerably large (fluctuating around 0.77, achieved by intentionally choosing configurations near resonances), which is of great benefit to high transmission efficiency. Similar to the optic cases, our MS is constructed as a one-dimensional periodic array of supercells, each formed by assembling the eight different subunits together. As shown below, thanks to the nearly constant phase gradient and amplitude profiles, such a thin MS (with thickness ~λ/6.67) effectively controls the transmitted wavefronts, deflecting a normally incident beam to the GSL-predicted angle θt = arcsin(λ/a) ≈ 56.4° (with a = 16 cm the supercell period). The GSL equation also implies that the desired transmitted beam would become evanescent provided that the incident angle is tuned beyond a critical value θc = arcsin(1 − 2π/(k a)) ≈ 9.6°, where k is the free-space wave number. Note that the current design is considerably different from the coiling MS employed in Ref. 30, which aims to demonstrate anomalous reflections of sound wavefronts. Apart from removing the rigid substrate (used to produce total reflection), a crucial modification here is the elimination of the air spacing among the coiling subunits. As shown later, this treatment significantly improves the conversion efficiency of the transmitted energy into the AR beam. Otherwise, the direct propagation of sound through the interspace would lead to a considerable contribution to the ordinary beam; this unwanted component could even dominate the transmission, since the sound energy tends to transport through the straight channels directly rather than squeeze through the narrow and long coiling slits. Another striking difference is the relaxation of the number of horizontal bars in each subunit (which is discrete and finite). This facilitates acquiring simultaneously the desired local phase and amplitude responses without incurring heavy simulation tasks.

Numerical demonstrations. To verify the AR behavior predicted by the GSL, we first simulated a system of finite size at the prefixed operational frequency, 2.55 kHz. This provides a useful guideline for practical experiments, where the whole system can be heavily restricted by the multi-scale nature and thus the finite-size effect should be understood in advance. Specifically, here a MS with length 224 cm (~17λ) is considered, impinged normally by a Gaussian beam of width 80 cm (~6λ). Figures 2a and 2b present the amplitude and temporal fields, which manifest a pair of transmitted wavefronts strikingly deflected from the incidence. The bright one propagating toward the right-hand side is exactly the desired AR beam, as predicted from the GSL with deflection angle θt ≈ 56.4° (see arrows). This transmitted beam can simply be regarded as a consequence of constructive interference among the deep-subwavelength sound sources emitted from the coiling slits.
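As a quick sanity check on the numbers quoted above, the following sketch (parameters taken from the text; not the authors' code) evaluates the GSL deflection and critical angles and the idealized point-source picture that is used later in the paper to explain the conversion efficiency:

```python
import numpy as np

# Assumed numbers from the text: f = 2.55 kHz, c = 340 m/s, subunit width
# d = 2 cm, supercell period a = 8*d = 16 cm, phase step pi/4, amplitude ~0.77.
c, f = 340.0, 2550.0
lam = c / f                                     # ~13.3 cm
a = 0.16                                        # supercell period (m)

theta_t = np.degrees(np.arcsin(lam / a))        # anomalous refraction angle
theta_c = np.degrees(np.arcsin(1 - lam / a))    # critical incident angle
print(f"theta_t ~ {theta_t:.1f} deg, theta_c ~ {theta_c:.1f} deg")  # ~56.4, ~9.6

# Idealized Huygens-Fresnel model: eight point sources with phases n*pi/4 and
# equal amplitudes; power in diffraction order m relative to total transmission.
n = np.arange(8)
amp = 0.77 * np.ones(8)
phase = n * np.pi / 4
for m in (-1, 0, 1):
    af = np.sum(amp * np.exp(1j * (phase - 2 * np.pi * m * n / 8)))
    print(f"order {m:+d}: relative power {abs(af)**2 / (8 * np.sum(amp**2)):.3f}")
# With a perfectly linear phase ramp all power goes to the +1 order; the weak
# -1 and 0 order beams in the full-wave results stem from the amplitude
# fluctuations and inter-subunit coupling neglected in this sketch.
```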
In terms of physics, the momentum mismatch between the AR beam and the incident one is compensated by the transverse gradient of the phase shifts. Due to the periodicity of the arranged supercells, the anomalous beam can also be regarded as the +1 order diffraction [42], whereas the faint beam outgoing toward the left-hand side corresponds to the −1 order. It is of interest that the 0 order branch, i.e., the so-called ordinary refraction propagating along the incidence, is strongly suppressed. This beam, though very weak, can be noticed in the phase pattern displayed in Fig. 2c, away from the interference region created by the two relatively stronger nonzero-order beams. Different from the dominant +1 order branch, the weak −1 and 0 order beams stem mostly from the imperfect design of the phase and amplitude responses. Overall, Fig. 2 shows that a field region of only several wavelengths is enough to demonstrate the AR phenomenon.

Experimental validations. Below we present experimental validations of the above numerical results. The sample has been fabricated using a commercial 3D printer; it is made of plastic and behaves as acoustically rigid with respect to air. Fig. 3a shows a photograph of the supercell assembled from eight different subunits, which has a length of 16 cm, a thickness of 2.0 cm and a height of 1.2 cm. The whole sample is formed by periodically arranging a total of 7 supercells together (more supercells are used for the oblique incidence later). In the experiment, the sample is tightly sandwiched between a laboratory table and a covering Plexiglass plate. For the frequency range under consideration, the parallel gap in between behaves as a waveguide and supports only 2D propagation of sound. Absorbers are placed at the open ends of the waveguide to reduce unwanted reflection from the free space. A Gaussian beam (of width ~60 cm, i.e. ~4.5λ) is produced by a narrow microphone together with a parabolic concave mirror [43]. The sound field behind the sample is measured by two identical microphones (of diameter ~0.7 cm, B&K Type 4187): one is fixed to act as a phase reference, and the other is movable to scan the field distribution behind the sample point by point. Finally, the acoustic signals are analyzed by a multi-analyzer system (B&K Type 3560B), from which both the wave amplitude and phase can be extracted. In the upper panels of Figs. 3b-3d, we present the experimental amplitude, temporal, and phase fields excited by the Gaussian beam under normal incidence. The field regions displayed are 102 cm × 62 cm (~8λ × 5λ), slightly above the interface of the sample (gray). From the measured temporal and phase patterns, high-quality planar wavefronts can be observed in the bottom-right field region, associated with the notable outgoing beam in the amplitude distribution. These sound field profiles demonstrate clearly that the MS bends the sound propagation toward the right-hand side, where the direction of the wavefront precisely coincides with the theoretical prediction from the GSL (indicated by the green arrow). For comparison, in the lower panels of Figs. 3b-3d we present the corresponding full-wave simulations similar to Fig. 2, but with a shorter sample and a narrower Gaussian beam (the same as in the experiment). It is observed that the measured sound field profiles agree very well with the full-wave simulations, especially in the region of high amplitudes that demonstrates the AR behavior.
The difference observable in the weak field region may come from unavoidable measurement noise or reflection from the boundary.

Performance evaluations. In Fig. 4a we present the numerical transmission spectra for an idealized system, i.e. an infinite array of supercells. It is observed that the total transmission is considerably high (~80%) around the designed frequency (2.55 kHz). To further evaluate the conversion efficiency into the AR beam, the transmission is rigorously decomposed into its diffractive components by implementing a Fourier transform of the transmitted field. Within this frequency range, only three diffractive beams are allowed, i.e. the −1, 0 and +1 orders. As shown in Fig. 4a, most of the transmitted energy is converted into the +1 order beam, i.e. the desired AR beam predicted by the GSL, with much less energy coupled into the other two. In particular, the energy transported through the so-called ordinary refraction (i.e. 0 order) becomes negligible near 2.55 kHz, consistent with Fig. 2. This is because the wave energy can only be funneled through the slits, which produces the single AR beam as predicted from the GSL. Therefore, the transmitted component of the ordinary refraction stems only from the imperfect design of the MS, e.g. the fluctuating amplitude responses and the inevitable near-field coupling among the subunits. This is different from many designs in optics where the wave energy can directly penetrate through the dielectrics supporting the metallic resonators, leading to a considerable contribution to the ordinary beam. In Fig. 4b the red line shows the frequency-dependent conversion efficiency of the AR beam, defined by its energy ratio to the total transmission. It is considerably high (>80%) over the whole frequency range under consideration, although the design is optimized for a specific frequency. The frequency broadening effect has also been verified well in experiments. To roughly explain this behavior, we have studied the frequency dependences of the phase and amplitude responses for the eight subunits individually (similar to Fig. 1b). Within this frequency range, overall, the phase shifts cover a full 2π span and exhibit a monotonic increase from subunit #1 to #8, consistently leading to positive (though nonuniform) transverse momenta. So it is the average effect (over all subunits) that results in the broadening of the operating frequency. This qualitative picture is further tested by a simple model based on the Huygens-Fresnel principle: the slit exits are approximated as subwavelength-sized point sources and assigned, in order, the simulated phase shifts and amplitudes. The radiation ratio of the desired AR beam can be extracted straightforwardly from the sound field superposed by such an array of point sources. As manifested by the blue line, high performance indeed covers a wide frequency range, with almost perfect conversion near the designed 2.55 kHz. Comparatively, in the full-wave simulation, the maximum conversion efficiencies (exceeding 98%) slightly deviate from the prescribed frequency and occur at 2.50 kHz and 2.58 kHz. As displayed in the insets, for both frequencies the unwanted −1 order transmitted beam is considerably reduced with respect to 2.55 kHz (see Fig. 2a). This improvement can be attributed to the unavoidable coupling effect among the different subunits. The angular robustness of the AR effect is highly desirable for the further realization of relevant devices (e.g. focusing lenses), based on the primary design starting from normal incidence. In Fig. 5a we present the power transmission over a wide range of incident angles, together with its diffractive components. (Note that the +2 order diffractive beam, though small, appears when the incident angle θi < −41.8°.) It is observed that as a whole the total transmission is considerably high, where the major contribution comes from the desired AR beam. This leads to the angularly robust, high conversion efficiency in Fig. 5b (red line). Physically, this broad-angle effect stems from the extreme anisotropy of the subunits: sound can only propagate through the slits and there is no direct transverse coupling among different slits (except via the exits). Therefore, both the transmitted amplitude and phase responses vary slowly with incident angle. Again, the high conversion can be roughly understood from the simple model (see blue line). The difference becomes pronounced near the angles associated with Wood's anomalies (see green arrows), because of the increasing coupling among the different subunits in the real acoustic MS. To realize oblique incidence in the experiment, it is convenient to tilt the sample with respect to the fixed measurement system. Here we present examples for the incident angles θi = 5°, −10° and −20°, where the first is close to the critical angle θc, and the last corresponds to a relatively large angle allowed in the measurement. The instantaneous pressure fields are shown in Fig. 6, where the upper and lower panels correspond to the experimental data and numerical comparisons, respectively. Again, for each case the sound field pattern displays close resemblance between the measured and simulated results: the dominant diffraction comes from the AR beam predicted by the GSL (as indicated by the green arrow).

Discussion

Note that in our design the sample thickness is determined by the subunit supporting the largest phase delay. Qualitatively, a periodic array of the coiling subunits can be viewed as a thin layer of effective medium with a high refractive index, such that the transmitted sound is heavily delayed through it. In spite of this, the effective medium cannot be replaced by a natural solid with low sound speed, e.g. soft rubber. This is explained as follows. For the effective medium, roughly, the bulk modulus κe can be estimated from the filling ratio, and the refractive index ne can be estimated from the elongated slit length over the layer thickness. For example, the effective parameters (scaled by those of air) for the eighth configuration employed here are κe ~ 1.5 and ne ~ 7, which further give rise to an effective mass density ρe ~ 75 and impedance Ze ~ 10. As shown in Fig. 1b, the moderate Ze provides relatively wide resonances over a considerable transmission background, which benefits the performance of the MS. In fact, the impedance matching could be further improved by optimizing more geometric parameters, such as introducing a spatial gradient of the horizontal bar length within each subunit. In contrast, the impedance of a natural solid is usually two to three orders of magnitude higher than that of air due to the large density ratio. Besides the strong impedance contrast, another notable reason (to exclude natural solids) is the huge transverse impedance that arises from the impenetrability of the vertical bars.
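The effective parameters quoted above follow from the standard scaled acoustic relations n = √(ρ/κ) and Z = √(ρκ); a one-line check of the arithmetic (assumptions as stated in the comments):

```python
import math

# Sketch of the effective-medium estimate in the Discussion (values scaled by
# those of air). Assumed standard acoustic relations: n = sqrt(rho/kappa) and
# Z = sqrt(rho*kappa), so rho = n**2 * kappa.

kappa_e = 1.5   # effective bulk modulus, from the filling ratio (text)
n_e = 7.0       # effective index, elongated slit length / layer thickness (text)

rho_e = n_e**2 * kappa_e           # ~73.5, quoted as ~75
z_e = math.sqrt(rho_e * kappa_e)   # ~10.5, quoted as ~10
print(f"rho_e ~ {rho_e:.0f}, Z_e ~ {z_e:.0f}")
```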
The extreme anisotropy enables an almost invariant phase delay for different incident angles, since the wave energy can only be transported along the thickness direction of the sample (associated with an angularly invariant propagation distance). This guarantees the effectiveness of the broad-angle AR designed only from normal incidence, and also enables the further realization of arbitrary wavefront manipulations based on a similar design. Dissipative loss is an important issue in practical applications; it mainly comes from the viscosity of air within a thin layer near the channel surface. In our experimental setup, it is not easy to provide a quantitative characterization of the dissipation. In spite of this, the predicted AR behavior has been well demonstrated, as displayed by the experimental field patterns. In fact, similar system parameters have been widely employed in coiling metamaterials and other holey structures [11,13,38,39]. The influence of dissipation could be further reduced as the system size is scaled up. In conclusion, we have demonstrated an effective design for the AR effect of airborne sound through an ultrathin MS (~1/6.7 of the operational wavelength). By elaborately optimizing the subunit geometries, the proposed flat MS exhibits numerically excellent performance: high conversion efficiency over a broad range of frequencies and incident angles. The measured sound field patterns exhibit high-quality redirected wavefronts and agree well with those predicted from full-wave simulations. In principle, a similar design can be extended to the 3D case (by using coiled hole arrays) to achieve arbitrary 3D shaping of wavefronts, such as generating acoustic vortices with well-defined orbital angular momenta and non-diffracting Bessel beams. This study may pave the way to significant advances in steering transmitted wavefronts by compact acoustic elements.

Methods

Simulations. Throughout the paper, all full-wave simulations are performed with the commercial finite-element solver (COMSOL Multiphysics), in which a sound speed of 340 m/s is employed for a practical room temperature of ~15 °C. In the simulations, all microstructures with their actual geometric sizes are fully considered. The plastic frame of the coiling MS is modeled as acoustically rigid. (According to the well-known mass-density law, the transmission through a plastic plate of thickness 0.8 mm, i.e., the smallest thickness involved here, can be estimated to be as low as 0.004 around 2.55 kHz.) Except for the periodic boundary condition applied in the specified cases, the radiation boundary condition is set for the remaining situations. The total power transmission is calculated by integrating the Poynting vectors and normalizing to the incidence. The relative weights of the various diffractive branches are extracted after precisely calculating the scattering matrix of the complex sample.

Sample preparation. In the procedure of sample design, a practical issue originating from the multi-scale nature of the whole experimental system must be fully taken into account. There are four length scales involved, in descending order: the total length of the sample, the wavelength, the size of the subunit, and the size of the microstructure in each subunit. In our experiment, the maximum feature size, i.e. the sample length, is limited by the size of the laboratory table (150 cm × 300 cm), and the minimum feature size, i.e. the thickness of the horizontal bars, is determined by the manufacturing accuracy (~0.1 mm).
A comprehensive assessment of these leads to the currently used wavelength (~13.3 cm) and the sample geometry, i.e. the thickness (h) and length (d) of the subunit, h = d = 2.0 cm, and the thickness (t) and spacing (s) of the horizontal bars, t = 0.8 mm and s = 1.0 mm, respectively. Besides the specified geometries, each subunit is characterized by two tunable parameters: the number (n) and the length (l) of the horizontal bars. The former is shown directly in the inset of Fig. 1b. The latter, for the eight subunits, is listed in order as follows: 8.4, 8.9, 11.4, 9.5, 12.4, 14.2, 17.1, and 13.2 millimeters. The sample (glued together from many supercells) is fabricated from thermoplastics via a 3D printing technique, with each supercell finished in a single print.
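The 0.004 transmission figure quoted in the Methods follows from the normal-incidence mass-density law; a sketch of the arithmetic, with an assumed printed-plastic density of 1200 kg/m³ (not stated in the text):

```python
import math

# Mass-density-law estimate of the power transmission through a thin plate at
# normal incidence: T ~ (2*rho0*c0 / (omega*m))**2, with surface density
# m = rho_plate * t. The plastic density is an assumed typical value.

rho0, c0 = 1.21, 340.0   # air density (kg/m^3) and sound speed (m/s)
rho_plate = 1200.0       # assumed density of the printed plastic (kg/m^3)
t = 0.8e-3               # smallest plate thickness (m), from the text
f = 2550.0               # operating frequency (Hz)

m = rho_plate * t
omega = 2 * math.pi * f
T = (2 * rho0 * c0 / (omega * m)) ** 2
print(f"mass-law power transmission ~ {T:.4f}")  # ~0.003, same order as the quoted 0.004
```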
2018-04-03T05:25:41.987Z
2014-06-13T00:00:00.000
{ "year": 2014, "sha1": "34bcb72d626ded92852d8cc7f2aecb3ec31224ec", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep06517.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "34bcb72d626ded92852d8cc7f2aecb3ec31224ec", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics", "Materials Science", "Computer Science" ] }
234256134
pes2o/s2orc
v3-fos-license
ANSWERS TOOLS FOR UNCERTAINTY QUANTIFICATION AND VALIDATION ANSWERS is developing a set of uncertainty quantification (UQ) tools for use with its major physics codes: WIMS/PANTHER (reactor physics), MONK (criticality and reactor physics) and MCBEND (shielding and dosimetry). The Visual Workshop integrated development environment allows the user to construct and edit code inputs, launch calculations, postprocess results and produce graphs, and recently uncertainty quantification and optimisation tools have been added. Prior uncertainties due to uncertainties in nuclear data or manufacturing tolerances can be estimated using the sampling method or using the sensitivity options in the physics codes combined with appropriate covariance matrices. To aid the user in the choice of appropriate validation experiments, the MONK categorisation scheme and/or a similarity index can be used. An interactive viewer has been developed which allows the user to search through, and browse details of, over 2,000 MONK validation experiments that have been analysed from the ICSBEP and IRPhE validation sets. A Bayesian updating approach is used to assimilate the measured data with the calculated results. It is shown how this process can be used to reduce bias in calculated results and reduce the calculated uncertainty on those results. This process is illustrated by application to a PWR fuel assembly. INTRODUCTION When calculating best estimate reactor parameters of interest it is not only important to provide an accurate estimated value of a given parameter, but also to provide a reliable estimate of the uncertainty on that estimated value. The move in recent years from pessimistic estimates to BEPU (best estimate plus uncertainty) requires the use of sophisticated tools for uncertainty quantification (UQ) [1]. The aim of an ongoing strand of ANSWERS [2] development work is to establish UQ tools for use with the major ANSWERS' physics codes, including: WIMS/PANTHER (reactor physics), MONK (criticality and reactor physics) and MCBEND (shielding and dosimetry). For some years ANSWERS has been developing Visual Workshop, an Integrated Development Environment to accompany the physics codes. This allows the user to construct and edit code inputs, launch calculations, post-process results and produce graphs, and recently uncertainty quantification and optimisation tools have been added. Initial UQ tool development focused on the sampling method in which the user can specify statistical distributions rather than numerical values for user-specified input parameters [3]. We have also produced sampled nuclear data libraries in which the data on the evaluated nuclear data files are selected from statistical distributions, rather than using the reported central values. Monte Carlo sampling or Latin hypercube sampling can be chosen by the user. Additionally, capabilities have been included in the physics codes to calculate sensitivities which can be combined with a covariance matrix for the input parameters as an alternative way of undertaking UQ. These methods are described and results for a PWR fuel assembly are presented. The above approaches do not account for evidence obtained from plant measurements or validation experiments, which can be used to refine best estimate values for parameters and their uncertainties. When using validation data, a major concern is what constitutes appropriate data. 
Two main tools are provided to aid the user in the choice of appropriate experiments: the MONK categorisation scheme (see ref [4] for details) and a similarity index described in Section 5. To aid this, an interactive viewer has been developed which allows the user to search through the details of roughly 2,000 MONK validation experiments that have been analysed from the ICSBEP and IRPhE validation sets. ANSWERS has investigated a number of methods for combining plant calculations and validation data including: data assimilation, Bayesian updating, maximum likelihood estimation and extreme value theory. In this paper we concentrate on the Bayesian updating approach and describe how this is implemented in ANSWERS software. It is shown how this process can be used to reduce bias in calculated results and reduce the uncertainty on the estimated quantities. This process is illustrated by application to a PWR fuel assembly. VISUAL WORKSHOP Visual Workshop is the ANSWERS' IDE (integrated development environment) for preparing and verifying models, launching calculations, post-processing results and graphical display, see Figure 1. It is designed to work with ANSWERS' physics codes, including WIMS, MONK®, MCBEND and RANKERN. Visual Workshop also contains tools to help the user undertake uncertainty analyses with ANSWERS' codes, as described in Sections 3 to 6 below. SAMPLING TOOL FOR UNCERTAINTY QUANTIFICATION Tools have also been implemented in Visual Workshop for uncertainty quantification and optimization [5]. A sampling methodology is available for estimating prior uncertainties, by running a number of calculations in which uncertain input parameters are varied by choosing values from user-specified distributions; Monte Carlo, stratified and Latin hypercube sampling options are currently available [3]. Wilks' method [6] is also available for user-defined probability and confidence levels [3]. Figure 2 shows an example input for the sampling tool, for estimating prior uncertainties arising from manufacturing tolerances (geometry, composition and density). In this simple, illustrative example, a 19 × 19 UO2 fuel assembly partially immersed in water is investigated. The uncertainty in the calculated value of k-effective (using MONK's "K(THREE)" estimator) arising from uncertainties in fuel enrichment, fuel density, length of the fuel pins, pitch of the fuel pins, fuel pellet diameter and clad thickness is estimated. This is achieved by sampling the uncertain parameters from normal distributions in this instance; truncated-normal, uniform and beta distributions are also available. Only five sampled calculations are requested in order to keep the output to manageable proportions for display in the figure. From the output it is a simple matter to estimate the mean and standard deviation, and such basic statistics are saved in the runref.statistics.csv file. The nuclear data used by the codes are themselves subject to uncertainty. The values of the cross-sections etc. in the evaluated nuclear data files, such as the JEFF, ENDF/B, CENDL and JENDL series of evaluations, are provided with uncertainties by the evaluators. The cross-sections etc. must be processed to produce the continuous energy (BINGO) nuclear data libraries required by the MONK and MCBEND Monte Carlo codes and also to produce the multigroup libraries required by WIMS/PANTHER.
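As an illustration of the sampling workflow just described (this is a sketch, not the Visual Workshop tool itself; the surrogate response function and all nominal values, standard deviations and sensitivities below are hypothetical stand-ins for a real MONK run):

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 1000  # the paper requests five for display; more gives better statistics

params = {  # nominal value, standard deviation (hypothetical)
    "enrichment_wt%": (4.0, 0.02),
    "fuel_density":   (10.4, 0.05),
    "pin_pitch_cm":   (1.26, 0.002),
    "pellet_diam_cm": (0.82, 0.001),
}

def k_effective(x):
    # Hypothetical linear surrogate standing in for a MONK calculation.
    sens = {"enrichment_wt%": 0.02, "fuel_density": 0.01,
            "pin_pitch_cm": 0.05, "pellet_diam_cm": 0.03}
    return 1.0 + sum(sens[k] * (x[k] - params[k][0]) for k in params)

# Monte Carlo sampling of the uncertain inputs from normal distributions:
samples = [{k: rng.normal(mu, sd) for k, (mu, sd) in params.items()}
           for _ in range(n_samples)]
keff = np.array([k_effective(s) for s in samples])
print(f"k-eff mean = {keff.mean():.5f}, std = {keff.std(ddof=1):.5f}")
```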
In order to propagate the evaluated nuclear data uncertainties through the physics calculations, sets of nuclear data libraries have been produced in which the evaluated parameters are drawn from statistical distributions chosen to represent the nominal values and their associated uncertainties. These are processed into sets of sampled BINGO and WIMS libraries as described in [5]. Sets of 25, 60 and 120 Latin hypercube sampled libraries have been produced. In addition, a set of 1,000 Monte Carlo sampled libraries has been generated in the WIMS energy group scheme as a reference set. These libraries can be chosen for use with the UQ calculations to allow the uncertainty resulting from nuclear data to be evaluated. The sampled libraries can also be used in combination with variations in the geometric and compositional data to estimate the total uncertainty [3]. VALIDATION DATABASE VIEWER Once the prior uncertainty has been estimated, the next task is to choose measured data for validation. Here "validation" is defined to be the process by which measured data are combined with calculated results to refine the calculated values for parameters of interest; i.e. to remove calculation bias and update the estimated uncertainty. At the time of writing, the ANSWERS' criticality database contains 828 Tier 1 (independently checked) experimental configurations and 1205 Tier 2 (self-checked) configurations for use with the MONK reactor physics and criticality code, see ref [5] for more details. The Tier 1 and 2 validation cases are displayed in Figure 3. To assist with the choice of measured data for MONK analysis, a validation database viewer has been implemented in Visual Workshop. The viewer allows the user to search and browse the Tier 1 and 2 cases in the MONK validation database, and click on individual cases to display details, as shown in Figure 4. SIMILARITY INDEX The similarity index gives a value indicating how similar the nuclear data sensitivities of a benchmark system B and an application system S are, ranging essentially from 0 (no similarity) to 1 (complete similarity). A Similarity Index tool evaluates the similarity indices for each of the validation experiments appropriate to the chosen application and displays the results in descending order of magnitude. Figure 5. Screen Shot from the Similarity Index Tool An example is shown in Figure 5. In this case, the top 20 matches all have ESUM similarity indices between 0.94 and 0.95. (Also given are the total sensitivity and two quantities, AVALS and DSUM, associated with an alternative similarity measure not discussed here.) VALIDATION A number of methods are being made available within Visual Workshop to combine the measured data with the calculated results to improve the estimated value of k-effective and its uncertainty. The UK Working Party on Criticality (WPC) produced a summary of general techniques available to derive the safety criterion used in criticality assessments [8], including (where EPD = error in physical data and USL = upper sub-critical limit):
- EPD - standard error method;
- EPD - standard deviation method;
- Systematic bias and uncertainty - subtraction;
- Systematic bias and uncertainty - addition;
- USL method 1 - H to fissile material ratio;
- USL method 1 - mean log of exponential energy of neutrons causing fission (MLENCF);
- USL method 1 - mean log of exponential energy of neutrons causing capture (MLENCC).
In addition, a Bayesian updating scheme is available based on the method discussed in ref [9], and also the generalized linear least squares (GLLS) method described below.
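A sketch of a similarity index between the nuclear-data sensitivity vectors of a benchmark B and an application S. The normalized inner product below is one common definition of such an index; the exact ESUM formula is not reproduced in the text, so this is an assumed, representative form:

```python
import numpy as np

def similarity_index(s_b: np.ndarray, s_s: np.ndarray) -> float:
    """Ranges from ~0 (no similarity) to 1 (identical sensitivity profiles)."""
    return float(np.dot(s_b, s_s) / (np.linalg.norm(s_b) * np.linalg.norm(s_s)))

# Hypothetical sensitivity vectors over nuclide/reaction/energy-group bins:
rng = np.random.default_rng(0)
s_app = rng.normal(size=50)
s_exp = s_app + 0.3 * rng.normal(size=50)  # an experiment resembling the application
print(f"similarity ~ {similarity_index(s_app, s_exp):.2f}")
```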
The estimated bias for application case α, k_{α,bias}, is given by (using the Einstein summation convention over repeated suffices):

$$k_{\alpha,\mathrm{bias}} = S_{\alpha i}\,C_{ij}\,S_{\varepsilon j}\left[(S C S^{T}+V)^{-1}\right]_{\varepsilon\delta}\frac{(\Delta k)_{\delta}}{k_{\delta}},$$

where $S_{\alpha i}$, $S_{\varepsilon i}$ are the sensitivities of the application (α) and experiment (ε), respectively, to nuclear data item i, $C_{ij}$ is the nuclear data covariance matrix, $V_{\varepsilon\delta}$ is the covariance between experiments ε and δ resulting from uncertainties in dimensions and compositions etc., and $(\Delta k)_{\delta}/k_{\delta}$ is the relative code bias for experiment δ. The posterior uncertainty, $\sigma_{\alpha,\mathrm{post}}$, is related to the prior uncertainty, $\sigma_{\alpha,\mathrm{prior}}$, by:

$$\sigma_{\alpha,\mathrm{post}}^{2} = \sigma_{\alpha,\mathrm{prior}}^{2} - S_{\alpha i}\,C_{ij}\,S_{\varepsilon j}\left[(S C S^{T}+V)^{-1}\right]_{\varepsilon\delta}\,S_{\delta k}\,C_{kl}\,S_{\alpha l}.$$

EXAMPLE CALCULATION An example calculation has been performed for a GBC-32 flask holding PWR fuel elements with a burnup of 45 GWd/te and five years of cooling; actinide-only compositions were transferred from the reactor to the flask using the COWL material transfer facility in MONK [10]. The similarity to 1967 experimental configurations was evaluated and those with similarity index > 0.78 were chosen, giving 175 experiments for consideration. The prior uncertainty was estimated using the sensitivity matrix and the nuclear data covariance matrix (C). The MONK calculations were run using 5,000 superhistories per stage with a target standard deviation of 0.0002 on k-effective. The results of the GLLS analysis are displayed in Table I. Note that the use of the experimental data has more than halved the estimated uncertainty on the calculated result. Also, the bias-corrected value of k-effective plus three standard deviations is less than 0.95. Note, however, that the correlation between experiments within an experimental series has been neglected. Estimating such correlations is a complex and time-consuming process. A way to approach this is described in [11,12]. A simple approach to get around it is discussed below. Note that, although the posterior estimate of k-effective is higher than the prior estimate, the posterior estimate of k-effective plus three standard deviations is lower than the prior estimate. For comparison, results for two of the methods listed in Section 6 are displayed in Table II. Both of the methods indicate that the maximum allowable value for the prior k-effective is less than the value of 0.9241 arrived at above. Hence the operation would not be considered safe, despite the GLLS analysis indicating that the posterior best estimate value of k-effective is nearly nine standard deviations below 0.95. Neglecting correlations in the uncertainties of experiments in a single series can lead to an underestimate of the uncertainty. One way to address this is to use only a single experiment from each series. In this case the experiment with the highest similarity index was chosen from each series. This reduced the number of experimental configurations used to 13. The results of the revised analysis are shown in Table III. Again, the use of the experimental data leads to a significant reduction in the estimated uncertainty. In this case 0.95 is more than seven standard deviations above the posterior best estimate value of k-effective. Although the posterior estimate of k-effective is higher than the prior estimate, the posterior estimate of k-effective plus three standard deviations is again lower than the prior estimate. For comparison, results for two of the methods listed in Section 6 are displayed in Table IV. In this case, use of the EPD methods would again indicate that the operation is not safe, but the USL method would suggest that it is safe.
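A compact numerical sketch of the update defined by these formulas (synthetic sensitivities and covariances, purely illustrative; this is not the ANSWERS implementation):

```python
import numpy as np

# Indices: i,j = nuclear data items; eps,delta = experiments; alpha = application.
rng = np.random.default_rng(1)
n_data, n_exp = 30, 5

S_exp = 0.1 * rng.normal(size=(n_exp, n_data))   # experiment sensitivities S_{eps i}
S_app = 0.1 * rng.normal(size=n_data)            # application sensitivities S_{alpha i}
C = np.diag(rng.uniform(1e-5, 1e-4, n_data))     # nuclear data covariance C_{ij}
V = np.diag(np.full(n_exp, 1e-6))                # experimental covariance V_{eps delta}
bias_exp = rng.normal(0.0, 0.002, n_exp)         # relative code biases (dk/k)_delta

M = S_exp @ C @ S_exp.T + V                      # (S C S^T + V)
w = np.linalg.solve(M, bias_exp)                 # M^{-1} (dk/k)
k_bias = S_app @ C @ S_exp.T @ w                 # estimated application bias

var_prior = S_app @ C @ S_app                    # prior variance from nuclear data
reduction = S_app @ C @ S_exp.T @ np.linalg.solve(M, S_exp @ C @ S_app)
var_post = var_prior - reduction                 # posterior variance (always >= 0)

print(f"bias = {k_bias:+.5f}")
print(f"sigma prior = {np.sqrt(var_prior):.5f}, posterior = {np.sqrt(var_post):.5f}")
```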
This illustrates some of the issues associated with establishing safe critical limits, but also shows that the ANSWERS tools available for uncertainty analysis can greatly assist in providing increased confidence, or when used carefully could potentially support a less conservative approach. CONCLUSIONS ANSWERS is developing a coherent set of tools to aid the user in the estimation of uncertainty on predicted values. The tools are implemented in the Visual Workshop IDE so that they are available for use with ANSWERS' WIMS/PANTHER, MONK and MCBEND physics codes. The tools have been applied to the criticality safety of irradiated PWR fuel elements in a flask. More traditional approaches are compared to the best estimate plus uncertainty (BEPU) approach. The BEPU approach is shown to provide a higher degree of confidence in the criticality safety of the configuration studied.
2021-05-11T00:05:59.446Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "090b063cd64753a49ccf66490e6320c04a168450", "oa_license": "CCBY", "oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2021/01/epjconf_physor2020_15015.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "fddbf779963a918f0dacec8c99134b28a1e7237b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Computer Science" ] }
150141616
pes2o/s2orc
v3-fos-license
“Religious feeling, morality and ethical feelings: the case study on Indonesia” There is no guarantee that people will follow their professional code of ethics. A large number of violations occur in almost every organization. In this study we argue that commitment toward the code of ethics, which is related to ethical feelings, is imperative for predicting whether a person will obey their professional code. This study predicted that commitment to the code of ethics is determined by individual morality (i.e. moral judgment and moral maturity) and religious feeling. The survey was conducted through an online questionnaire distributed to Indonesian employees from various sectors and to undergraduate students. The analysis revealed that moral judgment cannot predict commitment toward the code of ethics. The results showed that religious feeling and moral maturity have a positive association with commitment to the code of ethics. In addition, these two concepts also produced a favorable effect on moral judgment. Discussion, implications, and limitations are provided in the final part of the article.

INTRODUCTION Since ethical issues are becoming a crucial and imperative concern among corporations, various types of formal professional codes of ethics have been established. The existence of such formal codes of conduct can be seen as defining the behavior organization members are expected to follow. Wotruba (1990) stressed three major purposes of establishing a code of ethics: to state the organization's concern for ethical issues, to share and transfer organizational values to its members, and to influence the behavior of organization members. Despite the various benefits of having a formal code of ethics, scholars have argued that established ethical guidelines are not sufficient to shape individual behavior (Chao, Li, & Chen, 2016; Somers, 2001). In order to strengthen the codes, each organization member should have commitment to the professional code of ethics (Chao et al., 2016) and blend it with other organizational systems.

The literature suggests that commitment is a consequence of the interaction between personal traits and experiences, organizational forces, and the alignment among those factors (Kaur, 2017). Many scholars believe that commitment is an imperative construct in shaping productive behavior at work. It is believed to be able to reduce turnover intention (Mohamed, Taylor, & Hassan, 2006; Vandenberghe & Tremblay, 2008), enhance work motivation (Kaur, 2017), and increase performance (Camilleri & Van Der Heijden, 2007; Sharma, Kong, & Kingshott, 2016). However, commitment has a focal point (Redman & Snape, 2005): a person can hold multiple commitments toward various things, such as a supervisor, the CEO, a union representative, or a code of conduct.

Professional commitment consists of three dimensions, namely normative, affective, and continuance (Allen & Meyer, 1990; Hall, Smith, & Langfield-Smith, 2005; Meyer & Allen, 1991). It covers rational consideration, emotional attachment, and individual obligation regarding one's working conditions. As such, commitment can also be understood as an individual's specific mental judgment about their surroundings. Similarly, commitment to codes of ethics should be determined by individual judgment about ethical dilemmas. As people come to a judgment through their moral reasoning, it should affect their level of commitment toward particular issues, including the code of ethics.
Social cognitive theory suggests that people's judgment depends on the level of their moral development (Kracher, Chatterjee, & Lundquist, 2002; Martynov, 2009). Each level reflects people's orientation when dealing with dilemmas and alternatives. Although the literature is fairly consistent regarding the effect of moral development, the issue might be distinct when associated with moral maturity. Instead of differentiating people by level, moral maturity focuses on the ability to distinguish right from wrong and the willingness to act morally (Chao et al., 2016; Mujtaba & Sims, 2006; Philibert, 1982). Mature people might have better judgment on moral issues and a stronger commitment to the formal code of ethics.

Judgment can also be influenced by individual values and norms (Finkelstein, Hambrick, & Cannella, 2009), which are reflected in consistent behavior or activity. Since ethical issues are closely related to right or wrong, people who are committed to a religion and its doctrine might hold a strong framework for ethical dilemmas. Religion usually consists of meaning, values, and norms, and for some religions also covers specific behavioral guidelines and codes of conduct (Zimmer, Jagger, Chiu, Beth, & Rojo, 2016). High engagement in religious activity such as prayer, reading holy texts, preaching, and others might shape individual judgment and attitudes toward the code of ethics.

This study was partially inspired by the work of Chao et al. (2016). Instead of considering commitment to codes of ethics as a consequence, they used it as a predictor of moral judgment. A problem with their study is that they did not really discuss commitment to the code of ethics, but rather established hypotheses about the effect of having a professional code of conduct. This study intends to answer two specific questions: how do moral maturity and religiosity affect individual moral judgment, and how does moral judgment influence the level of individual commitment to codes of ethics?

LITERATURE REVIEW Various studies have discussed and conceptualized commitment. The most popular view is commitment as a psychological state of the employee indicating whether or not to be involved with the organization (Meyer & Allen, 1991). It has components that reflect individual desire (i.e. affective), need (i.e. continuance), and obligation (i.e. normative) (Allen & Meyer, 1990; Meyer & Allen, 1991). Commitment has also been defined beyond a psychological state, as an attitudinal and behavioral tendency (Mowday, Porter, & Steers, 1982). While attitudinal commitment can be understood as an individual's mindset regarding their involvement with the organization, behavioral commitment is related to how a person engages with the organization through the performance of actions (Meyerson & Kline, 2008; Mowday et al., 1982).

Both streams of definition indicate that commitment is related to emotional attachment, rational decision, and moral obligation influencing people's attitudes and behavior. In more recent work, commitment has been considered a multilevel and multifocal construct (Redman & Snape, 2005): a person might be more attached to something other than their organization, such as a union representative, a supervisor, or a code of conduct.
Commitment to the code of ethics can likewise be understood as a psychological state or an attitudinal and behavioral tendency toward the code of ethics. It has to indicate whether people hold a stable mindset of commitment to their professional code and perform consistent actions following that code. We argue that the existence of a code of ethics is not sufficient unless it is shared and stressed among employees. Evidence shows that although employees are aware of having a professional code, large numbers of code violations still remain (Chokprajakchat & Sumretphol, 2017; Somers, 2001). As such, being committed to ethical guidelines is more important than merely having a formal code of conduct.

Kohlberg's theory of cognitive moral development explains why people might end up with different judgments of a moral dilemma for the same existing issues (Shawver & Sennetti, 2009; Wright, 1995). The theory suggests that people differ in their level of moral development. The levels consist of the pre-conventional level, which is oriented to self-interest; the conventional level, which refers to social norms and values; and the post-conventional level, which relies on ideal rights (Kracher et al., 2002; Martynov, 2009). Moral judgment has been argued to be a behavioral prediction, established before a decision (Gold, Pulford, & Colman, 2015). This means people might perform a different action while holding the opposite judgment. Gold et al. (2015) explained that doing the right action cannot completely be justified as right; it can still remain morally discreditable. Chao et al. (2016) described moral judgment as a result of moral reasoning, which is defined as the process of understanding the situation, recognizing ethical issues and dilemmas, and arriving at a moral judgment (Chao et al., 2016; Shawver & Sennetti, 2009). It can be said that moral reasoning is a decision-making process. However, as argued by Gold et al. (2015), the result of moral reasoning (i.e. moral judgment) can lead to a different behavioral decision. They provided evidence that people's judgments regarding a moral dilemma were incongruent with their behavior as a result of economic consequences.

Scholars agree that moral maturity has an important role in shaping individual reasoning and judgment (Chao et al., 2016; Ferguson & Cairns, 1996). Maturity is often associated with development and progress. The concept has been defined as "growth or ability to distinguish right from wrong, to develop a framework of ethical values, and learn to act morally" (Chao et al., 2016; Jadack, Hyde, Moore, & Keller, 1995; Mujtaba & Sims, 2006). Philibert (1982) suggested that an imperative measure of the maturity level is the individual's willingness to take a responsible view of ethical questions. A high level of maturity should lead people to arrive at moral judgments about ethical dilemmas. However, people can be trapped in moral truncation (Fields, 1973), which indicates a delay in the individual's maturation. Many causes can produce such a delay. Evidence shows that the maturity level of individuals is influenced by the social environment and the family. Ferguson and Cairns (1996) found that children and adults in conflict areas have lower maturity than those in more stable areas. Additionally, the maturity level is also determined by the learning process in the family and from parents (Simmons, 1982).
Religiosity is closely related to religion, which can be understood as a "specific foundation of principles that are organized around distinct systems of belief, practices, and rituals" (Zimmer et al., 2016). While most studies have focused on overall commitment, this article scrutinizes the specific dimension of individual commitment toward the code of ethics. The measurement of this dimension does not only cover individual attitudes and values, but also considers behavioral tendencies (Laczniak & Murphy, 2006). Previous literature has investigated behavioral and attitudinal determinants of commitment. However, this article considers individual competencies of moral judgment as an important factor affecting commitment to the code of ethics.

HYPOTHESES Although attitude can be influenced by various factors, beliefs and values are imperative as they consist of fundamental principles and meaning. Jin and Drozdenko (2003) found that the values held by managers influence their ethical attitudes. Religion does not only consist of rituals and practices, but also has norms and values that are the fundamental reason for such religious activity (Zimmer et al., 2016). The more actively a person practices their religion, the more embedded the norms of that religion become. Since religion consists of positive norms, which encourage people to engage in good things, an individual with a high level of religiosity should arrive at ethical judgments. The experiment of Piazza and Sousa (2014) revealed that those with a high level of religiosity made negative judgments toward consensual incest. Religiosity was also found to influence students' negative perception of corruptive behavior (Yahya et al., 2015), to decrease hedonism (Hamzah et al., 2014), and to make people less likely to cheat (Bloodgood et al., 2008) and more likely to choose "halal" products (Mukhtar & Mohsin Butt, 2012).

H1: There is a positive association between individual religiosity and commitment to the code of ethics.

H2: There is a positive association between individual religiosity and moral judgment.

Studies relying on Kohlberg's (1981) theory have found a positive association between moral maturity and moral judgment. In addition, the study of Gibbs et al. (1986) showed that more mature people tend to have courage and to be committed to ideal action.

H3: There is a positive association between individual moral maturity and commitment to the code of ethics.

H4: There is a positive association between individual moral maturity and moral judgment.

Moral judgment is a result of moral reasoning. When people have a specific judgment regarding a particular ethical issue, it does not necessarily dictate their decisions and behavior. Judgment is only a behavioral prediction and is formed before the decision (Gold et al., 2015). In order to commit, people have to make a decision in the form of a psychological state about their desire, need, and obligation (Meyer & Allen, 1991). However, a more ethical moral judgment should enhance an individual's courage to stick to that judgment and hold to ethical principles or codes of conduct. Finally, ethical moral judgment might lead people to be more committed to their professional code of ethics.

H5: There is a positive association between individual moral judgment and commitment to the code of ethics.

AIMS There are three specific objectives of this study. The first is to examine whether religiosity and moral maturity influence individual moral judgment.
Next is to inquire into the relationship of religiosity and moral maturity with commitment to code of ethics. Finally, this study aims to investigate whether individual moral judgment determines commitment to code of ethics.

Data and sample

The data were collected through surveys of business practitioners and business students in several regions of Indonesia, including Sumatra, Java and Sulawesi. We used an online questionnaire containing measurements of each variable and a case scenario for moral judgment. The link to the questionnaire was distributed through the contacts of all authors. We used a convenience sampling technique in order to reach a large response rate. To ensure a valid measurement process, the questionnaire was equipped with reverse-coded questions. The survey instrument was a structured questionnaire with the measurement items adopted from well-established scales in the literature. Our survey received a total of 274 responses. After an initial check of the data, there were two invalid responses; thus, a total of 272 responses were used in further analysis. Respondents were asked to answer the questions on a five-point Likert scale ranging from "1 = strongly disagree" to "5 = strongly agree". The religiosity composite variable was created by averaging all items.

Moral maturity

To measure moral maturity, we adopted the items and method used by Chao et al. (2016). The respondents were asked to rate 4 items on a five-point Likert scale on the importance of the ethical belief related to the provided statement. The first two questions were then reverse-coded. To determine the respondent's moral maturity level, a P score was formulated by calculating the ranking of the data. The score was created for each statement in terms of perceived importance. If a respondent scored "5", then four points were added to the response; three points for a score of 4, and so on. The composite score was created by averaging the P scores to indicate the respondent's moral maturity level.

Moral judgment

Moral judgment was measured using the procedure of Marta, Heiss, Lurgio, and Delurgio (2008) and Chao et al. (2016). Respondents were provided with a business scenario and then asked to respond to 3 items on a seven-point Likert scale (1 = extremely disagree; 7 = extremely agree). The responses were reverse-coded and all items were averaged to create the composite score of the moral judgment variable.

Commitment to codes of ethics

To measure commitment to codes of ethics, we followed the method used by Laczniak and Murphy (2006). The ethical values developed by the American Marketing Association (AMA) were adopted, including honesty, responsibility, fairness, respect, openness, and citizenship. First, we asked the respondents to rate the importance of each code of ethics using a nine-point scale (1 = not important at all; 9 = extremely important). In the second stage, we asked about the strength of their feelings about adopting each code of ethics (1 = not strong at all; 9 = extremely strong). The score of each code was calculated by averaging the scores of importance and strength. The composite score of the construct is the average of these averaged code scores.
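To make the scoring procedure concrete, the sketch below shows how the composite variables described above could be constructed and then used in regression models of the kind reported in the next section. It is a minimal illustration: the column names, item counts and the data file are hypothetical placeholders, not the study's actual codebook.

```python
# Illustrative reconstruction of the composite-score computation described above.
# Column names, item counts, and the data file are hypothetical, not from the study.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical file

def reverse_code(items, scale_max):
    """Reverse-code Likert items, e.g. 1 <-> 5 on a 1..scale_max scale."""
    return scale_max + 1 - items

# Religiosity: average of all (hypothetical) religiosity items on a 1-5 scale
religiosity_items = ["rel1", "rel2", "rel3", "rel4"]
df["religiosity"] = df[religiosity_items].mean(axis=1)

# Moral maturity: the first two of four items are reverse-coded, then a P score
# (a rating of 5 contributes 4 points, 4 contributes 3, and so on) is averaged
maturity_items = ["mat1", "mat2", "mat3", "mat4"]
df[["mat1", "mat2"]] = reverse_code(df[["mat1", "mat2"]], scale_max=5)
df["moral_maturity"] = (df[maturity_items] - 1).mean(axis=1)

# Moral judgment: three 7-point items, reverse-coded and averaged
judgment_items = ["jud1", "jud2", "jud3"]
df["moral_judgment"] = reverse_code(df[judgment_items], scale_max=7).mean(axis=1)

# Commitment: for each AMA value, average its importance and strength ratings
# (both on 9-point scales), then average across the six values
ama_values = ["honesty", "responsibility", "fairness",
              "respect", "openness", "citizenship"]
for v in ama_values:
    df[f"{v}_score"] = df[[f"{v}_importance", f"{v}_strength"]].mean(axis=1)
df["commitment"] = df[[f"{v}_score" for v in ama_values]].mean(axis=1)

# Regression models of the kind reported in the Results section
m1 = smf.ols("commitment ~ religiosity + moral_maturity + moral_judgment", data=df).fit()
m2 = smf.ols("moral_judgment ~ religiosity + moral_maturity", data=df).fit()
print(m1.summary())
print(m2.summary())
```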
RESULTS AND HYPOTHESES TESTING

Several steps were taken to analyze the data. We used Cronbach's Alpha and Confirmatory Factor Analysis to check the reliability and validity of the measurement. Pearson correlation was used to indicate inter-item correlation among variables. Then, regression analysis was used to test the hypotheses.

Table 1 presents descriptive statistics and the correlation matrix among the variables. The result shows a high positive correlation between religiosity and commitment to codes of ethics. Similarly, a positive correlation appeared between moral maturity and commitment to ethics. No correlation was found between religiosity and moral maturity, which indicates the absence of multicollinearity between the independent variables. Moral maturity also has a positive correlation with moral judgment. Note: ** correlation is significant at the 0.01 level (2-tailed).

Regression analysis revealed a positive association between religiosity and commitment to codes of ethics (β = 0.348, p < 0.01). The result in Table 2 (model 1) indicates that people with a high level of religiosity tend to be committed to a code of conduct. Thus, hypothesis 1 is supported. However, as described in model 2, hypothesis 2 was not supported, since religiosity has an insignificant effect on moral judgment (β = 0.011, p > 0.05). This result means that the religiosity level does not affect individual judgment related to moral issues. Note: * significant at 5% (p<0.05), ** significant at 1% (p<0.01).

H3 is supported by the data, which revealed a positive association between moral maturity and commitment to code of ethics (β = 0.155, p < 0.01). This relationship indicates that a high level of commitment toward a code of conduct is determined by high maturity. Similarly, moral maturity has a positive effect on moral judgment (β = 0.197, p < 0.01). It means that the more morally mature a person is, the higher the probability that he/she arrives at an ethical moral judgment. Thus, H4 is supported by the empirical data. H5, which predicted a positive correlation between moral judgment and commitment to code of ethics, was not supported by the data. The result did not reveal any correlation between those constructs (β = 0.053, p > 0.05).

Validity and reliability

The confirmatory factor analysis using principal component analysis with varimax rotation indicated that the instruments are valid. The KMO and Bartlett's test statistic of 0.88 indicated that the overall measurement model is valid. Only valid items with loading scores above 0.5 were used in further analysis. Complete results of validity and reliability are presented in Table 3.

The results also showed a positive relationship between maturity and commitment to ethics. Mature people tend to give more consideration to weighing alternatives, including whether to stay in or leave a job. More mature people will consider their job based on their obligation or normative commitment, which results in a higher level of commitment. This study provides additional support to the study of Gibbs et al. (1986), which found a positive effect of maturity on commitment to ideal action.
The relationship between moral judgment and commitment to code of ethics was not supported by the data. This implies that having ethical moral reasoning does not automatically result in a positive attitude toward staying at the job. Moral judgment is a person's perception of ethical or moral issues; it is a psychological judgment reflecting the individual's attitude toward those issues. However, it seems that individuals might suffer cognitive dissonance when dealing with ethical issues. Although they are firm about their ethical judgment, other factors might change the decision. Cognitive dissonance is a mental stress that arises when people find their actions are not consistent with their beliefs (Lawson & Price, 2003). This study did not confirm Gold et al.'s (2015) argument that judgment is only a behavior prediction formed before the decision to act. Individuals need to weigh their desire, need, and obligation toward the issues simultaneously before deciding to commit (Meyer & Allen, 1991).

CONCLUSION

This study revealed that religiosity needs to be considered in improving individual commitment and obedience toward a code of ethics. Regular commitment to being involved in religious activity affects the individual's openness to committing to other rules such as a code of conduct. However, the frequency of such activity does not determine the reasoning process when dealing with ethical dilemmas. It seems various factors surround the judgment process. One factor that was empirically supported by this study is the maturity level. This study suggests that the distinct orientation and cognitive style reflected in the maturity level play an imperative role in moral reasoning. These distinctive features also influence individual commitment toward a code of ethics. This study surveyed students and employees in Indonesia using an online questionnaire. However, we did not categorize these groups in the analysis. Future studies should consider analyzing different groups separately due to potential differences in traits and characteristics between them.

Table 1. Descriptive statistics and correlations
Table 2. Regression results
Table 3. Factor loading and Cronbach's Alpha
2019-05-12T14:24:42.705Z
2018-12-25T00:00:00.000
{ "year": 2018, "sha1": "14fe1c1441a13f40cd0d2b4a129e38484b936502", "oa_license": "CCBY", "oa_url": "https://businessperspectives.org/images/pdf/applications/publishing/templates/article/assets/11429/PPM_2018_04_Lukviarman.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "787ea51f411dc10dc57251b23484117327409322", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Psychology" ] }
253666528
pes2o/s2orc
v3-fos-license
Research on Defect Detection in Automated Fiber Placement Processes Based on a Multi-Scale Detector: Various surface defects in automated fiber placement (AFP) processes affect the forming quality of the components. In addition, defect detection usually requires manual observation with the naked eye, which leads to low production efficiency. Therefore, automatic solutions for defect recognition have high economic potential. In this paper, we propose a multi-scale AFP defect detection algorithm, named the spatial pyramid feature fusion YOLOv5 with channel attention (SPFFY-CA). The spatial pyramid feature fusion YOLOv5 (SPFFY) adopts spatial pyramid dilated convolutions (SPDCs) to fuse the feature maps extracted in different receptive fields, thus integrating multi-scale defect information. For the feature maps obtained from a concatenate function, channel attention (CA) can improve the representation ability of the network and generate more effective features. In addition, the sparsity training and pruning (STP) method is utilized to achieve network slimming, thus ensuring the efficiency and accuracy of defect detection. The experimental results on the PASCAL VOC and our AFP defect datasets demonstrate the effectiveness of our scheme, which achieves superior performance.

Introduction

Carbon fiber-reinforced plastic (CFRP) has remarkable advantages such as light weight, high strength, fatigue resistance, and corrosion resistance, and it is often used in large, single-piece aircraft structures [1,2]. The manufacturing methods of CFRP include hand layup, automated tape laying, and automated fiber placement [3]. Given the problems associated with the hand layup process, which include difficulty in achieving complex shapes, the need for the manufacturing of large-sized parts, low efficiency, and difficulty in achieving quality consistency, the relatively novel technique of automated fiber placement (AFP) is increasingly used in industry to make manufacturing economical, fast, and efficient [4][5][6]. An automated fiber placement (AFP) system consists of a placement head and a robotic arm. The placement head lays the CFRP material layer by layer onto a mold. The procedure of automated fiber placement (AFP) is schematically shown in Figure 1.

In the actual production environment, various defects may occur during fiber layup, which will affect the quality [7][8][9][10]. These defects are often directly related to the layup process itself. Harik et al. [7] investigated the link between AFP defects and process planning, layup strategies, and machining. The common types of AFP defects include wrinkles, twists, gaps, bubbles, and the presence of foreign material. A series of scanned images of actual defects and a reference sample without any defect are illustrated in Figure 2.

Defect detection typically requires manual observation by the naked eye, which leads to low production efficiency.
Manual online detection is easily affected by subjective experience, and it may cause problems such as missed detection when the manufacturing task is heavy. With the rapid development of computer vision, deep learning, and other technologies, defect visual inspection technology [11][12][13][14][15] based on deep learning can be effectively used in the quality control and monitoring of the CFRP manufacturing process. Sebastian Zambal et al. [8] considered defect detection in AFP as an image segmentation problem that can be trained by manually generated training sets. In their study, a laser triangulation sensor was used to obtain the data of the layup machinery, and a dataset with 5000 samples was established.
The trained neural network could recognize the gaps, overlaps, and foreign objects on the product's surface.

In this paper, we propose the spatial pyramid feature fusion YOLOv5 with channel attention (SPFFY-CA) to achieve defect detection in AFP. The SPFFY-CA includes spatial pyramid dilated convolutions (SPDCs) and channel attention (CA) modules. In addition, we used the sparsity training and pruning (STP) method to achieve network slimming and ensure the efficiency and accuracy of defect detection. The contributions of this work can be briefly summarized as follows:

• We propose the spatial pyramid feature fusion YOLOv5 (SPFFY), which adopts spatial pyramid dilated convolutions (SPDCs) to fuse the feature maps extracted in different receptive fields, thus integrating multi-scale defect information;
• The channel attention (CA) mechanism was utilized to evaluate the importance of the channels obtained from concatenate functions, which improves the representation ability of the model and generates more effective features;
• The sparsity training and pruning (STP) method based on the measurement of sparse and redundant features was utilized to obtain a smaller and more compact network while maintaining accuracy;
• The proposed method was evaluated on the PASCAL VOC and our AFP defect datasets, and based on the results, it performs better than the original models.

The remainder of this paper is organized as follows: Section 2 discusses related work. Section 3 describes the proposed methods in detail. Sections 4 and 5 present the experiments on the PASCAL VOC and AFP defect datasets, respectively. Finally, we conclude our work in Section 6.

Related Work

With the rapid development of deep learning in the field of object recognition, AFP defect detection algorithms based on deep convolutional neural networks (CNNs) have become a new research direction.

Deep CNNs for Object Detection

In recent years, deep convolutional neural networks (CNNs) have achieved great success in visual recognition tasks [16][17][18][19][20]. With the improvement of hardware capability and the rapid development of deep convolutional neural network (CNN) architectures (AlexNet [16], VGGNet [21], ResNet [22], MobileNets [23,24], etc.), these models have powerful feature extraction capability to process large-scale images and are suitable for object recognition in complex scenes. Target recognition methods based on CNNs are mainly divided into two-stage detection and one-stage detection [25]. Early two-stage recognition methods include R-CNN [26], SPP-Net [27], Fast R-CNN [28], and Faster R-CNN [29]. The R-CNN and SPP-Net algorithms use an SVM [30] for feature scoring and classification, which is complex to train and takes a long time to detect. Fast R-CNN uses a fully connected layer instead of the SVM classifier, but it takes a long time to obtain the region of interest (ROI), and its detection speed is slow. Faster R-CNN uses region proposal networks (RPNs) to achieve end-to-end target recognition and detection, which improves the speed of target detection. However, as two-stage target detection algorithms need a large number of calculations and parameters, they cannot meet the requirements of real-time detection and batch application. One-stage detection methods include the YOLO series [31][32][33][34][35] and the SSD series [36,37].
When using the YOLO (You Only Look Once) algorithm for object recognition, the input image needs only one forward inference to predict all target positions and category information in the image. Each series of algorithms can further improve the recognition performance of the model by changing its classification strategy and backbone network.

Defect Detection in AFP

Various methods currently exist for AFP defect detection. In the AFP process, due to environmental factors, laying temperature, laying speed, laying pressure, equipment accuracy, laying trajectory planning, etc., different types of defects will occur in the final composite products. Many methods based on machine vision have been proposed to detect defects during the AFP process. Shadmehri et al. [38] proposed a laser vision detection system for the automated fiber placement manufacturing process. This laser-assisted detection system is very intuitive but in essence is still based on manual detection, which does not significantly improve efficiency. Marani et al. [39] used thermal imaging technology to obtain the surface image of glass fiber-reinforced materials. The SURF operator and unsupervised K-means learning are used to detect the surface defects of glass fiber composites. Denkena et al. [40] proposed a defect detection system based on infrared thermal imaging and related image processing for the inspection of AFP processes. The edge detection algorithm is used to analyze the specific area compacted by the roller, extract the geometric shape and position of the tow, obtain the relevant information of the layer, and further detect defects such as overlaps, gaps, twists, etc. Brüning et al. [41] proposed a machine learning algorithm using an integrated infrared (IR) camera, which detects different types of defects and provides real-time quality information for the inspection of AFP processes, achieving automated data capture, data storage, modeling, and optimization. Chen et al. [42] proposed an intelligent AFP detection system that uses infrared vision for defect recognition and measurement and includes intelligent decision making, multi-parameter optimization, and data storage. Some related studies in the field of deep learning have addressed AFP defect recognition [43]. Carsten Schmidt et al. [44] proposed a defect detection and classification method based on thermal imaging and deep learning for the automated fiber placement (AFP) process. They designed three different CNN architectures for the detection and monitoring of tow defects, as well as for path monitoring. This method is only used to classify different defects and cannot locate them. In addition, when the defect target is small, the image contains a large amount of invalid background information, which interferes with the accuracy of classification. Sebastian Zambal et al. [8] proposed image segmentation to address defect detection in AFP and used artificially generated data [45] to solve the problem of insufficient defect data. The authors used probabilistic graphical models to generate training images and annotations and designed a neural network for image segmentation using an architecture similar to U-Nets, which is suitable for training with few real data. Sebastian Meister et al. [46] proposed a defect detection method based on convolutional and recurrent neural networks.
In this method, one-dimensional signals are used to analyze the input height distribution of a laser line scanning sensor line by line, which is suitable for classifying images with large defects. In these existing studies, the quality inspection of automated fiber placement (AFP) is rarely addressed from the aspects of target defect recognition and localization with end-to-end learning and detection networks. Furthermore, the existing studies still cannot effectively solve the problem of background information interference in AFP defect detection. Thus, we aimed to design a deep learning algorithm to identify and analyze defects of different scales and types in an end-to-end framework and intuitively provide the inspection results.

Pruning

To achieve a more compact and effective network that eliminates the time-consuming detection of two-dimensional images, we utilized structured pruning for online AFP defect detection. Pruning methods commonly include unstructured and structured pruning. A pruning process consists of three steps: training large networks, pruning redundant channels, and retraining the pruned networks. Regarding unstructured pruning, LeCun et al. used second-derivative information and removed the weights based on their saliency [47]. The early weight pruning method is also mentioned in [48]. Han et al. [49,50] proposed a weight pruning framework to remove some CNN parameters and connections by pruning low-magnitude weights, thus achieving model compression. In contrast, structured pruning can be utilized to perform network slimming and computational acceleration, which do not require specialized hardware or libraries. Some studies [51][52][53] proposed a set of pruning criteria for CNNs to evaluate and remove unimportant feature channels and their corresponding kernels. In [54][55][56], sparsity regularization strategies were proposed to obtain sparse weights and features and reduce the time-intensiveness of the pruning-retraining step. In light of this body of research, we utilized feature sparsity training for the structured pruning and acceleration of CNNs to obtain a compact model.

Methods

In this section, the proposed method is described in detail. We present the architecture of our proposed method with the spatial pyramid feature fusion YOLOv5 (SPFFY), channel attention (CA), and sparsity training-pruning (STP).

Multi-Scale Feature Fusion

The original YOLOv5 utilizes a C3 architecture (a CSP bottleneck with 3 convolutions) with an SPPF (spatial pyramid pooling-fast) layer as the backbone to extract the feature map of the last convolutional layer. The feature extraction capability of the backbone network directly affects the detection performance on AFP defects. Many recent studies [57,58] have revealed that the feature maps obtained from low-level convolutional layers have higher resolutions and, therefore, help to detect small objects. In these methods, a multi-scale spatial pyramid directs attention to the object by using its spatial features, which improves its detection. An SPPF block uses pooling layers with kernels of a single size, and the output of each pooling becomes the input of the next pooling. Inspired by the SPPF, we propose a spatial pyramid dilated convolution (SPDC) module to fuse the multi-scale features extracted in different receptive fields of the same feature map, as shown in Figure 3. These modules replace the SPPF and are further integrated with a channel attention mechanism. In the SPDC module, CBS represents conv + bn + silu.
k3, s1, p2, and dr1 represent a convolution kernel, stride, padding, and dilation rate of size 3, 1, 2, and 1, respectively. SPDC modules can be regarded as a special CNN block: the input and output feature maps have the same size, so they can easily be added to the backbone network of current detectors to obtain multi-scale feature maps. Here, we added an SPDC module behind each C3 module to replace the original SPPF in the backbone network of YOLOv5.

Channel Attention

In existing network architectures, multi-scale features are obtained by concatenating the output features from different layers, but the importance of the output feature channels after concatenation is often ignored. In high-level layers, the extracted features often contain target feature information, and the output channels have less redundant information. In low-level layers, by contrast, only simple edges and color blocks can be extracted, and the extracted features contain a large amount of background interference information. If the output feature channels extracted from high-level layers are directly concatenated with the low-level output features behind upsampling, the target feature information undergoes interference.
Therefore, we added a channel attention module after each concatenating operation in the neck part of the model, so that redundant feature channels can be assigned different weights to eliminate some noise. The channel attention (CA) module assigns weights to fusion features from different scales. The channel attention mechanism is utilized after each concatenating operation in the neck network to direct more attention to the effective feature channels, as shown in Figure 4. The CA module consists of two branches: multi-scale feature fusion and the channel attention mechanism. The input feature maps after concatenation are represented as F_in ∈ R^(H×W×C). The feature fusion branch generates output feature maps of the same size, F_out ∈ R^(H×W×C). The channel attention mechanism contains two one-dimensional convolutional operations and the sigmoid activation function, which can be used to obtain the weights of each channel. The i-th channel attention score is calculated as

s_i = σ(x_i),

where σ is the sigmoid function and the tensor x ∈ R^(1×1×C) is obtained from the one-dimensional convolution operations. s ∈ R^(1×1×C) represents the weight of each feature channel. Then, the output feature channel is calculated as

F_out^i = s_i · F_in^i,

where the operation is performed by channel-wise multiplication between the score s and the feature map F_in. The SPDC and CA modules are embedded in the backbone and neck network. The proposed model is illustrated in Figure 5.

Sparsity Training and Pruning

With the addition of the SPDC and CA modules to the network, we introduced the sparsity training and pruning (STP) method to obtain more compact models and ensure the speed and accuracy of defect detection. In general, the model compression rate can be determined by the actual use environment. However, when the compression rate is high and the pretrained model has low sparsity, it is easy to prune useful feature channels in the model, resulting in reduced detection accuracy.
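Before turning to how the pruning schedule addresses this risk, the following minimal PyTorch sketch illustrates the two modules introduced above. The number of dilated branches, the dilation rates (1, 2, 3), and the use of global average pooling to produce the 1×1×C descriptor are assumptions where the extracted text is not explicit; this is a sketch of the idea, not the authors' reference implementation.

```python
# A minimal sketch of the SPDC and CA modules, assuming three dilated branches
# and a global-average-pooled channel descriptor (not confirmed by the paper).
import torch
import torch.nn as nn

class CBS(nn.Module):
    """Conv + BatchNorm + SiLU, the CBS block named in the SPDC description."""
    def __init__(self, c_in, c_out, k=3, s=1, p=1, d=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, padding=p, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class SPDC(nn.Module):
    """Spatial pyramid dilated convolution: parallel 3x3 convolutions with
    different dilation rates (padding = dilation keeps H and W unchanged),
    concatenated and fused back to the input channel count."""
    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList(
            [CBS(channels, channels, k=3, s=1, p=d, d=d) for d in dilations]
        )
        self.fuse = CBS(channels * len(dilations), channels, k=1, s=1, p=0)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class ChannelAttention(nn.Module):
    """Channel attention: a 1x1xC descriptor passed through two 1-D
    convolutions and a sigmoid yields per-channel weights s; the output is
    the channel-wise product s * F_in."""
    def __init__(self, channels, k=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # assumed pooling step
        self.conv1 = nn.Conv1d(1, 1, k, padding=k // 2, bias=False)
        self.conv2 = nn.Conv1d(1, 1, k, padding=k // 2, bias=False)

    def forward(self, x):
        b, c, _, _ = x.shape
        y = self.pool(x).view(b, 1, c)                   # B x 1 x C descriptor
        y = self.conv2(self.conv1(y))                    # two 1-D convolutions
        s = torch.sigmoid(y).view(b, c, 1, 1)            # per-channel weights
        return x * s                                     # channel-wise product

if __name__ == "__main__":
    f = torch.randn(2, 64, 40, 40)
    print(SPDC(64)(f).shape)               # torch.Size([2, 64, 40, 40])
    print(ChannelAttention(64)(f).shape)   # torch.Size([2, 64, 40, 40])
```

Both modules preserve the spatial size and channel count of their input, which is what allows them to be dropped behind each C3 block and after each concatenation without further changes to the detector.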
Given this risk, in typical pruning methods the number of channels to prune needs to be set to a small value in each iteration, and the pruning-retraining step needs to be repeated many times to obtain the final compact model. To avoid this, we employed sparsity training of the pretrained networks to increase the feature sparsity in each layer. We then used feature sparsity regularization on the selected channels. During sparsity training, the channels to be removed are penalized, and their outputs gradually decrease to zero. In this way, pruning can be finished in one iteration. Different from most existing pruning methods [51,52], which adopt multiple-iteration schemes (including pruning and retraining), our model needs only one iteration to perform sparsity training and pruning and achieve network slimming. The STP framework is illustrated in Figure 6. First, the location and number of convolutional kernels that need to be pruned are determined by calculating the sparse redundancy of each feature map. Then, sparsity-constraint training is performed on each convolutional channel to be pruned, thus speeding up the sparsification of redundant channels and achieving one-step pruning and model precision recovery.

The loss function is one of the important components of neural networks, used to calculate the gradients and update the weights of the network. The YOLOv5 loss function consists of three parts: class loss (BCE loss), objectness loss (BCE loss), and location loss (CIoU loss). It can be formulated as

L = λ1 · L_cls + λ2 · L_obj + λ3 · L_loc,

where λ1, λ2, and λ3 are the control parameters balancing these three terms. Additionally, the proposed loss function with sparsity training for CNNs is given by

L_s = L + Σ_l R_p(F^(l)),

where R_p denotes the feature sparsity regularization on each layer l and is calculated by the Lp norm of the feature map F.

For the pruning process, different from a simple layer-stack structure, additional attention should be given to each special module of the proposed network. For the backbone network, each block consists of C3 and SPDC modules. A C3 module with N bottlenecks is illustrated in Figure 7, where the symbol * denotes the convolution operation, and the white blocks represent the pruned channels. The number of output channels of the bottlenecks needs to be consistent to finish the sum operation. We utilize the L1 norm of the feature map to evaluate the sparsity and redundancy of the output feature channels obtained by the element-wise addition of the last bottleneck in each C3 to determine the location and number of feature channels to be pruned. Then, the output channels of the convolutional kernels corresponding to the second layer in each bottleneck are pruned. The importance of the output feature map of the first layer in each bottleneck is evaluated. Then, the corresponding output channels of the kernels in the first layer and the input channels in the second layer can be pruned.
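As a concrete reading of the sparsity-regularized objective above, the following sketch adds an Lp feature-sparsity penalty to stand-in detection loss terms. The penalty weight gamma, the choice p = 1, and the dummy tensors are illustrative assumptions; the paper specifies only the general form of the loss.

```python
# A sketch of the sparsity-regularized objective described above. `gamma`,
# p=1, and the stand-in loss terms are illustrative, not from the paper.
import torch

def feature_sparsity_penalty(feature_maps, p=1):
    """R_p: the Lp norm of each penalised feature map, summed over layers.
    Penalising the channels selected for removal drives their outputs
    toward zero during sparsity training, so they can be cut in one pass."""
    return sum(f.norm(p=p) for f in feature_maps)

def total_loss(l_cls, l_obj, l_loc, penalised_features,
               lambdas=(1.0, 1.0, 1.0), gamma=1e-4):
    l1, l2, l3 = lambdas
    detection = l1 * l_cls + l2 * l_obj + l3 * l_loc
    return detection + gamma * feature_sparsity_penalty(penalised_features)

if __name__ == "__main__":
    # Dummy tensors standing in for real loss terms and selected feature maps
    feats = [torch.randn(2, 16, 20, 20, requires_grad=True)]
    loss = total_loss(torch.tensor(0.5), torch.tensor(0.3),
                      torch.tensor(0.2), feats)
    loss.backward()
    print(float(loss))
```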
The pruning architecture of the SPDC module is shown in Figure 8, where the symbol * denotes the convolution operation, and the white blocks represent the pruned channels. The importance of the feature maps obtained after the concatenation operation is first evaluated to determine the redundant feature channels. Then, the corresponding convolutional kernel channels of the previous layer and the input convolutional kernel channels of the next layer can be pruned.

For the channel attention (CA) module at the neck part of the proposed network, the CA-pruning module is illustrated in Figure 9, where the symbol * denotes the convolution operation, and the white blocks represent the pruned channels.

Experiments

In this section, we evaluate the effectiveness of the proposed SPFFY-CA on the benchmark PASCAL VOC dataset and our AFP defect dataset. Data augmentation methods, namely random crop, shifting, scaling, clipping, and random color jittering, were adopted to avoid overfitting. We trained the original network from scratch, defined as the baseline, using a computer with an Intel I7-8700 CPU and an NVIDIA GTX 3060 GPU with 12 GB of memory. YOLOv5 is an open-source machine learning framework that accelerates the process from research prototyping to production deployment.

Experiments on PASCAL VOC Datasets

The PASCAL Visual Object Classes Challenge (PASCAL VOC) dataset consists of VOC2007 and VOC2012. The dataset contains 20 object classes, namely, Human: person; Animal: bird, cat, cow, dog, horse, and sheep; Vehicle: airplane, bicycle, boat, bus, car, motorbike, and train; Indoor: bottle, chair, dining table, potted plant, sofa, and tv/monitor. The mean average precision (mAP) at the IoU threshold of 0.5 was calculated to measure the accuracy of target recognition. All the networks were trained on the dataset (16,551 images) containing the VOC2007 and VOC2012 train-val datasets and were tested on the VOC2007 testing dataset (4952 images). In terms of the training details, the proposed models were trained using the SGD optimizer. The mini-batch size was 30, and an initial learning rate of 10^-2 was used. The momentum was 0.937, and the weight decay was 0.0005. The inference latency (batch size equal to 1) and the parameters of the models were determined.
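The training configuration quoted above can be reproduced directly in PyTorch; the sketch below uses a placeholder model and random data purely to show the optimizer and loader settings, not the actual SPFFY-CA training code.

```python
# Optimizer setup matching the training details above (SGD, lr 1e-2,
# momentum 0.937, weight decay 0.0005, batch size 30). Model, data, and
# loss are placeholders, not the SPFFY-CA detection pipeline.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.SiLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 20))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2,
                            momentum=0.937, weight_decay=5e-4)

data = TensorDataset(torch.randn(90, 3, 64, 64), torch.randint(0, 20, (90,)))
loader = DataLoader(data, batch_size=30, shuffle=True)

criterion = nn.CrossEntropyLoss()  # stand-in for the detection loss
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```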
On the PASCAL VOC dataset, the performance of the proposed SPFFY-CA was compared with other state-of-the-art studies, and the results are shown in Table 1. It can be seen from Table 1 that SPFFY-CA-STP obtains a 0.9% higher mAP than YOLOv5m with the same magnitude of parameters and latency time. Compared with other algorithms, SPFFY-CA and SPFFY-CA-STP have fewer parameters and higher recognition accuracy. Table 2 shows the average precision (AP) of the proposed SPFFY-CA and SPFFY-CA-STP compared with SSD300 [36], SSD512 [36], CenterNet [61], and YOLOv5 [62]. It can be seen that the performance of the proposed method is superior to that of the other algorithms in the recognition performance of each category.

Ablation Study

We conducted ablation studies to validate the proposed method as follows.

Spatial pyramid dilated convolutions (SPDC): We investigated the power of the spatial pyramid dilated convolution module by comparing the SPFFY-CA with and without the SPDC module. For this experiment, we used the SPFFY-CA without SPDC and trained it on the PASCAL VOC dataset. The training strategy was the same as in the previous section. The performance comparison results are shown in Table 3. It can be seen that the SPFFY-CA with the SPDC module obtains better performance.

Channel attention (CA): In this experiment, we studied the effects of SPFFY-CA with and without the multi-scale channel attention (CA) module. We used the SPFFY-CA without the CA module and trained the model on the PASCAL VOC dataset. The training strategy was the same as in the previous experiment. The performance comparison results are shown in Table 4. It can be seen that the SPFFY-CA with the CA module obtains better performance.
In this experiment, we investigated the effect of sparsity training and pruning (STP) on SPFFY-CA. We used the SPFFY-CA trained on the PASCAL VOC dataset. The training strategy was the same as in the previous experiment. The SPFFY-CA model was pruned with three different compression rates, and the results are shown in Table 5, where "SPFFY-CA-pruned-2" is based on the model of "SPFFY-CA-pruned-1". From Table 5, it can be inferred that STP can compress the SPFFY-CA model while ensuring the stability of the identification accuracy.

Experiments on AFP Defect Datasets

Due to the complexity of the AFP manufacturing process, as well as environmental factors, process parameters, CFRP defects, equipment accuracy, laying trajectory planning, etc., different types of defects will appear in the final composite products, which will affect their mechanical properties [63,64]. Common types of AFP defects include wrinkles, twists, gaps, bubbles, and the presence of foreign material. In this study, an AFP defect dataset of 3000 images with an original resolution of 1000 × 1000 was labeled. Then, 80% of the defect samples were used as the train-val dataset, and the rest were used as the test set to evaluate the performance of the model. The number of instances found for each type of defect is shown in Figure 10.

The mean average precision (mAP) at the IoU threshold of 0.5 was calculated to measure the accuracy of defect recognition. In terms of the training details, the models were trained using the SGD optimizer with a batch size of 30 and an initial learning rate of 10^-2. The momentum was 0.9, and the weight decay was 0.0005. The latency time (batch size equal to 1) was determined. On the AFP defect dataset, the performance of the proposed SPFFY-CA was compared with other detection algorithms, and the results are shown in Table 6. It can be seen from Table 6 that the SPFFY-CA proposed in this paper achieves an accuracy of 93.1% on the AFP defect dataset and has higher recognition confidence than YOLOv5m for defect detection. For all the various types of defects, SPFFY-CA achieves higher performance than YOLOv5m.
Figure 11 shows the detection results of SPFFY-CA-STP and the original YOLOv5m for various defects, where the confidence score is higher than 0.5. Figure 11a shows the recognition results of the designed SPFFY-CA-STP model, and Figure 11b shows the detection results of the original YOLOv5m. It can be seen from Figure 11 that the SPFFY-CA-STP model has higher recognition confidence than YOLOv5m for defects of different scales and types.

The proposed SPFFY-CA-STP can achieve higher performance on the PASCAL VOC dataset while maintaining the same detection speed and can realize the real-time detection of multi-scale AFP defects. The quality inspection of automated fiber placement (AFP) is thus addressed through the recognition of target defects and their localization with an end-to-end learning and detection network, but the main limitation is that this method is mainly used when the defect types are known, and further data collection is required for unknown defects.

Conclusions and Future Work

In this paper, we proposed a multi-scale AFP defect detection algorithm named the spatial pyramid feature fusion YOLOv5 with channel attention (SPFFY-CA), which includes spatial pyramid dilated convolutions (SPDCs) and channel attention (CA) modules to fuse the feature maps extracted in different receptive fields, thus integrating multi-scale defect information. Through the CA mechanism, the importance of the channels obtained from the concatenate function was evaluated, and further attention was given to the effective feature channels, which improved the representation ability and generated more effective features. In addition, we employed the sparsity training and pruning (STP) method to obtain more compact models and ensure the speed and accuracy of defect detection. The experimental results on the PASCAL VOC and the AFP defect datasets prove the effectiveness of the proposed approach and that it can obtain state-of-the-art performance. In future research, we will further study the defect detection of manual paving and better apply visual identification technology to control the composite manufacturing process.

Data Availability Statement: The data presented in this study are available on request from the corresponding author (wangwei_4524@163.com). The data are not publicly available due to privacy restrictions.
2022-11-19T16:15:13.434Z
2022-11-16T00:00:00.000
{ "year": 2022, "sha1": "202dcb2504670ac3031126de8af3e22f9d09f197", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-9292/11/22/3757/pdf?version=1668586990", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ba63ff840d6c8ae9a748fcc8661c14f914137823", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
241579232
pes2o/s2orc
v3-fos-license
Selected risk factors of dental caries in 11- to 13-year-old schoolchildren in Slovakia

Background: Oral diseases, particularly dental caries, affect as many as 9 in 10 persons globally. Their development starts during childhood, and among the factors participating in their aetiology, behavioural ones play a particularly important role. The aim of the study was to examine selected behavioural risk factors of dental caries in Slovak adolescents between 2006 and 2018.

Methods: We analysed the occurrence of the selected factors (teeth brushing less than once a day, eating sweets and drinking sweetened soft drinks daily, and their combination) in 11- to 13-year-old schoolchildren in Slovakia by gender and socio-economic status using data from the Health Behaviour in School-Aged Children surveys carried out in 2005/2006, 2009/2010, 2013/2014 and 2017/2018.

Results: Consumption of sweets and sweetened soft drinks, despite a decline, remains widespread (41.3% of boys and 39.6% of girls in 2017/2018). Absence of daily teeth brushing, similarly to the co-occurrence of these two risk factors, was more frequent in boys (10.6% and 5.0% in 2017/2018, respectively) than in girls (5.1% and 2.3% in 2017/2018, respectively). Absence of daily teeth brushing was associated with a lower socioeconomic situation.

Conclusions: Behavioural risk factors of dental caries play a significant role in the oral health of adolescents in Slovakia. Despite the positive development of the epidemiological situation, effective interventions focused on the consumption of sweets and sweetened soft drinks as well as the improvement of oral hygiene in lower socioeconomic groups are needed.

Maintaining oral health with appropriate dental hygiene is very important for maintaining overall health [2]. Dental caries ranks among the most common oral diseases [3]. The nature of caries is gradual and cumulative; with time it becomes more complex. Due to its prevalence, economic aspects and effect on quality of life, it is a significant public health problem [4][5][6]. Untreated caries in children may negatively affect their quality of life in many ways. Besides direct negative impacts, i.e., pain and trouble eating, it is the most frequent cause of tooth loss. The consequences of dental caries have lifetime effects, including decreased quality of life and impaired self-esteem, and can result in various chronic diseases [7]. According to official World Health Organization (WHO) data, the prevalence of tooth decay among 6-year-old children in European countries varies considerably, between 20 and 90 per cent [1]. In general, a relatively positive situation can be seen in Western and Northern Europe, such as the United Kingdom or the Scandinavian countries. On the other hand, the highest prevalence is in Eastern Europe. To monitor dental health, the DMFT index is widely used as an epidemiological tool indicating the count of decayed (D), missing (M) and filled (F) teeth (T) [8,9]. Slovakia, together with Croatia, ranks among the countries with the highest average DMFT index (4.3 and 4.8, respectively) within the European Union member countries [8,[10][11][12][13]. However, we should keep in mind that the above-mentioned information is mostly based on data coming from dentists providing primary dental care. These estimations thus originate from the population attending dentists for treatment and/or preventive check-ups, and there are insufficient epidemiologic data on those not attending dentists. Therefore, population-representative data would shed light on the extent of the issue.
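As a minimal illustration of how the DMFT index just mentioned is computed, the following sketch sums decayed, missing and filled teeth per person and averages over a population; the sample records are invented for demonstration.

```python
# Minimal illustration of the DMFT index; the records are invented examples.
records = [
    {"decayed": 2, "missing": 0, "filled": 3},
    {"decayed": 1, "missing": 1, "filled": 0},
    {"decayed": 0, "missing": 0, "filled": 4},
]

def dmft(person):
    """DMFT = number of decayed (D) + missing (M) + filled (F) teeth (T)."""
    return person["decayed"] + person["missing"] + person["filled"]

scores = [dmft(p) for p in records]
mean_dmft = sum(scores) / len(scores)
print(f"Individual DMFT scores: {scores}, mean DMFT: {mean_dmft:.2f}")
```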
There are numerous factors increasing the risk of dental caries. Besides inherent and metabolic predispositions, behavioural risk factors are of great importance. Among them, oral hygiene and diet in particular play a significant role [1,8,14]. Teeth brushing at least once a day has been considered a principal tool to maintain oral health and to prevent caries and periodontal diseases [15]. On the other hand, consumption of sweetened food and soft and energy drinks promotes the initiation and further development of dental caries [16]. Numerous recent studies indicate that the socioeconomic situation can be considered an independent determinant of tooth decay. Higher consumption of soft drinks was detected in children from low socioeconomic families and in those whose teeth brushing was sporadic [17][18][19]. It has also been confirmed that children with good oral hygiene have mothers with a higher education level [20,21]. Significantly more cases of dental caries are present in children who grow up in lower socioeconomic families, in combination with low income and a low education level [22].

The age of 12 is internationally established as the age for global monitoring of dental caries, mostly because in the majority of children all the permanent teeth (except third molars) have already erupted [13]. Considering the long-term and even lifelong impact of caries, the age between 11 and 13 years is crucial. Therefore, factors triggering the process of caries and its further development are of particular importance during this period. Understanding the epidemiological aspects of risk factors of caries during childhood can considerably help to design and implement effective preventive intervention programs tailored to this target population. In our study, we focused on selected indicators of insufficient dental hygiene (teeth brushing less than once a day) and eating habits associated with an increased risk of caries.

In the first step, participating schools were randomly selected with probability proportional to size using an official list of all schools obtained from the Slovak Institute of Information and Prognosis for Education. The sample of schools was stratified by region (eight administrative self-governing regions) and type of school (elementary schools comprising the 1st-9th grades, and eight-year grammar schools comprising the 6th-13th grades). In the second step, within the participating schools, classes were randomly selected to collect data. Parents were informed in advance about the study via the school administration and, using a written informed consent form, could opt out if they disagreed with their child's participation. Participation in the study was fully voluntary and anonymous, with no explicit incentives provided for participation. This approach provided samples proportionally representing all areas and population subgroups on the nationwide level, thus eliminating possible bias caused by the heterogeneity of the target population. Pupils from the 5th-9th grades were considered eligible, i.e., 11- to 15-year-old adolescents. We included 11- to 13-year-old respondents in our analysis. Table 1 shows the basic characteristics of the samples obtained in the four waves of the survey. Dropouts were caused mostly by the absence of children due to illness or other personal reasons and the refusal of a parent or the adolescent to be involved in the study.
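The two-step selection described above could be sketched as follows. The school list, sizes and counts are invented for demonstration, and sequential draws with size-proportional probabilities are only a simple approximation of strict probability-proportional-to-size (PPS) sampling without replacement.

```python
# Illustrative sketch of the two-step sampling: schools drawn with probability
# proportional to size (PPS), then classes drawn at random within each school.
# The sampling frame and sizes are invented for demonstration.
import numpy as np

rng = np.random.default_rng(seed=42)

schools = [f"school_{i}" for i in range(200)]      # hypothetical sampling frame
sizes = rng.integers(100, 900, size=len(schools))  # hypothetical school sizes

# Step 1: approximate PPS sampling of schools (without replacement)
probs = sizes / sizes.sum()
sampled_schools = rng.choice(schools, size=20, replace=False, p=probs)

# Step 2: within each sampled school, randomly select classes to survey
for school in sampled_schools[:3]:
    classes = [f"{school}_class_{c}" for c in "ABCD"]
    surveyed = rng.choice(classes, size=2, replace=False)
    print(school, "->", list(surveyed))
```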
Our study analyses the prevalence of insufficient teeth brushing (less than once a day), of eating sweets and/or drinking sweetened soft drinks, and of the co-occurrence of the two above-mentioned factors, in relation to gender and socioeconomic status.

Teeth brushing was measured by the question "How often do you brush your teeth?" Possible responses were "More than once a day", "Once a day", "At least once a week but not daily", "Less than once a week" and "Never". After dichotomisation, we analysed the proportion of the answers "At least once a week but not daily", "Less than once a week" and "Never".

Consumption of sweets was measured by the question "How many times a week do you usually eat sweets (candy or chocolate)?" Possible answers were "Never", "Less than once a week", "Once a week", "2-4 days a week", "5-6 days a week", "Once a day every day" and "Every day, more than once". After dichotomisation, we analysed the proportion of the answers "Once a day every day" and "Every day, more than once".

Consumption of sweetened soft drinks was measured by the question "How many times a week do you usually drink coke or other soft drinks that contain sugar?" Possible answers were "Never", "Less than once a week", "Once a week", "2-4 days a week", "5-6 days a week", "Once a day every day" and "Every day, more than once". We analysed the proportion of the answers "Once a day every day" and "Every day, more than once".

The results are expressed as percentages (%) with the respective 95% confidence intervals. Differences were statistically evaluated using the chi-square test, with p < 0.05 considered the level of statistical significance. To test changes across time, the Bonferroni correction was applied to the post-hoc pairwise comparisons (Tables 3 and 4).

Moreover, we should keep in mind that we deal with self-reported data; under-reporting can therefore produce a "tip of the iceberg" effect, making the problem even deeper [24]. Another issue to be considered is the quality of teeth brushing. As many studies have shown, the problem mostly lies in incorrect technique (incorrect brushing movements, insufficient brushing time, etc.), which may in the long term be harmful to oral health. It is therefore necessary to teach children not only to brush their teeth, but to brush them properly. In this research we only analysed whether or not they brush their teeth; for future research it could be interesting to analyse in depth the determinants of teeth brushing and the quality of brushing technique.

As already mentioned, insufficient teeth brushing relates particularly to boys. One possible cause may be that women in general consider oral health important, with a positive impact on quality of life; girls also seem to perceive health differently than boys do [30,31]. Our results also showed an association between insufficient teeth brushing and the socioeconomic situation. The main problem was detected in families where the children's parents have a low education level. Parents, especially mothers, turn out to be very influential in creating habits in children [31,32]. However, parents with a lower educational level do not attach importance to brushing teeth, probably because of a lack of information or health literacy [5,20]. A lack of information about oral health in parents is therefore associated with a lack of motivation even to teach a child how to brush the teeth or to check whether he or she is brushing them.
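A hedged sketch of the statistical approach described in the Methods above: a dichotomised prevalence with a normal-approximation 95% confidence interval, a chi-square test between two survey waves, and a Bonferroni-adjusted significance level for post-hoc pairwise comparisons. The counts are invented placeholders, not the HBSC data; SciPy is assumed to be available.

from math import sqrt
from scipy.stats import chi2_contingency

def prevalence_ci(k, n, z=1.96):
    """Proportion k/n with a normal-approximation 95% CI."""
    p = k / n
    se = sqrt(p * (1 - p) / n)
    return p, (p - z * se, p + z * se)

# hypothetical counts: boys brushing less than once a day, (cases, sample size)
wave_2006, wave_2018 = (230, 1500), (160, 1510)

for wave in (wave_2006, wave_2018):
    p, (lo, hi) = prevalence_ci(*wave)
    print(f"prevalence {100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f})")

# 2x2 contingency table: [cases, non-cases] per wave
table = [[wave_2006[0], wave_2006[1] - wave_2006[0]],
         [wave_2018[0], wave_2018[1] - wave_2018[0]]]
chi2, p_value, dof, _ = chi2_contingency(table)

n_pairwise = 6  # all pairwise comparisons among the 4 survey waves
alpha_bonferroni = 0.05 / n_pairwise
print(f"chi2={chi2:.2f}, p={p_value:.4f}, "
      f"significant after Bonferroni: {p_value < alpha_bonferroni}")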
Children are also influenced by their parents' opinions of and attitudes towards dentists; parental fear and worries about dental care are transferred to children [33]. It is necessary for parents to be a motivating factor in the matter of oral hygiene [31,34]. How to better focus educational and information activities on families with a lower socioeconomic position presents a challenge for public health.

As potential limitations of our study, we should consider that the HBSC data do not provide a comprehensive picture of the risk, but only some aspects of it. Our findings should be regarded rather as an insight into the epidemiological situation, its changes over time, and its trends. These pieces of information, despite their limited scope, provide important groundwork for population-based preventive measures as well as for relevant projections of the future situation. The strongest point of our analysis is that it uses representative data covering the whole target population of the given age group. Most studies dealing with oral health employ data from dentists [13,35,36]. According to the HBSC Slovakia report, as many as 15% of boys and 12% of girls aged 13 years reported not visiting the dentist during the preceding year [37]. Moreover, as the latest official data on dental care show, as many as one quarter of children and adolescents (6 to 14 years old) have not been registered for dental care [23,38]; such information is thus limited to the population attending check-ups and undergoing dental care and can overlook a considerable proportion of the population. Our findings therefore fill this information gap.

Conclusions

Despite the decline in daily eating of sugar within recent years, it remains a widespread risk factor of dental caries in children. Its combination with insufficient teeth brushing is mostly a problem of boys and of lower socioeconomic population groups, where a particularly high risk can be expected. Therefore, there is a need to find effective ways to address these target groups in preventive programmes, with emphasis on youngsters whose habits are still developing.

The study was approved by the Ethics Committee of the Faculty of Medicine at the P.J. Šafárik University in Košice. Parents were informed about the study via the school administration and, using a written informed consent form, could opt out if they disagreed with their child's participation. Participation in the study was fully voluntary and anonymous, with no explicit incentives provided.

Consent for publication
Not applicable.

Availability of data and materials
The dataset supporting the conclusions of this article is available upon request.

Competing interests
The authors declare that they have no competing interests.

Figure 1 Teeth brushing less than once a day. Significant differences are marked as *p<0.05 and **p<0.001.
Figure 2 Consumption of sweets or sweetened soft drinks. Significant differences are marked as *p<0.05 and **p<0.001.
Figure 3 Teeth brushing less than once a day and consumption of sweets or sweetened soft drinks. Significant differences are marked as *p<0.05 and **p<0.001.
Scaffold-free Three-dimensional Graft From Autologous Adipose-derived Stem Cells for Large Bone Defect Reconstruction

Abstract

Long bone nonunion in the context of congenital pseudarthrosis or carcinologic resection (with intercalary bone allograft implantation) is one of the most challenging pathologies in pediatric orthopedics. Autologous cancellous bone remains the gold standard in this context of long bone nonunion reconstruction, but it has several clinical limitations. We therefore assessed the feasibility and safety of a human autologous scaffold-free osteogenic 3-dimensional (3D) graft (derived from autologous adipose-derived stem cells [ASCs]) to cure bone nonunion in extreme clinical and pathophysiological conditions. Human ASCs (obtained from the subcutaneous adipose tissue of 6 patients and expanded up to passage 4) were incubated in osteogenic media and supplemented with demineralized bone matrix to obtain the scaffold-free 3D osteogenic structure, as confirmed in vitro by histomorphometry for osteogenesis and mineralization. The 3D "bone-like" structure was finally transplanted in 3 patients with a bone tumor and 3 patients with bone pseudarthrosis (2 congenital, 1 acquired) to assess the clinical feasibility, safety, and efficacy. Although minor clones with structural aberrations (aneuploidies, such as tri- or tetraploidies, or clonal trisomy 7 in 6%-20% of cells) were detected in the undifferentiated ASCs at passage 4, the osteogenic differentiation significantly reduced these clonal anomalies. The final osteogenic product was stable, did not rupture with forceps manipulation, did not induce donor site morbidity, and was easily implanted directly into the bone defect. No acute (<3 months) side effects, such as impaired wound healing, pain, inflammatory reaction, or infection, and no long-term side effects, such as tumor development, were associated with the graft up to 4 years after transplantation. We report for the first time that autologous ASCs can be fully differentiated into a 3D osteogenic-like implant without any scaffold. We demonstrated that this engineered tissue can safely promote osteogenesis in extreme conditions of bone nonunion, with minor donor site morbidity and no oncological side effects.

INTRODUCTION

Long bone nonunion in the context of congenital pseudarthrosis (1 in 140,000-250,000 births) or carcinologic resection (1% of all cancers, with an estimated incidence of 6 per million per year, requiring intercalary allograft reconstruction) is one of the most challenging pathologies in pediatric orthopedics. Pathophysiological conditions and neo-adjuvant chemotherapy cause nonhealing bone in 15% to 55% of patients after allograft or prosthesis reconstruction [1-6]. The current gold standard for bone nonunion remains autologous cancellous bone graft from the iliac crest (in most cases and for small bone defects), containing bone marrow mesenchymal stem cells (MSCs), but the available quantities are limited and the harvesting procedure is burdened by comorbidities [7,8]. The use of osteoinductive materials such as demineralized bone matrix (DBM) and bone morphogenetic proteins (BMPs) to overcome the lack of osteoinduction and osteogenic properties of synthetic or human materials remains relatively prohibitive in the pediatric context. The precautionary principle is applied to derived bone growth factors because they have been implicated in the tumor process, and specific studies with long-term follow-up for safety are lacking [6,9-15].
Tissue engineering and cell therapy using MSCs have raised the possibility of implanting living tissue for bone reconstruction. Adipose-derived stem cells (ASCs) demonstrate several advantages over those from bone marrow (considered the gold standard), including a less invasive harvesting procedure, a higher number of stem cell progenitors from an equivalent amount of harvested tissue, increased proliferation and differentiation capacities, and better angiogenic and osteogenic properties in vivo [16-24].

Critical-size bone reconstruction using stem cells also remains limited by the large size of the bone defects and consequently by the size of the engineered implant, which requires a scaffold. Tissue engineering can potentially provide treatment alternatives for conventional large bone defects. The application of different combinations of osteoconductive biomaterials, osteoprogenitor cells, and growth factors directly into the defect holds great potential for achieving bone healing in stringent and difficult conditions. Biomaterials should ideally possess properties such as mechanical strength and biodegradability, and should support stem cell differentiation, mimicking bone-forming components so as to elicit specific cellular responses and provide an ideal environment for bone formation. To date, no synthetic or biological scaffold fulfils all these criteria, since scaffolds can be influenced by the surrounding microenvironment or cause immunological problems [25,26]. Several scaffold-free systems have been investigated, but creating sufficient thickness to fill a critical-size bone defect is difficult [27].

We developed a graft made of scaffold-free autologous ASCs differentiated into a 3-dimensional (3D) osteogenic structure with DBM [28]. We previously demonstrated the safety and efficacy of this graft in curing a femoral critical-size bone defect in a pig preclinical nonunion model at 6 months postimplantation [28]. Complete stem cell differentiation into an osteogenic 3D structure significantly improved the efficacy of bone reconstitution (by promoting angiogenesis and osteogenesis) and the safety, through a lower risk of growth factor release [29]. After osteogenic differentiation, human and pig ASCs demonstrated similar in vitro (vascular endothelial growth factor release and viability in hypoxic conditions) and in vivo (angiogenicity and osteogenicity, with cellular engraftment and graft mineralization, respectively) properties [29,30].

Subsequent to the preclinical experiments, we assessed the feasibility (i.e., the reproducibility of manufacturing a clinical batch of the 3D graft) and safety (i.e., the risk of MSCs within the tumor environment and the pediatric context) of human autologous 3D osteogenic grafts to cure bone nonunion in extreme clinical and pathophysiological conditions. We also investigated bone consolidation at a minimum of 1 year after reconstruction.

METHODS

This study was performed according to the Belgian Ministry of Health (AFMPS) guidelines for hospital exemption (and obtained authorization from the national central authorities under clinical study number ATMP-HE004), as per Article 28 of European Regulation 1394/2007 on Advanced Therapy Medicinal Products.
To qualify for this so-called hospital exemption (HE), the advanced therapy concerned should meet the following criteria: preparation on a nonroutine basis, preparation according to specific quality standards, use in a hospital, use under the exclusive responsibility of a medical practitioner, and compliance with an individual medical prescription for a custom-made product for an individual patient. As such, the legislator intends to give patients the possibility to benefit from a custom-made, innovative, individual treatment in the absence of valid therapeutic alternatives.

All procedures for tissue procurement and the clinical studies (for pediatric patients) were approved by the Ethical Committee of the Medical Faculty (Université Catholique de Louvain) under national authorization number B40320108542. All patients (the parents of the children) signed the consent to participate in the study after verbal and written information from the principal investigator of the study. All consents were included and archived in the Case Report Form for each patient (for a duration of 30 years).

Study Design and Patients

Between 2010 and 2012, 6 young patients were included in the study (Table 1): three male patients with bone tumors (2 osteosarcomas, patients #1 and #2, and 1 Ewing sarcoma, patient #3, characterized by several clonal cytogenetic alterations). Preoperative chemotherapy was applied for a mean duration of 7 and 10 months before tumor resection. The anatomical reconstitution was performed with a metallic (Phenix) prosthesis and a human osteochondral allograft, respectively. A 3D stem cell autograft was also proposed for 3 patients with bone nonunions due to congenital pseudarthrosis (n = 2, patients #4 and #5) or acquired pseudarthrosis (in the context of erythroblastopenia, n = 1, patient #6) that were untreatable with classical treatments such as surgery (e.g., curettage, elongation, the Fassier-Duval telescopic system, intramedullary fixation, and Ilizarov fixation), iliac crest autograft, and DBM alone. The autologous adipose stem cell transplant was proposed after a mean of 35.3 months after the diagnosis of bone nonunion without success of conventional therapies.

Table 1 notes: The anatomical reconstitution was performed with a metallic prosthesis (Phenix, patient 1) or a human osteochondral allograft (patients 2 and 3). The tumors of patients 1 and 2 were classified as conventional high-grade osteoblastic osteosarcoma and included in the EURAMOS protocol (European and American Osteosarcoma Study Group: chemotherapy with methotrexate, doxorubicin, and cisplatin). Patient 3 had a high-grade Ewing sarcoma and was included in the Euro-Ewing protocol (chemotherapy with vincristine, ifosfamide, doxorubicin, and VP-16). Adipose subcutaneous tissue was procured surgically (at the time of tumor biopsy, patients 1-3) or by aspiration with a syringe (patients 4-6), at the leg for all patients except patient 4 (abdominal region).

Clinical Grade Manufacturing of the 3D Graft

The Endocrine Cell Therapy Unit (Center of Tissue and Cell Therapy, Cliniques Universitaires Saint-Luc, Brussels, Belgium) is recognized as a clinical laboratory for isolating ASCs by the Belgian Federal Agency for Medicines and Health Products. All procedures for ASC isolation and expansion were performed under grade A laminar air flow located in a grade B clean room (validated annually by ICCE SA, Elsene, Belgium), in accordance with the recommendations of the Belgian Ministry of Health and with European directives (Regulation No.
1394/2007 for advanced cell therapy products). The environment for cell culture was controlled by weekly particle counting (in static and dynamic conditions; Lasair II Particle Counter, Particle Measuring Systems Germany GmbH, Darmstadt, Germany) and by microbiological testing at each manipulation, as recorded in the "Graft Report".

To isolate human ASCs, a mean of 1.9 ± 2.6 g of fatty tissue was harvested by a simple subcutaneous biopsy from the 6 patients (Table 1) after informed consent and serologic screening [31]. ASC isolation (with GMP collagenase, 0.075 g; 8000 PZ U/L; Serva Electrophoresis GmbH, Heidelberg, Germany), expansion, and differentiation were performed in line with good manufacturing practices (GMPs) and the ISO 9001-2008 quality management system. ASCs were isolated and expanded in proliferation medium (Dulbecco modified Eagle medium supplemented with 10% heat-inactivated and viral-tested fetal bovine serum certified by the US Department of Agriculture; Life Technologies, Grand Island, NY) up to passage 4 (P4), after sequential trypsinizations [28,29]. At P4, ASCs were incubated (in 150-cm² culture flasks) in osteogenic medium composed of the proliferation medium supplemented with dexamethasone (1 mM), sodium ascorbate (50 mg/mL), and sodium dihydrophosphate (36 mg/mL). After 15 to 18 days of ASC incubation, DBM was added (10 mg/mL) to create the 3D scaffold-free graft [28].

Human DBM was provided by the University Tissue Bank (University Clinical Hospital, Saint-Luc, Brussels, Belgium) and was produced from multiorgan human donors. Diaphyses of femoral or tibial bones were cut and ground into particles between 200 and 700 µm for demineralization treatment. DBM was produced by grinding cortical bones from selected human donors (<45 years old; 7 donors were used for this experimental protocol). First, the human bone tissue was defatted in an acetone (99%) bath overnight, followed by washing in demineralized water for 2 hours. Decalcification was performed by immersion in 0.6 N HCl for 3 hours (20 mL of solution per gram of bone) under agitation at room temperature. The demineralized bone powder was then rinsed with demineralized water for 2 hours and the pH was controlled (normal pH between 7.00 and 7.84). If the pH was too acidic, the DBM was buffered with 0.1 M phosphate solution under agitation. Finally, the DBM was freeze-dried and weighed. The DBM was sterilized with 25 kGy of gamma irradiation at −80 °C.

The osteogenic properties of the DBM were assessed by the residual calcium concentration after the demineralization process (measured by calcium extraction from a mean of 1.3 g of DBM versus nondemineralized bone powder from each donor), and the osteoinduction by in vivo implantation in the paravertebral musculature of nude rats (male, 6-8 weeks old) to quantify the new bone formation (presence of bone marrow, osteoblast activity, and new bone formation) by histomorphometry (a standard 300-point cross-grid for point counting on microphotographs at 10× magnification; 4 nonoverlapping areas per slide were studied) at 1 month postimplantation, for demineralized versus nondemineralized bone matrix.

The 3D graft (ready for implantation right off the plastic dish) was implanted after a mean of 71.5 ± 22.3 days of incubation in osteogenic medium (after P4) (Fig. 1). On the day of implantation, the 3D graft was rinsed 3 times with transplantation medium (CMRL; Mediatec Inc., Manassas, VA) without phenol red and without antibiotics or sera.
The graft was finally placed in a sterile culture flask enclosed in 3 sterile plastic bags and was then transferred at room temperature, in less than 15 minutes, to the operating room for implantation.

When the final scaffold-free 3D graft was obtained with the optimal DBM concentration (ASCs + 10 mg/mL), the cell viability and the graft integrity were determined by a histomorphological score including the cellular, extracellular matrix (ECM), and DBM contents: (DBM-ECM)/viable cells. A 20-mm² biopsy (on the day of transplantation) was fixed in 4% paraformaldehyde overnight. We normalized the integrity of the 3D graft by the (DBM-ECM)/viable cells ratio, between −1 and +1 [28].

Graft Safety

Cytogenetic stability was studied by karyotype and fluorescence in situ hybridization (FISH) analyses at P4 (undifferentiated and differentiated) of the ASCs from the 6 patients, to assess the oncogenic safety of the cellular components of the 3D graft. Metaphase chromosomes were obtained according to standard protocols from cultured cells (ASCs) in the exponential growth phase after P4 [32]. Twenty Giemsa-Trypsin-Wright banded metaphases were analyzed, and karyotypes were reported according to the 2013 International System for Human Cytogenetic Nomenclature. The FISH experiments were performed according to standard protocols to detect aneuploidy of chromosomes 7 and 8, using CEP7/D7Z1 (SpectrumGreen or SpectrumOrange) or CEP8/D8Z2 (SpectrumOrange or SpectrumGreen) probes (Abbott Molecular, Ottignies/Louvain-la-Neuve, Belgium) [32]. For the patients with tumors, probes specific to the initial genomic alterations detected in the tumors were tested in the P4 ASCs (undifferentiated and differentiated): LSI 9p21/CEP 9 (Abbott SA, Wavre, Belgium), ON MDM2/SE 12 (Kreatech Diagnostics, Amsterdam, the Netherlands), LSI-RB1/13q14 (Abbott SA), LSI TP53/CEP 17 (Abbott SA), LSI EWSR1 (Abbott SA), and LSI-FLI1 and EWSR1 (Cytocell, Cambridge, UK). At least 100 nuclei were counted, and the thresholds were calculated following the inverse beta law, with a confidence interval of 99.9%.

Mycoplasma and endotoxin assays were also performed, as per current GMP guidelines, by TEXCELL SA (Evry, France) on cellular samples collected at P4 for undifferentiated and osteogenic cells (the last sample prior to graft delivery). Microbiological testing was repeatedly performed at each medium change (twice a week during the entire manufacturing of the graft) for aerobic and anaerobic organisms, moulds, and yeast by BACTEC assays. In-process controls (on cellular samples collected at P4 for undifferentiated and osteogenic cells, up to the last sample before graft delivery) based on these safety tests found no microbiological or mycoplasma contamination and no endotoxin content in any manufacturing batch. Therefore, all manufactured 3D grafts fulfilled the release criteria for implantation.

FIGURE 1. Protocol to obtain a 3-dimensional (3D) osteogenic graft made of adipose-derived stem cells (ASCs). Human ASCs were isolated following digestion of subcutaneous adipose tissue, expanded up to passage 4 (P4), and finally differentiated in 2 phases. Osteogenic differentiation was induced to create a 3D structure by the addition of demineralized bone matrix (DBM). The optimal concentration of DBM was adjusted as a function of the anticipated 3D construction, to produce a graft sufficiently stable for manipulation with forceps.
The 3D Graft Implantation and Outcome

In case of tumor resection (patients 1-3), the 3D graft was placed directly at the junction between the native host bone and the bone allograft or the growing prosthesis. In case of bone nonunion, the 3D osteogenic graft was modeled to the ideal size of the bone defect and placed directly into the hole without any fixation material.

The primary outcomes of the clinical trial were the safety and feasibility of the procedure, assessed at 12 months after implantation. Safety was studied in terms of adverse events (local or systemic), with clinical (inflammation, wound infection) and biological (C-reactive protein, fibrinogen, white blood cell count) assessments on a specific schedule: twice per week during the first month, and once per month from month 1 to month 12 postimplantation. Safety was also investigated by x-ray between 12 and 39 months posttransplantation to assess any secondary ectopic malignant tissue development. The feasibility study was based on the ability to reproducibly obtain a 3D structure from autologous stem cells and allogenic DBM, the respect of the timing between adipose tissue procurement and the surgical intervention (especially for tumor resection in combination with presurgical chemotherapy), the capacity to produce enough 3D grafts to fill a large bone defect, and surgical handling of the 3D graft that is clinically relevant for large-scale application. The secondary outcomes were the efficacy of bone tissue consolidation at the site of bone nonunion (assessed radiologically) and patient satisfaction based on quality of life (e.g., walking, pain).

Statistical Analysis

Results are expressed as mean ± standard deviation. The 1-sample Kolmogorov-Smirnov test and Q-Q plots were used to assess the normal distribution of values. Statistically significant differences between groups were tested by 1-way analysis of variance with the Bonferroni post hoc test. Statistical tests were performed with PASW 18 (SPSS; IBM, Armonk, NY); P < 0.05 was considered significant.

RESULTS

The quality of the human DBM was confirmed by a significant reduction of the calcium content (47 ± 116 vs 2615 ± 890 mg/dL of calcium, corresponding to a mean demineralization of 98%; P < 0.005) and by significantly higher in vivo osteogenesis (12 ± 5 vs 1 ± 3% of the explanted graft showing osteoinductivity; P < 0.05), in comparison to nondemineralized bone matrix.

Clinical Safety and Efficacy

A mean of 16 ± 4 million ASCs per patient was available by the end of P3 (within 41 ± 6 days), which was sufficient for seeding into three 150-cm² culture flasks for P4 (Table 1). Osteogenic differentiation was then induced at P4 for 15 days (when the ASCs were confluent) before supplementation with DBM at 10 mg/mL to create the 3D structure. All grafts demonstrated a 3D structure before implantation. The 3D graft was implanted at day 112 ± 26 after adipose tissue procurement, consistent with the preoperative chemotherapy course. Three grafts of 3 × 3 cm² (1 graft per 150-cm² flask) were produced per patient from ASCs supplemented with DBM at 10 mg/mL (Fig. 1). The final product was stable and did not rupture with forceps manipulation. No adverse event occurred at the site of the adipose tissue biopsy. The 3D structure was verified for each graft (preimplantation) by histomorphometric potency testing, scored based on staining for osteogenic phenotype and mineralization of the matrix.
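The sketch below illustrates the statistical pipeline stated in the Statistical Analysis subsection above: a Kolmogorov-Smirnov normality screen, a 1-way ANOVA, and Bonferroni-corrected pairwise comparisons. The group values are synthetic placeholders, not the study's measurements, and SciPy is assumed to be available.

import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)
groups = {"DBM": rng.normal(12, 5, 8),        # e.g. % osteoinductive area
          "non-DBM": rng.normal(1, 3, 8),
          "control": rng.normal(0.5, 2, 8)}

# 1-sample KS test of each standardized sample against the normal law
for name, x in groups.items():
    stat, p = stats.kstest((x - x.mean()) / x.std(ddof=1), "norm")
    print(f"{name}: KS p={p:.3f}")

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni post hoc threshold
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p={p:.4f}, significant: {p < alpha}")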
The complex clonal numerical and structural chromosomal aberrations detected in the bone tumor cells (Table 2) were not detected in the ASCs developed for each graft at P1 and P4 (in both undifferentiated and differentiated status). (Table 2 notes: Karyotypes and FISH of patients 1 to 3 were compared with those of individual control patients, 6-13 years old, with 2 congenital and 1 acquired bone pseudarthrosis. No native tumor abnormalities were found in the undifferentiated or differentiated ASCs of patients 1 to 3. No difference was found between expanded and differentiated ASCs from patients with a bone tumor and with bone pseudarthrosis.) Minor clones with structural aberrations detected in the undifferentiated ASCs at P4 were absent from the differentiated ASCs (Fig. 2). No acute side effects such as inflammatory reaction, pain, or wound nonhealing were reported (Figs. 3 and 4). No patient had complications during follow-up, except that patient #2 underwent allograft removal because of intercalary allograft infection more than 10 months posttransplantation. No long-term complications were observed for patients 1 and 3 after a mean follow-up of 37 months. For both patients, ossification was rapidly initiated around the Phenix prosthesis and the bone allograft, respectively (Fig. 3). The junction of the native bone and the bone substitute began to form at 3 months postimplantation and remained consolidated up to 47 months. The stabilization of the allografts led to a normal quality of life more than 3 years postimplantation (Fig. 3B).

Although no acute complication was reported for the patients with bone nonunion (patients 4-6), one case required material removal due to sepsis following screw and plate infection by Staphylococcus aureus at 10 months posttransplantation; no sign of bone consolidation was found. For patient #4, a surgical revision was performed at 9 months, due to incomplete or inefficient bone consolidation. In the bone nonunion within an erythroblastopenia context, bone consolidation was confirmed at 10 months postimplantation and maintained up to 29.8 months (Fig. 4). No abnormal ectopic bone development was found radiologically after a mean of 32 months postimplantation for all patients (n = 6).

DISCUSSION

The combination of osteoprogenitor ASCs and growth factors included in osteoconductive DBM demonstrated the feasibility of manufacturing a clinical batch of a 3D scaffold-free autologous osteogenic graft; the safety of differentiated stem cells and growth factors in an oncological context; and the efficacy of surgical reconstruction of a large bone defect in a stringent clinical context, specifically bone pseudarthrosis and bone tumor resection.

The most important outcome of this study is the proof of concept, in terms of feasibility, of manufacturing a scaffold-free 3D implant from human autologous ASCs differentiated into an osteogenic phenotype with DBM. For clinical application of this advanced therapy medicinal product, all procedures were validated using human ASCs (following GMPs) and DBM, with the goal of uniformly reproducing the manufacture of a structural and stable 3D implant in all patients despite clinical constraints, such as the timing of adjuvant chemotherapy, the schedule of operating room access, and interdonor variability. A mean of 3.7 months for graft manufacture was compatible with clinical implantation, taking into consideration some delays due to patient state (minimum of 80 vs maximum of 143 days for patients 2 and 3, respectively).
Another important issue for clinical feasibility is the size of the implant generated to fill the bone defect. The size of the generated 3D bone-like tissue (a mean of 12.6 cm³ for the 3 grafts) was increased significantly, by nearly 6 times (compared to the 2 cm³ of native adipose tissue), and it was always sufficient to fill the bone defect, which was a function of the clinical indication.

The primary issue for the clinical application of advanced cell therapy remains the safety of the patient. Although human MSCs (deficient for p53 and/or Rb) failed to induce tumor formation in vivo, suggesting the safety of these cells in clinical application, Perrot et al [33] postulated a risk associated with autologous fat graft implantation in a post-neoplastic context, especially for osteosarcoma. No tumor recurrence was observed in our series more than 3 years posttransplantation. Controversy exists concerning the potential for spontaneous transformation of MSCs after prolonged ex vivo culture, but several studies reported that MSCs have limited tendencies to develop tumors [34-36]. In the context of the pediatric population, especially in cases of bone tumor, all precautions were taken to guarantee ASC expansion and differentiation before transplantation. In contrast to the native bone tumors, which harbored complex genomic alterations, FISH analysis revealed minor rates (near the detection threshold) of chromosomal aneuploidy, mainly tetrasomies, suggesting tetraploidy as classically observed in cultured cells (around the cut-off of 4.5% and different from the initial tumor clone) and known to be nontumorigenic [37]. The clonal chromosomal and structural aberrations detected in the karyotypes of 2 patients were not associated with a selective growth advantage in vitro. In a context of chronic wound healing with undifferentiated human ASCs, we previously demonstrated minor rates (near the detection threshold) of chromosomal aneuploidy up to passage 16 (cut-off ≈4.5%) and the absence of adverse events in immunodeficient animal recipients (1 or 3 months postimplantation) and in patients (up to 22 months after implantation).

FIGURE 3. Bone defect at the junction between the native bone tissue and the intercalary allograft (after tumor resection at the right tibia). A, No patient had complications during follow-up, but patient #2 underwent allograft removal because of intercalary allograft infection more than 10 months posttransplantation. No long-term complications were observed for patients 1 and 3 after a mean follow-up of 37 months. B, The graft was placed at the direct junction between both tissues without any fixation material. Total consolidation was demonstrated at 24.3 months, the complete bridge between the native tissue and the allograft was confirmed at 33 months postimplantation (as demonstrated by nuclear magnetic resonance), and the consolidation is currently maintained up to 47 months posttransplantation. A complete return to normal quality of life for patient #3 was confirmed after 3 years.

FIGURE 4. Bone nonunion in a case of congenital pseudarthrosis. A, No acute complication was reported for the patients with bone nonunion (patients 4-6); 1 case required material removal due to sepsis following screw and plate infection by Staphylococcus aureus at 10 months posttransplantation, and no sign of bone consolidation was found. For patient #4, a surgical revision was performed at 9 months, due to incomplete or inefficient bone consolidation. In the bone nonunion within an erythroblastopenia context, bone consolidation was confirmed at 10 months postimplantation and maintained up to 29.8 months. B, The 3D graft can easily be placed in the large bone defect of the left ulna of patient 4. Bone nonunion, found on the right tibia of patient 6, was initiated at 6 months postimplantation and consolidated up to 29.8 months. 3D = 3-dimensional.

In this study, the in vivo oncologic safety was confirmed by the absence of adverse events in patients up to 3 years after implantation; by ASC delivery after a shorter in vitro culture (P4), thus avoiding the selection of tumor cell clones; and by the stabilization of the genome by osteogenic differentiation [38].

The implantation of a 3D osteogenic-like graft was sufficient to provide long-term safety as well as proof of efficacy in extreme clinical conditions. All manufactured 3D grafts were easily handled and implanted (without any additional fixation materials) in a defined and regular space (e.g., the junction between the intercalary allograft and the native bone) or in an irregular defect, as found after tumor resection or in pseudarthrosis, respectively. Although the final 3D graft is not intended to directly restore the native bone's mechanical properties, the low mineral content of the final 3D product (only a mean of 18% of that found in human adult trabecular bone: 277 vs 2876 mg/cm³, respectively; data not shown) confers an optimal malleability for surgical implantation [39]. No side effects could be related to the 3D graft. The reported cases of infection were associated with the intercalary allograft and with contamination of the fixation material in the peri- or postoperative period. Microbiological control of the 3D graft (at the time of implantation) and of the culture media (used as vehicle for the transport between the clean room and the operating room) did not reveal contamination.

Bus et al [5] recently demonstrated that bone nonunion (between the intercalary allograft and native bone) remains the major cause (40% of patients) of a second surgical intervention (with the use of cancellous bone allograft at 1 year after the initial surgery) to facilitate the union of allograft-host junctions after tumor resection. Although no junction was found (by x-ray) at 6 months after graft implantation (which can be associated with the lower degree of mineralization of the 3D graft compared to native bone), the 3D graft was found to promote irreversible bone consolidation up to 47 months, restoring normal quality of life, especially for patients #1 and #3. Although the ASCs survived and promoted in vivo osteogenesis in bone nonunion (congenital or acquired) up to 29.8 months posttransplantation, in an environment characterized by low oxygen tension, fibrosis, and the absence of a favorable local environment (low levels of local and systemic growth factors, such as fibroblast growth factor 2), a lower rate of success was found in this category of patients (most likely due to the native pathophysiology of the pseudarthrosis) [40]. Although in vivo histological analysis could not be performed for ethical reasons, x-ray analysis demonstrated total consolidation, which was initiated rapidly at 3 months postimplantation. This observation coincides with our previous preclinical results demonstrating in vivo consolidation of a critical-size femoral bone defect at 6 months postimplantation, characterized by endochondral and intramembranous ossification after 3D graft implantation [28].
CONCLUSIONS

We report for the first time that autologous ASCs can be fully differentiated into a 3D osteogenic-like implant without any scaffold. We demonstrated that this engineered tissue can safely promote osteogenesis in extreme conditions of bone nonunion, leading to restoration of bone anatomy and function, with minor donor site morbidity and no oncological side effects. Since the incidence of bone nonunion is increased by comorbidity factors (e.g., type 2 diabetes, smoking), this technology must demonstrate its potential in terms of clinical benefit in comparison to gold-standard surgery (reduction of additional interventions, reduction of morbidity), and its cost effectiveness must be improved by the introduction of an allogenic ASC source and an automated/closed culture system to reduce interbatch variability and manufacturing costs (e.g., clean room, quality control testing, operator handling). A prospective controlled trial in clinically relevant indications is needed to clinically assess the 3D osteogenic-like implant.

ACKNOWLEDGMENTS

We gratefully acknowledge Geneviève Ameye and Gaëlle Tilman for their collaboration on the genetic analyses (FISH and karyotype, respectively).

Author contributions: DD was the Principal Investigator of the clinical study in terms of study design and inclusion/exclusion criteria. DD developed the preclinical and clinical concept of the human autologous 3D graft from adipose stem cells. NA and DD validated the protocols of graft manufacturing, GMP production of the graft, and the immunohistochemical analyses. NA produced the clinical batches of 3D grafts. CD and P-LD interpreted the clinical outcome in terms of safety and efficacy (x-ray analysis). HAP performed and interpreted the genetic analysis for graft release. WA coordinated the study design for clinical data management.
Numerical research on hydrodynamic characteristics of the end cover of a pressure exchanger

To investigate the hydrodynamic performance of the end cover under different inclined angles, a series of 3-dimensional geometric models of the end cover with different inclined angles were built in Creo. The maximum inclined angle is 32 degrees and the minimum is 6 degrees. Numerical simulations were carried out by solving the Navier-Stokes equations coupled with the k-ɛ turbulence model. Finally, regression analysis was used to process the result data. The data obtained by simulation were analysed mainly from three aspects: the cause of the driving torque, the driving efficiency, and the inclined angle of the flow channel. The results show that the driving torque is formed mainly by the positive pressure of the water, and that the influence of the viscous force on the driving torque of the rotor is negligible. Moreover, both the driving torque and the pressure difference decrease with increasing inclined angle in the form of a power function, while the driving efficiency increases with increasing inclined angle in the form of a logarithmic function. This research is significant for the control of the rotational speed of the rotor and reveals the relationship between the driving torque of the rotor and the flow channel of the end cover.

Introduction

A rotary pressure exchanger, which is based on the positive-displacement principle, is a kind of fluid energy recovery equipment [1][2]. It is one of the three core components of the seawater desalination industry. Its operating principle is illustrated in Figure 1-1 [1]. The key components of an RPE include a rotor with axial ducts arranged in a circle around a central tension rod, two end covers, and one sleeve. During operation, the fluid acquires a tangential velocity component after passing through the inclined flow channels, and the rotor, which rotates in the sleeve, is driven by this fluid. Low-pressure and high-pressure brine come into direct contact with each other, and the pressure energy is transferred directly from the high-pressure reject stream to a feed stream with no intervening walls. The pressurized seawater is discharged into the lift pump; the pressure of the brine decreases, and it is then discharged with the feed stream.

The three most important features of a pressure exchanger are flow-driven rotation, self-lubricating bearings and pressure transition control [3]. The latter two issues have already been studied by many researchers. For example, Zhao Fei and his collaborators carried out a detailed analysis of the self-lubricating bearing problem [4]: they researched the support mechanism and stiffness of the axial water film in a rotary pressure exchanger and revealed how large the clearance between cover and rotor should be to guarantee that the rotor can be supported by water while the leakage is maintained at a low level. Another important study was carried out by Yihui Zhou [5] and her partners, which revealed that the rotor speed is very important for controlling the maximum flow-in length and guaranteeing that mixing is maintained at a low level. Many other studies show that the rotor speed has a great effect on the performance of the pressure exchanger [6]. The mechanism and characteristics of the flow-driven rotation are therefore a problem that cannot be avoided if the rotor is to rotate at the ideal speed, yet no detailed research on this problem has been carried out so far.
In order to figure out the mechanism and characteristics of the flow-driven rotation of the pressure exchanger, a substantial amount of work has been done in this paper. The article is organized as follows. Section 2 gives a detailed introduction to the research problem, in which the geometry specifications and operating parameters are listed. Section 3 describes the numerical setup in detail. Section 4 presents the numerical results, including the velocity vectors at different inclined angles as well as the analysis of the overall hydraulic moment, followed by the conclusions in Section 5.

Description of the problem

The cross-section at the flow channel of the end cover is shown in Figure 2-1. A tangential velocity component is generated after the fluid passes through the inclined flow channel. The fluid then exerts a tangential impact on the rotor so as to drive it to rotate in the sleeve. From a qualitative point of view, a larger inclined angle means that more fluid with a tangential velocity component drives the rotor, but the tangential velocity itself will be smaller, so the resulting driving torque is not necessarily larger than at a smaller inclined angle. It therefore becomes important to determine how the driving torque varies with the inclined angle. In order to study the effect of the inclined angle on the driving torque, the volume flow rate is set to 70 m³/h, the pressure at the high-pressure outlet is set to 6 MPa and that at the low-pressure outlet to 0.4 MPa. In addition, the rotor speed is set to be the same in all cases.

Mesh generation

The geometry model of the pressure exchanger was established with the commercial software Creo. It includes two end covers and a rotor. The main geometry parameters are listed in Table 3-1, and the whole configuration can be seen in the accompanying figure.

In the present study, FLUENT has been used for the calculations, with the transport equations solved by the finite volume method. The mesh was generated on the platform of ICEM CFD. In practice, the flow zone is divided into three parts (two covers and a rotor), and each part is meshed separately. Tetrahedral elements are used for the mesh of the rotor and triangular elements for the meshes of the end covers. The number of mesh elements of the rotor reached 4 million and that of the end cover reached 3.5 million. The quality of the mesh of all parts is higher than 0.5. After a computational test, it was confirmed that the mesh system satisfies the requirements of the k-ɛ turbulence model with enhanced wall treatment, as can be seen from the y+ values on the walls of the rotor and end cover displayed in Figures 3-4 and 3-5, respectively. Finally, all the meshes are transferred from ICEM CFD to FLUENT to compose an integral mesh, with interfaces used to connect the meshes of the different fluid zones.

Boundary conditions

3.2.1. Inlet boundary conditions. A velocity inlet is used at the inlet of the end cover; its magnitude is assumed to be uniform and is determined from the experiment. The turbulence parameters are specified in terms of the turbulence intensity and the hydraulic diameter of the inlet.

Outlet boundary conditions. Pressure outlets are used at the outlets of the end cover: a static pressure of p = 0.4 MPa is specified for the low-pressure outlet and a static pressure of p = 6 MPa for the high-pressure outlet. The turbulence parameters are again specified in terms of the turbulence intensity and the hydraulic diameter.

Other conditions. A no-slip condition is assumed on all solid walls, and the enhanced wall treatment is used to calculate the turbulence kinetic energy and the turbulence dissipation frequency near the wall.
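As a hedged helper for the inlet specification described above: FLUENT's "intensity and hydraulic diameter" option needs a turbulence intensity estimate, for which the standard pipe-flow correlation I = 0.16·Re^(-1/8) is a common default. The correlation and the fluid properties are textbook values, not taken from the paper, and the inlet diameter below is an assumed placeholder.

from math import pi

rho, mu = 998.0, 1.0e-3          # water density (kg/m^3) and viscosity (Pa*s)
Q = 70.0 / 3600.0                # 70 m^3/h volume flow rate, in m^3/s
D_h = 0.05                       # assumed inlet hydraulic diameter (m)

area = pi * D_h**2 / 4.0
v = Q / area                     # mean inlet velocity (uniform profile)
Re = rho * v * D_h / mu
I = 0.16 * Re**(-1.0 / 8.0)      # pipe-flow turbulence intensity correlation

print(f"v = {v:.2f} m/s, Re = {Re:.3e}, I = {100*I:.1f}%, D_h = {D_h} m")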
The rotation of the rotor domain is taken into account by the multiple rotating reference frame (MRF) method.

Solution strategy

FLUENT is used to carry out the numerical simulations. The code solves the Reynolds-averaged Navier-Stokes equations in primitive-variable form. The effects of turbulence are modelled using the k-ɛ turbulence model. The second-order upwind scheme is used for the discretization of the convective terms and the second-order central difference scheme for the discretization of the diffusion terms. The segregated solver is used to solve the incompressible flow, and the absolute convergence criterion is set to a maximum of 1×10⁻⁴.

The rotor rotates clockwise under the driving of the water flow. The upstream and downstream faces of the rotor are shown in Figure 4-1. When the water flow strikes the upstream faces of the rotor channels at a certain inclined angle, the kinetic energy of the water flow is converted into static pressure energy. The pressure on the upstream face therefore rises, and a pressure difference forms between the upstream and downstream faces. This can be seen from the pressure distribution on the cross-section of the rotor shown in Figure 4-2; for the sake of clarity, the range of displayed pressure is set to 5.85-6.35 MPa, which is suitable for displaying the pressure distribution on the high-pressure side. Figure 4-2 shows that the smaller the inclined angle, the larger the pressure difference between the upstream and downstream faces, which indicates that the driving torque obtained by the rotor is greater at smaller inclined angles.

According to their different forming mechanisms, the hydrodynamic forces acting on any object can be divided into positive pressure and viscous shear force, and the characteristics of these two kinds of force are completely different. The driving torque of the rotor is therefore divided into two parts in this paper: the pressure torque, caused by the positive pressure, and the viscosity torque, caused by the viscous shear force.

The trend of the pressure torque, denoted by Mp, with the inclined angle, denoted by θ, is shown in Figure 4-3. From that figure we can see that the pressure torque at every inclined angle is positive, which indicates that the water's positive pressure drives the rotor to rotate. The trend of the pressure torque is obvious: the larger the inclined angle, the smaller the pressure torque; in particular, when the inclined angle is smaller than 16°, the pressure torque decreases rapidly. Another characteristic is that the data points are distributed regularly, lying roughly on a single curve. The regression method is used to analyse these data, and a power function is used to fit the data points shown in Figure 4-3. The regression curve equation obtained from the analysis is of the power-function form

y = a·x^b    (1)

where y represents the pressure torque, x represents the inclined angle, and a and b are fitted constants. The fitting-degree index R² reached 0.9961.

The distribution of the viscosity torque, denoted by Mv, at every inclined angle is shown in Figure 4-4. The viscosity torque at every inclined angle is clearly negative, which indicates that the viscosity torque caused by the viscous shear force hinders the rotation of the rotor. However, the absolute value of the viscosity torque is very small: the largest absolute value of the torque caused by the viscous shear force is only 0.108 N·m, at the inclined angle of 6 degrees, whereas the torque caused by the positive pressure at that inclined angle is 28.7 N·m, which is much larger than the viscosity torque.
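The sketch below reproduces the power-function fitting procedure behind Eq. (1) and the R² fitting-degree index. Apart from the 28.7 N·m value at 6° quoted in the text, the sample points are illustrative placeholders, and the fitted coefficients are not those of the paper.

import numpy as np
from scipy.optimize import curve_fit

def power_law(theta, a, b):
    """Power-function model y = a * theta^b, as in Eq. (1)."""
    return a * theta**b

theta = np.array([6, 10, 16, 22, 27, 32], dtype=float)   # inclined angle (deg)
M_p = np.array([28.7, 14.0, 6.5, 4.0, 2.9, 2.2])         # pressure torque (N*m)

(a, b), _ = curve_fit(power_law, theta, M_p, p0=(100.0, -1.5))

# coefficient of determination R^2 of the fit
residuals = M_p - power_law(theta, a, b)
r2 = 1.0 - np.sum(residuals**2) / np.sum((M_p - M_p.mean())**2)
print(f"M_p ~ {a:.1f} * theta^{b:.2f}, R^2 = {r2:.4f}")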
Although the distribution of the data points of the viscosity torque is not as regular as that of the pressure torque, a regression analysis was carried out on the viscosity torque, and a linear function was used to fit the data points shown in Figure 4-4. The linear regression curve equation is of the form

y = a·x + b    (2)

where y represents the viscosity torque, x represents the inclined angle, and a and b are fitted constants. The fitting-degree index R² is 0.748.

The total torque, denoted by Mt, is the sum of the pressure torque and the viscosity torque. Because the absolute value of the viscosity torque is too small to influence the variation trend of the total torque, the distribution of the data points of the total torque shown in Figure 4-5 is very similar to that of the pressure torque. Similarly, a regression analysis was carried out on the total torque, and the regression curve equation is of the power-function form

y = a·x^b    (3)

where y represents the total torque and x represents the inclined angle. The fitting-degree index R² is 0.996.

Pressure difference characteristics of the pressure exchanger

The previous analysis shows that when water flows through the pressure exchanger, the water produces a driving torque that drives the rotor to rotate. From the viewpoint of the law of conservation of energy, the water flow must lose some of its own energy, which can be confirmed from the pressure difference between the inlets and the outlets. The pressures at the high-pressure inlet, high-pressure outlet, low-pressure inlet and low-pressure outlet are denoted by HPin, HPout, LPin and LPout, respectively, so the total pressure difference can be described as the sum of the inlet pressures minus the sum of the outlet pressures:

PD = (HPin + LPin) − (HPout + LPout)    (4)

where PD represents the total pressure difference, which reflects the energy loss of the water flow. The distribution of the data points of the pressure difference is shown in Figure 4-6. The distributions of the pressure difference and of the total torque are clearly very similar, which suggests that a larger total torque means a larger pressure difference. A regression analysis was also carried out on the pressure difference, and the regression curve equation is of the power-function form

y = a·x^b    (5)

where y represents the pressure difference and x represents the inclined angle. The fitting-degree index R² is 0.9969.

Driving efficiency characteristics of the pressure exchanger

The total pressure difference between the inlets and outlets represents the energy loss of the water flow, and the total driving torque is the output of the water flow. The driving efficiency is therefore defined as the ratio of the total driving torque to the total pressure difference:

η = Mt / PD    (6)

where η represents the driving efficiency, Mt the total driving torque, and PD the pressure difference. The distribution of the driving efficiency η at every inclined angle is shown in Figure 4-7. The driving efficiency clearly increases gradually with increasing inclined angle, which is contrary to the trend of the total torque and the pressure difference. A logarithmic curve is used to fit the data points of the driving efficiency, and the regression curve equation obtained from the analysis is of the form

y = a·ln(x) + b    (7)

where y represents the driving efficiency and x represents the inclined angle. The fitting-degree index R² is 0.9924.

When water flows into the channels of the rotor, the water flow strikes the walls of the rotor channels at a certain inclined angle, so the water flow is disturbed by the rotor and a large number of vortices are generated in this region. When the water flows into the end cover's channel from the channels of the rotor, the direction of the water flow changes.
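A companion sketch for the efficiency definition of Eq. (6) and the logarithmic fit of Eq. (7): η is computed as Mt/PD and then fitted against ln(θ) by least squares. All numbers are placeholders for illustration, not the paper's results; η is treated as a relative index, as in the text.

import numpy as np

theta = np.array([6, 10, 16, 22, 27, 32], dtype=float)  # inclined angle (deg)
M_t = np.array([28.6, 13.9, 6.4, 3.9, 2.85, 2.15])      # total torque (N*m)
PD = np.array([0.60, 0.33, 0.17, 0.11, 0.085, 0.07])    # pressure diff. (MPa)

eta = M_t / PD  # driving efficiency as defined in Eq. (6)

# least-squares fit of eta = a*ln(theta) + b, as in Eq. (7)
a, b = np.polyfit(np.log(theta), eta, 1)
pred = a * np.log(theta) + b
r2 = 1 - np.sum((eta - pred)**2) / np.sum((eta - eta.mean())**2)
print(f"eta ~ {a:.2f}*ln(theta) + {b:.2f}, R^2 = {r2:.3f}")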
When θ < 20°, the water flow is seriously disturbed and a large number of vortices are generated in the channels of the low-pressure outlet and the high-pressure outlet. It can be seen from Figure 4-8 that the larger the inclined angle θ, the smaller the region in which the turbulence kinetic energy is greater than or equal to 5 J/kg, which indicates that the greater the inclined angle, the less energy of the water flow is dissipated into turbulence. For the sake of clarity, the total energy loss of the water flow, the energy used to drive the rotor to rotate, and the energy dissipated into turbulence are denoted by Et, Ed and Ew, respectively, so the following energy balance is established:

Et = Ed + Ew    (8)

where Ew is the energy dissipated into turbulence.
Snake C-Type Lectins Potentially Contribute to the Prey Immobilization in Protobothrops mucrosquamatus and Trimeresurus stejnegeri Venoms

Snake venoms contain components selected to immobilize prey. The venoms from Elapidae mainly contain neurotoxins, which are critical for rapid prey paralysis, while the venoms from Viperidae and Colubridae may contain fewer neurotoxins but are likely to induce circulatory disorders. Here, we show that the venoms from Protobothrops mucrosquamatus and Trimeresurus stejnegeri are comparable to those of Naja atra in prey immobilization. Further studies indicate that snake C-type lectin-like proteins (snaclecs), which are among the main nonenzymatic components in viper venoms, are responsible for rapid prey immobilization. Snaclecs (mucetin and stejnulxin) from the venoms of P. mucrosquamatus and T. stejnegeri induce the aggregation of both mammalian platelets and avian thrombocytes, leading to acute cerebral ischemia and reduced animal locomotor activity and exploration in the open field test. Viper venoms in the absence of snaclecs fail to aggregate platelets and thrombocytes, and thus show an attenuated ability to cause cerebral ischemia and immobilization of their prey. This work provides novel insights into the prey immobilization mechanism of Viperidae snakes and into the understanding of viper envenomation-induced cerebral infarction.

Introduction

Snake venoms are complex cocktails of bioactive peptides and proteins that immobilize or digest prey [1]. There are more than 420 species of venomous snakes living on the earth [2], and variations in venom composition are common among these snakes [1,3]. Venomous snakes are classified into four families: Viperidae, Elapidae, Atractaspididae and Colubridae [4]. The venoms from elapids usually contain a high level of neurotoxins, such as three-finger toxins (3FTxs) and phospholipases A2 (PLA2), that lead to the rapid paralysis of their prey [5-7]. In contrast, neurotoxins are less abundant in the venoms from vipers and colubrids, and their envenomation is often associated with hemorrhage and circulatory disorders [3,4]. It is unknown whether viper venoms have a prey immobilization efficiency comparable to that of Elapidae venoms and, if so, what the underlying mechanisms are.

Snake C-type lectin-like proteins (snaclecs) are mainly expressed in the venoms of vipers and colubrids [4,5]. The available data indicate that snaclecs may be one of the most abundant nonenzymatic groups of proteins in these venoms [8-13]. Snaclecs usually have a heterodimeric structure with α and β subunits, which are often oligomerized to form protein multimers, and they have evolved to bind a wide range of physiologically important proteins, such as GPIb, GPVI and integrins, on mammalian platelets [13-16]. In this study, we investigate the effects of viper snaclecs on prey immobilization and the mechanisms underlying them.

Snaclecs Can Rapidly Immobilize Prey

The intraperitoneal injection of crude venom from P. mucrosquamatus or T. stejnegeri induced a quick loss of the righting reflex, similar to the crude venom from N. atra, as shown in Table 1. Comparatively speaking, venoms from vipers and colubrids may contain fewer neurotoxins but are rich in snaclecs [4,5,14]. We therefore speculated that snaclecs such as mucetin or stejnulxin may be capable of immobilizing prey.
The intraperitoneal injection of purified mucetin or stejnulxin reduced animal exploratory behavior and locomotor activity in the open field test in a concentration-dependent manner, as shown in Figure 1. The average travel distance of pheasant chicks (Phasianus colchicus) was reduced from 13.39 m to 1.95 and 1.32 m over a 10-minute period with increasing doses of mucetin or stejnulxin, respectively, as shown in Figure 1a. Mice (Mus musculus) were more active in the open field test, but the snaclecs showed a similar trend in reducing their spontaneous locomotor behavior. The average travel distance of mice was reduced from 3.40 m to 0.61 and 0.55 m over a 3-minute period with increasing doses of mucetin or stejnulxin, respectively, as shown in Figure 1b. Mice and birds are common snake prey belonging to different classes; however, mucetin and stejnulxin significantly reduced the locomotor activity of both to a similar degree, suggesting that snaclecs may be broad-spectrum toxins that help snakes to immobilize prey.

Figure 1. The exploration paths of pheasant chicks (a) and mice (b) within 10 and 3 min, respectively, recorded by an automated infrared tracking system; a nonparametric test with Dunn's multiple comparison test was used to test for statistically significant differences between groups (N = 6, * p < 0.05, *** p < 0.001).

Snaclecs Are Critical for Viper Venom-Induced Prey Paralysis

To further investigate the role of mucetin and stejnulxin in viper envenomation-induced prey immobilization, we compared the activity of crude viper venoms in the presence and absence of the snaclecs. As shown in Figure 2a, the average travel distance of pheasant chicks was significantly reduced from 13.39 m to 2.25 and 1.51 m by the crude venoms of P. mucrosquamatus and T. stejnegeri, respectively. Pheasant chicks treated with the venoms lacking mucetin or stejnulxin showed markedly longer travel distances, as shown in Figure 2a. A similar phenomenon was observed in mice, as shown in Figure 2b.

Figure 2. The distinct exploration paths of pheasant chicks (a) and mice (b) within 10 and 3 minutes, respectively, recorded by an automated infrared tracking system; a nonparametric test with Dunn's multiple comparison test was used to test for statistically significant differences between groups (N = 6, * p < 0.05, ** p < 0.01).

Snaclecs Induce Cerebral Ischemia

The intraperitoneal injection of the crude viper venoms or the snaclecs (mucetin or stejnulxin) at 400 μg/kg significantly reduced the cerebral blood flow of pheasant chicks, as shown in Figure 3a, as well as mice, as shown in Figure 3b, as monitored by a laser-speckle blood flow imaging system over a 10-min period. However, these crude venoms had a much-attenuated effect on cerebral blood flow after the removal of mucetin or stejnulxin. Moreover, injection of the crude venom from N. atra, which is considered to lack snaclecs, did not affect cerebral blood flow.

Snaclecs Activate Thrombocytes or Platelets

Thrombocytes vary considerably between birds and mammals in terms of their cellular structure and functions, but they share the same essential function of clumping together quickly to form clots and thus prevent blood loss after trauma [17]. Abnormal activation of thrombocytes, such as platelet aggregation, obstructs the cerebral microcirculation and leads to cerebral infarction [18,19].
The available data indicate that mammalian platelets are a target of snaclecs [16,20,21], though whether these snaclecs have an effect on avian thrombocytes has never been investigated before. We compared the thrombocyte aggregation activity of the snaclecs as well as of the crude viper venoms in the presence and absence of mucetin or stejnulxin, as shown in Figure 4. The crude venoms (CV) from P. mucrosquamatus and T. stejnegeri, but not N. atra, potently induced thrombocyte aggregation at 2 μg/mL. The viper venoms in the absence of mucetin (CV-mu) or stejnulxin (CV-st) did not aggregate the thrombocytes or platelets at the same concentration. Further assays indicated that the purified snaclecs (mucetin or stejnulxin) showed a stronger ability to initiate aggregation than the crude venoms, as shown in Figure 4.

Discussion

Elapid envenomation usually leads to neurotoxicity because of the high content of three-finger toxins (3FTxs) and phospholipase A2 (PLA2) [4], while viper venoms rarely contain 3FTxs [5]. PLA2 also exists in P. mucrosquamatus and T. stejnegeri venoms [22,23], and in the latter it shows an inhibitory effect on platelet aggregation [23]. Despite the presence of PLA2 in viper venoms, which may lead to muscle paralysis, sensitivity to this toxin seems to be species-specific [24]. The injection of snake PLA2 does not immobilize prey immediately, because there is always a minimum interval of about one hour between the injection and muscle paralysis, probably due to the mode of action and irrespective of the concentration [25,26]. In this study, we show that the venoms from P. mucrosquamatus and T. stejnegeri are comparable to those of Elapidae in prey immobilization, as shown in Table 1, and that the snaclecs may help vipers to rapidly immobilize and subdue prey by inducing platelet aggregation that impairs blood circulation. Viper snaclecs are one of the most abundant nonenzymatic groups of proteins in some Viperidae venoms [8][9][10][11][12]. The snaclecs reduced the exploratory behavior and locomotor activity of pheasant chicks as well as mice within 5 minutes of injection in a concentration-dependent manner, as shown in Figure 1. This suggests that the snaclecs may help vipers to immobilize their prey quickly and efficiently. According to the purification profile, we found that the content of platelet-activating snaclecs in these viper venoms is 5-10% (data not shown). It is worth noting that the purified mucetin and stejnulxin showed a stronger ability to aggregate thrombocytes/platelets than the crude venoms in vitro, as shown in Figure 4, but they were comparable with the crude venoms in animal experiments at the same concentration, as shown in Figures 1-3. This is probably because crude viper venoms contain components such as metalloproteinases and hyaluronidase that favor the spread of the snaclecs in the tissue and in the circulatory system [27]. Alternatively, there may also be unknown factors in the crude venoms that aid prey immobilization by inducing cardiovascular collapse and prolonged hypotension [28]. Despite the presence of anti-platelet proteins or peptides such as disintegrins in snake venoms [29,30], the crude viper venoms studied here showed a strong ability to induce thrombocyte/platelet aggregation, as shown in Figure 4.
Given that the thrombocytes/platelets were not aggregated by the venoms that lacked snaclecs, the snaclecs (i.e., mucetin and stejnulxin) are likely to be the major components that activate the aggregation of thrombocytes/platelets in the viper venoms. As a cocktail containing high levels of proteins, snake venom is metabolically expensive to produce [31]; however, the ability of viper venoms to impair the circulatory system of their prey (as well as of humans) [4,13,14] is very efficient. Relatively few molecules are needed to activate, rather than inhibit, thrombocyte/platelet aggregation, owing to the number of receptors on the membrane and the downstream cascade reactions they trigger [14]; thus, thrombocyte-activating snaclecs may provide key evidence in support of the venom-optimization hypothesis [31]. The effects of snaclecs on the circulatory system were further confirmed by the induction of acute cerebral ischemia. As illustrated in Figure 3, the intraperitoneal injection of the crude venoms from P. mucrosquamatus or T. stejnegeri, as well as of mucetin or stejnulxin, significantly reduced the cerebral blood flow in both the pheasant chicks and the adult mice within 5 minutes, while the cerebral blood flow in mice treated with the N. atra venom or the viper venoms lacking snaclecs was comparable with that of the normal saline group. This suggests that mucetin and stejnulxin may be the main components responsible for these venom-induced cerebral infarctions. Birds and small rodents are common prey for snakes [32,33]. Cerebral infarction caused by snaclecs likely helps vipers immobilize and capture their prey by inducing motor disability, as illustrated in Figures 1 and 2. Despite a low occurrence, available data indicate that viper envenomation, including by T. stejnegeri [34], leads to acute cerebral infarction in humans [4,[35][36][37][38][39]; however, experiments performed so far have not clarified which venom components are responsible for viper envenomation-induced cerebral infarction. Our finding that viper snaclecs induce cerebral infarction suggests that they may be the key components and provides new insight into the understanding of viper envenomation-induced cerebral infarction.

Venom Collection and Toxin Purification

The crude venoms from P. mucrosquamatus and T. stejnegeri were collected in Jiangxi province of China. The purification and identification of mucetin and stejnulxin were carried out as previously described [15,16], and the amino acid sequences of their α and β subunits are shown in Figure S1. The purity of the snaclecs was determined by SDS-PAGE and is shown in Figure S2. FPLC (Fast Protein Liquid Chromatography) was used to remove mucetin or stejnulxin from the crude venoms; the crude venoms in the absence of mucetin (CV-mu) and stejnulxin (CV-st) were then reconstituted by recombining the remaining components.

Locomotor Activity in the Open Field Test

The open field test is a standard apparatus used to measure animal locomotion and exploration behavior [40]. The open field test was performed according to a previously described method [41]; briefly, pheasant chicks (190-210 g) and BALB/c mice (20-22 g) of either sex were used in this experiment.
The experimental protocol for animal use in this work (SMKX2017027) was approved.

Continuous Measurement of Cerebral Cortex Blood Flow

Pheasant chicks (190-210 g) and BALB/c mice (20-22 g) of either sex were anesthetized by isoflurane inhalation with an anesthesia respirator (R540IP, RWD Life Science). The head was fixed, and the scalp was cut longitudinally to expose the skull. A gentle saline drip over the exposed surgical opening prevented dehydration of the skull. The blood flow in the cerebral cortex of the animals was monitored by a laser-speckle blood flow imaging system (Version 2.0, RFLSI Pro, RWD Life Science, Shenzhen, China, 2017) before and after the injection of the toxins.

Platelet/Thrombocyte Isolation and Stimulation

Thrombocytes were isolated as previously described, with some modifications [42]. Briefly, blood was collected from the wing vein of adult pheasant chicks and put into a 50 mL sterile polystyrene tube containing 1 mL of 10% EDTA solution. The blood sample was diluted 1:1 with Hank's balanced salt solution (HBSS) without Ca2+ and Mg2+. Diluted blood samples (6 mL) were layered onto lymphocyte separation medium (density = 1.077 g/mL, GE Healthcare) and centrifuged at 1700 × g for 40 minutes at room temperature. The thrombocytes in the intermediate layer were collected and washed with Tyrode's buffer A (137 mM NaCl, 2 mM KCl, 0.3 mM NaH2PO4, 12 mM NaHCO3, 5.5 mM glucose, 0.35% BSA, 1 mM MgCl2 and 0.2 mM EDTA, pH 6.5) by centrifugation at 450 × g for 5 minutes at room temperature. The platelets were isolated from the mice by differential centrifugation. Briefly, platelet-rich plasma (PRP) was isolated from citrated whole blood by centrifuging at 100 × g for 5 min at room temperature; the platelets were then pelleted at 500 × g for 5 minutes and washed with Tyrode's buffer A. The thrombocytes or platelets were suspended in Tyrode's buffer B (137 mM NaCl, 2 mM KCl, 0.3 mM NaH2PO4, 12 mM NaHCO3, 5.5 mM glucose, 0.35% BSA and 2 mM CaCl2, pH 7.4) for further use. Aggregation was elicited by the addition of the toxins to the platelets/thrombocytes with stirring at 1000 rpm for 5 minutes at 37 °C in a four-channel aggregometer (LBY-NJ4, Techlink, Beijing, China).

Statistical Analysis

A nonparametric test with Dunn's multiple comparison test was used to identify statistically significant differences between groups; a code sketch of this analysis is given below. Analyses were performed with GraphPad Prism 8 software. The results are reported as mean ± SD, with significance accepted at p < 0.05.

Supplementary Materials: Figure S1: The amino acid sequences of some snake venom C-type lectin-like proteins; Figure S2: The purity of purified mucetin and stejnulxin as determined by SDS-PAGE and Coomassie blue staining.
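For reference, the analysis above can also be scripted. The following is a minimal sketch in Python (the study itself used GraphPad Prism 8) of a Kruskal-Wallis omnibus test followed by Dunn's multiple comparison test; the group labels and travel distances below are hypothetical illustration data, and the post-hoc step relies on the third-party scikit-posthocs package.

import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp  # third-party package: pip install scikit-posthocs

# Hypothetical open-field travel distances (m), N = 6 animals per group.
df = pd.DataFrame({
    "group": ["saline"] * 6 + ["mucetin"] * 6 + ["stejnulxin"] * 6,
    "distance": [13.1, 13.6, 12.9, 13.8, 13.2, 13.7,
                 2.1, 1.8, 2.0, 1.9, 2.2, 1.7,
                 1.4, 1.2, 1.5, 1.3, 1.2, 1.3],
})

# Omnibus Kruskal-Wallis test across the three groups.
h_stat, p_value = kruskal(*[g["distance"].values for _, g in df.groupby("group")])
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.4f}")

# Dunn's post-hoc pairwise comparisons with Bonferroni adjustment.
print(sp.posthoc_dunn(df, val_col="distance", group_col="group", p_adjust="bonferroni"))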
Developmental Programming of Obesity and Liver Metabolism by Maternal Perinatal Nutrition Involves the Melanocortin System

Maternal obesity predisposes offspring to metabolic dysfunction and Non-Alcoholic Fatty Liver Disease (NAFLD). Melanocortin-4 receptor (Mc4r)-deficient mouse models exhibit obesity during adulthood. Here, we aim to determine the influence of the Mc4r gene on the liver of mice subjected to perinatal diet-induced obesity. Female mice heterozygous for Mc4r were fed an obesogenic or a control diet for 5 weeks and then mated with heterozygous males, with the same diet continued throughout pregnancy and lactation, generating four offspring groups: control wild type (C_wt), control knockout (C_KO), obese wild type (Ob_wt), and obese knockout (Ob_KO). At 21 days, offspring were genotyped, weaned onto a control diet, and sacrificed at 6 months old. Offspring phenotypic characteristics, plasma biochemical profile, liver histology, and hepatic gene expression were analyzed. Mc4r_ko offspring showed higher body, liver, and adipose tissue weights than the wild type animals. Histological examination showed mild hepatic steatosis in the C_KO offspring group. The expression of hepatic genes involved in regulating inflammation, fibrosis, and immune cell infiltration was upregulated by the absence of the Mc4r gene. These results demonstrate that maternal obesogenic feeding during the perinatal period programs offspring obesity development, with involvement of the Mc4r system.

Introduction

Obesity is a chronic, multifactorial and pro-inflammatory disease defined as a disproportionate increase of body weight with excessive adipose tissue accumulation [1]. The prevalence of obesity is rising alarmingly worldwide, with more than 640 million obese patients and an estimated 1.5 billion overweight people according to the World Health Organization (WHO) [2]. This increase in adiposity is associated with all causes of mortality, a significant decrease in lifespan of up to 20 years, and a tremendous fiscal burden [3,4]. Obesity is associated with multiple comorbidities representing the main causes of illness and death in affluent societies, especially cardiovascular and cerebrovascular illnesses, type 2 diabetes mellitus, many cancers, and Non-Alcoholic Fatty Liver Disease (NAFLD) [1,5]. NAFLD is now the most common cause of liver disease in these affluent countries; it may progress through steatosis, inflammation and injury (non-alcoholic steatohepatitis, NASH), fibrosis, cirrhosis, and hepatocellular carcinoma [6][7][8]. Considering that the prevalence of obesity and NAFLD in Western countries ranges between 20% and 30%, these alterations in liver morphology and functionality secondary to NAFLD are a major concern for national health policies [6]. The increase in the global obesity rate affects all populations, including women of reproductive age. As a result, the risk of pregnancy loss, maternal gestational diabetes, fetal malformations, and other complications during pregnancy has increased in obese women [9]. Interestingly, retrospective epidemiological human studies and animal interventions have recently highlighted that, during early development, an adverse pro-obesogenic in utero environment plays an important role in promoting offspring obesity and metabolic diseases in later life [10].
Our previous studies have demonstrated that a maternal obesogenic diet during perinatal periods programs the development of obesity and NAFLD in the offspring [11][12][13], although the precise mechanism involved remains uncertain. The etiology of obesity is mostly thought of, perhaps simplistically, as caloric intake greater than energy expenditure. However, the underlying mechanisms are much more complex and include genetic predisposition, epigenetic regulation, environmental factors, and/or interactions with the gut microbiota [1,14]. Indeed, current Genome Wide Association Studies (GWAS) point to several key genes with very important influences on the origin and development of obesity: these include Fat Mass and Obesity-Associated (FTO), Leptin, Leptin Receptor, Pro-Opiomelanocortin (Pomc), and Melanocortin Receptor 4 (Mc4r) [15]. Importantly, multiple meta-analyses and GWAS studies have confirmed the association between Mc4r polymorphisms and obesity and its associated comorbidities [16][17][18]. Mc4r is a critical mediator in energy homeostasis, regulating both food intake and energy expenditure as well as affecting blood pressure homeostasis [19,20]. Interestingly, a novel study in rats by Tabachnik et al. demonstrated that a perinatal obesogenic environment increased histone acetylation marks at the Mc4r promoter in the offspring. This epigenetic regulation was also associated with thyroid hormone metabolism as well as with the inhibition of Mc4r transcription [21]. The aim of this study, therefore, was to investigate ab initio whether the Mc4r gene plays a role in the maternal programming of offspring obesity and consequent NAFLD.

Animals and Experimental Design

All experiments were approved by the Local Ethics Committee of King's College London, and were conducted in accordance with the Home Office Animals (Scientific Procedures) Act of 1986 guidelines (United Kingdom). Mice were housed under controlled conditions (light-dark cycle 12 h, 21 ± 2 °C, 40-50% humidity) with food and water available ad libitum. Adult female mice heterozygous for Mc4r with a C57BL/6J background were fed an obesogenic diet (824053, Special Diets Services, Witham, UK) [22] supplemented with sweetened condensed milk (Nestlé, Vevey, Switzerland) and fortified with 3.5% mineral mix (AIN 93G; Special Diets Services) and 1% vitamin mix, or a control standard laboratory diet (RM1, Special Diets Services), for 5 weeks (dietary composition in Table 1). By then, as previously described, the obesogenic-fed heterozygous females were around 50% heavier than the control-fed females [23]. The female mice were mated with control-fed heterozygous males from the same litter. Conception was determined by vaginal plug formation. The female animals were maintained on their allocated diets throughout gestation and lactation, as previously described [11]. Litter sizes from both maternal feeding groups were similar [23]. After birth, litters were standardized to six pups each, with an equal number of males and females when possible. At day 21 postnatally, offspring were genotyped and weaned onto a control diet until 6 months old. They were then killed by a Schedule 1 method after an overnight fast. Blood samples were collected, centrifuged (10,000× g, 10 min at 4 °C), and stored at −80 °C until further analysis. Liver and inguinal adipose depots were harvested, weighed, and stored at −80 °C. A representative sample of each liver was fixed in 10% formalin for histological analysis.
Liver Histology

Offspring liver samples at 6 months of age (n = 5-6 per experimental group) were fixed in formalin (10%), dehydrated, and subsequently embedded in paraffin. Liver samples were cut into 4-µm sections, mounted, and dried overnight at 37 °C. The liver sections were then stained with hematoxylin and eosin (H&E), and the extent of steatosis was assessed by an expert liver pathologist blinded to the group identities, as previously described [24].

Plasma Analysis

Plasma glucose, triglycerides, alanine aminotransferase (ALT), and aspartate aminotransferase (AST) concentrations were assayed by the Royal Free Hospital Clinical Biochemistry Department (London, UK).

Statistical Analysis

All data are expressed as the mean ± standard error of the mean (SEM). Two-way ANOVA was applied to study the effect of maternal obesogenic feeding (C vs. Ob) and offspring genotype (wt vs. knockout), and comparison of the means was carried out by Tukey post-hoc test; a code sketch of this analysis is given after the phenotypic results below. The statistical unit used throughout the analysis was the number of dams. Statistical significance was accepted at a p value of less than 0.05. IBM SPSS 24 software (24.0, SPSS Statistics, IBM, Chicago, IL, USA) was used for the statistical analysis.

Phenotypic and Histological Characteristics

We first analyzed the effect of maternal obesogenic feeding on phenotypic parameters and hepatic morphology (Figure 1). As we have previously reported, at 6 months of age, the body weight of Mc4r_ko and wild type mice from control- and obesogenic-fed dams had already reached a plateau [23]. Thus, at this age, there was a marked genotype effect independent of maternal nutrition, with increased body mass (+0.37-fold, p < 0.001) (Figure 1a), inguinal fat mass (+1.59-fold, p < 0.001) (Figure 1b), and liver weight (+1.51-fold, p < 0.01) (Figure 1c) in KO mice compared to the wild type animals. Furthermore, maternal obesogenic feeding during perinatal periods predisposed the offspring to higher body weight (+0.27-fold, p < 0.05) and inguinal fat deposition (+1.19-fold, p < 0.05). In offspring subjected to a maternal control diet, C_KO mice presented a marked obesity phenotype compared to C_wt animals, with higher body mass (+0.29-fold, p < 0.01), inguinal fat mass (+1.96-fold, p < 0.01), and liver weight (+0.47-fold, p < 0.05). Finally, the combination of maternal obesity and Mc4r gene deletion strongly influenced offspring phenotype when compared to the C_wt group, showing a marked increase in body mass (+0.48-fold, p < 0.001), inguinal fat mass (+2.47-fold, p < 0.001), and liver weight (+0.60-fold, p < 0.001). Regarding hepatic morphology (Figure 1d), there was mild steatosis in C_KO animals with no changes in the general liver architecture.

Figure 1. C_wt, control wild type; C_KO, control knockout; Ob_wt, obese wild type; Ob_KO, obese knockout; n.s., non-significant; * p < 0.05; ** p < 0.01; *** p < 0.001; T p > 0.05 and p < 0.1.
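As a concrete illustration of the analysis referenced in the Statistical Analysis section, the following is a minimal sketch in Python (the study itself used IBM SPSS 24) of a two-way ANOVA with a maternal diet x offspring genotype design followed by a Tukey post-hoc test; all body-weight values, group sizes, and variable names are hypothetical.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical 6-month body weights (g); group labels mirror the paper's design.
df = pd.DataFrame({
    "diet": ["C"] * 8 + ["Ob"] * 8,
    "genotype": (["wt"] * 4 + ["KO"] * 4) * 2,
    "weight": [28, 30, 29, 31, 37, 39, 38, 40,
               34, 35, 33, 36, 41, 43, 42, 44],
})

# Two-way ANOVA with an interaction term (diet x genotype).
model = ols("weight ~ C(diet) * C(genotype)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD comparisons across the four diet-genotype groups.
df["group"] = df["diet"] + "_" + df["genotype"]
print(pairwise_tukeyhsd(df["weight"], df["group"]))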
Plasma Biochemical Features

Plasma glucose concentration (Figure 2a) showed a tendency to be increased (+0.38-fold, p < 0.1) in the offspring subjected to maternal obesity compared to those from control-fed dams. However, the absence of the Mc4r gene had no effect on this parameter. Furthermore, there was a decrease in plasma triglyceride concentration (Figure 2b) in offspring from obese mothers (−0.30-fold, p < 0.05) compared to the controls. However, this effect was mainly driven by the elevated TG levels in the C_KO group compared to the C_wt (+0.75-fold, p < 0.05), Ob_wt (+0.67-fold, p < 0.1), and Ob_KO groups (+1.05-fold, p < 0.05). Regarding the hepatic transaminases (Figure 2c,d), there was a trend of increased ALT caused by maternal obesity (+0.74-fold, p < 0.1), which may be explained by the elevated concentrations of this transaminase in the Ob_KO group with respect to the C_wt group (+1.68-fold, p < 0.1). Additionally, AST was markedly increased by the absence of the Mc4r gene in offspring from control-fed dams (+0.40-fold, p < 0.05) and partially increased in wild type animals subjected to maternal obesogenic feeding (+2.80-fold, p < 0.1).

Figure 2. C_wt, control wild type; C_KO, control knockout; Ob_wt, obese wild type; Ob_KO, obese knockout; n.s., non-significant; * p < 0.05; T p > 0.05 and p < 0.1; TG: triglycerides; ALT: alanine aminotransferase; AST: aspartate aminotransferase.

Discussion

Observations of human polymorphisms highlight the Mc4r gene as one of the key genes for understanding obesity risk and its associated comorbidities [16][17][18]. Mc4r has been shown to be an energy balance modulator. A recent mouse study described that the activation of Mc4r reduces food intake and increases energy expenditure, preventing obesity-associated increases in adiposity [25]. Additionally, the absence of Mc4r inhibits brown adipose tissue activity; therefore, stimulation of the Mc4r pathway may be a potential target for increasing energy expenditure and accelerating weight loss [26]. Although melanocortin receptors are predominantly expressed in the brain, Mc4r is also known to be present in liver cells [27,28].
Therefore, the lack of this gene not only exerts systemic effects through the nervous system, but may also have a direct hepatic component. Evidence from liver regeneration after acute liver injury, where rats were subjected to partial hepatectomy, has shown that there is an overexpression of Mc4r in the hepatocytes [29]. Furthermore, NAFLD is the main hepatic manifestation of the metabolic syndrome, is often accompanied by alterations in glucose homeostasis and waist circumference, and has been directly associated with genetic variations of Mc4r [30]. Itoh et al. reported that Mc4r_KO mice developed steatohepatitis when fed a high-fat diet, which was associated with an obese phenotype, insulin resistance, and dyslipidemia. Histologic analysis found enhanced inflammation, macrophage infiltration, hepatocyte ballooning, and, after a year of obesogenic feeding, hepatocellular carcinoma [31]. However, these results should be compared carefully with our experimental model, because the direct, long-term effects of adult obesogenic feeding have a greater impact on mouse metabolism than maternally induced obesity. Probably for this reason, our liver phenotypes did not present as marked a proinflammatory state. In the previous study, the authors also described obesity-related traits in Mc4r-silenced mice fed a control diet; similar to what we have shown here, there was overexpression of TGF-β and Col-1α compared to wild type mice [31]. In vitro studies have also shown that the treatment of isolated liver cells with melanocortin agonists inhibits endotoxin-induced upregulation of the pro-inflammatory cytokines IL-6, IL-1β, and TNF-α by Kupffer cells [28]. Thus, the changes we described in liver gene expression in our Mc4r_ko offspring from control-fed dams may be the initial step for the appearance of later fibrotic markers in the liver, in addition to the detection of infiltrated macrophages and their polarization to different subpopulations. Indeed, there was a tendency toward increased hepatic Mcp1 expression in these animals, which in turn may exacerbate, as we have shown, the hepatic expression of pro-inflammatory and immune system-related genes. Maternal perinatal physiology and environmental insults predispose offspring to metabolic diseases in adult life. Thus, our previous studies with rodent models have demonstrated that a hypercaloric diet enriched in fat and simple sugars during the peri-conception, pregnancy, and/or lactation periods affects offspring phenotype, with increased body weight, visceral fat, and liver and pancreas weights, plus a parallel accumulation of lipids in visceral organs [11][12][13][32]. Our previous results showed that maternal obesity programs the development of a dysmetabolic and NAFLD phenotype, which is critically dependent on the early postnatal period and involves alteration of hypothalamic appetite nuclei signaling by maternal breast milk and neonatal adipose tissue-derived leptin [12,32]. Furthermore, in a perinatal model of mice lacking the Mc4r gene, we demonstrated that maternal obesity (apparently through neonatal leptin exposure) permanently resets the responsiveness of the central sympathetic nervous system, specifically via the hypothalamic paraventricular nucleus melanocortin system, to initiate hypertension [23]. Moreover, in that study, we found increased food intake and leptin plasma levels influenced by maternal obesity and by the lack of the Mc4r gene.
Surprisingly, in the current study, we found that the offspring phenotype was more influenced by the lack of the Mc4r gene than by maternal obesity. Indeed, although maternal obesogenic feeding was associated with higher body weight and adipose depots, there was a lack of steatosis in the liver histological samples. This may be partially explained by the age of these animals: in our previous murine studies with a similar feeding protocol, the steatotic effect induced by maternal obesity was well defined at 12 months but vague at 6 months of age [12,33]. Indeed, the age of these animals is directly proportional to their intra-abdominal adipose tissue accumulation and, therefore, to the abnormal fat infiltration in visceral organs. Interestingly, we did not find an additional effect of the lack of Mc4r in the offspring of obesogenic-fed dams. We may hypothesize that the molecular mechanisms affecting obesity and the associated liver fat accumulation and damage are common to maternal programming of obesity and to Mc4r pathways. For example, appetite regulation operates in both situations, as does the decrease in energy expenditure induced by maternal obesity and by Mc4r blockade [23,26]. Moreover, a study in rats with high-fat diet-induced maternal obesity recently described a downregulation of hypothalamic Mc4r mRNA expression at weaning in the offspring from obese dams [34]. Others have replicated these results, proposing an epigenetic mechanism for the decrease in Mc4r expression in the offspring of obese rats due to histone acetylation in the Mc4r promoter region, which may also be associated with thyroid hormone receptor-β, a transcription inhibitor of this gene [21]. This research group has also described how other Mc4r-related genes involved in obesity through appetite regulation, such as Pomc, may be epigenetically regulated in the offspring as a consequence of maternal obesity [35,36]. As a limitation of this study, the use of animal models and, more specifically, of knockout and perinatally based designs makes the translation of the findings to the general population difficult. However, beyond the ethical considerations of human interventions during pregnancy, rodent models shorten the experimental time and allow the effects to be studied during offspring adult life. Furthermore, the similar genetic and physiological background to humans and the control of external insults and confounding factors make experimental animal models necessary in this field. In addition, although the offspring were phenotypically influenced by maternal obesity, from a metabolic and transcriptomic point of view the effect was partially diluted, which differs from our previously standardized developmental programming protocols [11][12][13][33][37]. This may be due to the Mc4r gene silencing; however, the lack of difference in some of the variables attributable to maternal obesogenic feeding alone may also be due to the limited number of animals and the wide intra-group differences. Finally, the lack of some informative plasma and hepatic biochemical markers, such as liver triglyceride content, and of food consumption data may limit the interpretation of the findings described in the current study.

Conclusions

In conclusion, these results emphasize the importance of the melanocortin system as a target for the development of new therapeutic tools against obesity and its associated implications for liver metabolism through obesogenic feeding and developmental programming.
We showed that dietary changes during the perinatal period may trigger an adaptive response in the offspring that predisposes them to long-term changes in metabolism and physiology. Although the lack of Mc4r induced an increase in body, fat, and liver weights, the interaction with maternal perinatal obesity suggested a protective effect in the Mc4r_ko mice. Thus, offspring from obese mothers did not show liver steatosis and presented lower hepatic expression of proinflammatory and profibrogenic genes. This interaction warrants further research in this model, given the potential to elucidate new mechanistic pathways implicated in the developmental programming of obesity and NAFLD.
A Process and Outcomes Evaluation of the International AIDS Conference: Who Attends? Who Benefits Most?

The objective of the study was to conduct a process and outcomes evaluation of the International AIDS Conference (IAC). Reaction evaluation data are presented from a delegate survey distributed at the 2004 IAC held in Thailand. Input and output data from the Thailand IAC are compared with data from previous IACs to ascertain attendance and reaction trends, which delegates benefit most, and host country effects. Outcomes effectiveness data were collected via a survey and intercept interviews. The data suggest that the host country may significantly affect the number and quality of basic science IAC presentations, who attends, and who benefits most. Intended and executed HIV work-related behavior change was assessed under 9 classifications. Delegates who had attended 1 previous IAC were more likely to report behavior changes than attendees who had attended more than 1 previous IAC. The conference needs to be continually evaluated to elicit the data required to plan effective future IACs.

Introduction

The first International AIDS Conference (IAC) was held in 1985. Its purpose was to share research and medical findings about the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). This event was held annually through 1994, and then every 2 years. Prior to 2000 the conference was held only in developed countries, including Canada, France, Germany, the Netherlands, Italy, Japan, Sweden, and the United States. Beginning in 2000, the International AIDS Society (IAS) decided to rotate the conference between developed and developing countries. Since then the conference has been held in Durban, South Africa; Barcelona, Spain; Bangkok, Thailand; and, most recently, Toronto, Ontario, Canada in August 2006. The IAC is an enormous and costly undertaking. Millions of dollars in sponsorships, exhibition sales, and registration fees are raised to support the conference; the latter covers approximately half of the total cost. The IAC is undoubtedly one of the largest health-related conferences in the world: the XV IAC held in Thailand in 2004 was attended by approximately 16,500 delegates, provided nearly 3000 scholarships, and accepted and orchestrated 490 oral presentations grouped into 75 sessions and 5 conference tracks (ie, Basic Science; Clinical Research, Treatment and Care; Epidemiology and Prevention; Social and Economic Issues; and Policy and Program Implementation). Given the cost of planning and implementing the IAC, as well as the cost in terms of delegate time away from work and travel, accommodation, and registration fees, is it worth it? The conference has never been systematically evaluated. Some input, output, and reaction data were inconsistently collected beginning in 1998, but not published/reported, and the conference's outcomes effectiveness (ie, purported changes the delegates make in their HIV/AIDS work as a result of attending the conference) has never been assessed. A limited budget was set aside by the XV IAC for evaluation. An evaluation team from the United States and South Africa volunteered their time to conduct a process and outcomes evaluation of the IAC using Kirkpatrick's paradigm for evaluating training programs. [1]
Reaction data from the XV IAC were evaluated, and the input and output evaluation results were compared with available data from 2 previous IACs (ie, the 2000 XIII IAC in Durban and the 2002 XIV IAC in Barcelona) to determine the continued viability of the conference. Some of the important questions to ask include: Who attends the conference? Who benefits most? What is the impact, if any, of hosting the conference in a developed vs developing country? Is the focus of the IAC moving too far away from science to continue to attract scientists and researchers? Can the IAC continue to successfully compete with the IAS Conference on HIV Pathogenesis and Treatment and other science- and treatment-focused world conferences in attracting the attention and participation of prominent scientists and researchers? If not, what is its current niche? Is this conference's 5-track system necessary, or is there sufficient mobility between tracks to reduce or eliminate the track system? This article provides preliminary data addressing these questions and investigates the outcomes of the conference. The first IACs focused on the scientific understanding of HIV and AIDS. With no supporting outcomes data, the degree to which major advances in our understanding of HIV/AIDS can be attributed to the IAC is unknown and, as such, evidence supporting what might be considered some of the greatest outcomes of the IAC has been irrevocably lost: eg, key research studies on the pathogenesis, host immune responses, prevention and treatment of the disease, and the more widespread use of antiretroviral therapies in developing countries. The outcomes of more recent IACs are presented in this article.

Methods

The study used a convenience-based random sample of delegates attending the XV 2004 conference in Thailand. Process (including input, output, and reaction) data and outcomes data were collected via a self-report delegate survey. Additional outcomes data were collected via a standardized intercept interview.

Delegate Survey

The delegate survey, written in English and composed of both qualitative and quantitative questions, was developed by the study team and pretested on a sample of South African university students for understandability. The survey included demographic data (eg, primary employment role, country of work, years worked in the HIV/AIDS field), the number of IACs attended, reactions to the conference, and an outcomes evaluation question asking delegates what they planned to do differently in their HIV/AIDS work as a result of attending the XV IAC. The Theory of Reasoned Action [2] supported this outcomes approach.

Intercept Interviews

A semistructured interview guide was developed to individually interview a random selection of delegates. The outcomes evaluation question asked delegates to think about the last IACs they had attended and specify what changes, if any, they had made in their HIV/AIDS-related work as a result of attending the previous IACs. A short background section determined delegate eligibility (eg, attendance at a previous IAC) and gathered demographic data.

Data Collection Methods

The survey sampling design allowed conference tracks to be sampled equally by randomly selecting an equal number of sessions per track to survey, in both morning and afternoon sessions, on 3 days beginning on the second day of the conference.
Not all tracks had sessions in the morning and afternoon on each day of the conference, in which case twice the number of surveys was available for distribution the first time the track had a session (Table 1). The design controlled for multiple surveys being administered to the same delegate by sampling within concurrent sessions and displaying a slide before each session informing delegates of the purpose of the survey and requesting their participation if they had not already completed a survey. This message was reinforced by each session Chair. A cadre of 30 Thai university students was trained to distribute and collect the surveys as intended. In total, 7890 surveys were distributed over the 3 days. Surveys were collected at all the exit doors of the session rooms, and volunteers removed any remaining surveys from the session rooms. Intercept interviews were conducted before, during, and after the conference program over the last 2 days of the conference. Delegates were intercepted randomly at a variety of locations (eg, lounge areas, taxis, and Internet terminal queues). Interceptors informed delegates that they were part of the research team evaluating the conference and asked delegates if they would participate. Those consenting were interviewed on the spot.

Analyses

Delegate survey quantitative data were entered into an EpiData file [3] and validated by double entry. To investigate delegate mobility between tracks, the session in which the participant was sampled was compared with their stated track of interest. Input data (ie, income from delegate fees, total sponsorships, total conference income, number of abstracts received by track) and output data (ie, the number of registered delegates) from the Barcelona and Durban IACs were obtained from the Report on the XV International AIDS Conference (an unpublished International AIDS Society report) and were compared with the data from the Thailand IAC. Historical input data from IACs prior to the one held in Durban were not consistently available. EpiData, [3] EpiInfo, [4] and STATA [5] were used to conduct the analyses, which included descriptive statistics, the chi-square statistic, and regression analyses. Countries of work were collapsed into continents according to the Population Reference Bureau. [6] Nationality of respondents was grouped according to regions and assigned a developed vs developing country code using the Australian Government Overseas Aid Program divisions. [7] Qualitative verbatim responses on the delegate survey were transcribed into Microsoft Word as separate data records per respondent. Following review of delegate responses, broad classifications of self-reported intent to change behavior were identified by one member of the research team and confirmed by a second member. These two team members then independently coded the delegates' comments under 1 or more broad change classifications. Multiple behavior/practice changes on a survey were coded as separate intentions. Inter-coder reliability was assessed using Cohen's kappa coefficient of agreement for nominal scales [8]; a code sketch of this reliability check is given below. Qualitative data collected via the intercept interviews were recorded on a standard interview response worksheet. These data were transcribed into MS Word as separate documents per interviewee, and imported into NVivo 2.0 qualitative analysis software. [9]
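To make the inter-coder reliability step concrete, the following is a minimal sketch in Python of computing Cohen's kappa between two coders; the classification labels and codings below are hypothetical and chosen only to illustrate the calculation.

from sklearn.metrics import cohen_kappa_score

# Hypothetical classifications assigned independently by two coders to ten comments.
coder_a = ["treatment", "advocacy", "advocacy", "funding", "treatment",
           "collaboration", "self-education", "advocacy", "treatment", "policy"]
coder_b = ["treatment", "advocacy", "policy", "funding", "treatment",
           "collaboration", "self-education", "advocacy", "advocacy", "policy"]

# Kappa of 1.0 means perfect agreement; 0 means chance-level agreement.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")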
Response Rate

Of the questionnaires distributed, 2598 were completed and returned, for an overall response rate of 33%. Two invalid questionnaires were discarded, yielding 2596 valid responses. Table 2 shows the response rate by track. Response rates varied significantly by track [χ2(4, N = 2596) = 15.77, P < .01]. Significantly fewer respondents in the basic science and clinical research/treatment/care tracks returned questionnaires compared with the epidemiology/prevention and social/economic tracks. A response rate for the intercept interviews could not be determined, as the number of persons approached who declined to participate was not recorded. A total of 108 participants were surveyed via intercept interviews lasting between 5 and 10 minutes. Nearly half did not meet the inclusion criterion of having attended a previous IAC and were discarded from analyses, leaving 59 viable interviews. Survey and intercept statements describing nonbehavioral benefits (eg, perceived change in knowledge and attitudes, and feeling supported by peers) were excluded from analyses.

Delegate Characteristics

Half of the survey delegates indicated their primary employment role as either researchers/scientists or hands-on clinical care providers (eg, doctors, nurses), and approximately another quarter indicated that they were program/facility administrators/managers or teachers/trainers/educators (Table 3). Respondents' part- or full-time status differed significantly [χ2(1, N = 2515) = 7.23, P < .01], and significant differences [χ2(2, N = 2515) = 205.89, P < .01] were found between the number of respondents who were first-time delegates (53%), those who had attended 1 to 3 previous IACs (32%), and those who had attended 4 or more previous conferences (15%). The intercept delegates were primarily administrators/managers (32%) and researchers/scientists (29%). The remainder were policy-makers, clinical/service providers, community workers, and media representatives. Approximately one third were from North America (31%), one quarter from Europe/Middle East (24%), and the rest from Africa (21%) and Asia/South Pacific (19%).

Input Findings

Significant differences (all P values < .001) were found between the Durban 2000, Barcelona 2002, and Bangkok 2004 IACs in terms of total conference income, income from delegate fees, total sponsorships, and the value of exhibition sales. In general, the Barcelona conference received significantly higher total conference income than either Bangkok or Durban (12% and 41% higher, respectively), significantly more delegate fee income (3% higher than Bangkok and 43% higher than Durban), and higher exhibition sales income than either Bangkok or Durban (21% and 18% higher, respectively). In general, total sponsorships increased significantly at each of the past 3 conferences. Bangkok generated significantly more income from total sponsorships than either Durban or Barcelona (57% and 14% higher, respectively). The value of sponsored items (ie, donations from pharmaceutical companies and other donors) decreased significantly at each of the past 3 conferences. Durban generated significantly more income from sponsored items than either Barcelona (29% higher) or Bangkok (39% higher). Expenditures of the Bangkok conference, on the other hand, were approximately 35% higher than Barcelona's and 38% higher than Durban's, with the major cost drivers being specific expenditure line items (eg, miscellaneous, press/communication). The expenditure difference between Barcelona and Durban was 7%.
Of the total number of abstracts submitted for the Bangkok conference (N = 10,060), 27% were in the social and economic issues track, 23% pertained to policy and program implementation, 22% to epidemiology and prevention, 22% to clinical research, treatment and care, and 7% were in the basic science track (Figure 1).

Output Findings

The exact number of delegates attending the IAC is not known, but the IAC estimated that approximately 16,500 delegates attended the Bangkok conference.

Reaction Findings

Respondents were asked to rate the conference in terms of conference value, content usefulness, difficulty level of sessions, and whether they would recommend the conference to a peer. Of those responding, 39% rated the conference as 'very useful' to their work and 58% rated it 'somewhat useful'; 66% found the content difficulty level to be 'about right,' and 25% found it 'way' or 'a little too easy'; and 85% said they would recommend the IAC to a peer. Logistic regression analyses (Table 4) showed that survey respondents working in developing countries were twice as likely as those working in developed countries to rate the Thailand conference as useful to their work, and first-time attendees were 3 times more likely. Both variables were significant predictors of usefulness (both P values = .001). Although researchers/scientists were less likely than other professional groups to rate the conference useful to their work, professional group was not a significant predictor of conference usefulness to work. Working in a developing country and fewer years (ie, 0-4 years) of HIV/AIDS experience were significant predictors of recommending the IAC to a peer. Being a researcher or scientist was a significant predictor of not recommending the IAC to a peer. Comparing developing vs developed countries, logistic regressions (Table 5) found that respondents from a developing country were 6 times more likely to have never attended a previous IAC, twice as likely to have no or limited HIV experience, and nearly 3 times more likely to be a teacher/trainer or program/facility manager (all P values = .001). They were significantly less likely to be a researcher or scientist (P = .001). There was no difference between the number of hands-on clinical care and other healthcare provider respondents from developing vs developed countries. Only 547 (21%) survey respondents completed the qualitative section of the survey asking delegates to identify missing conference content. A total of 637 comments were coded, but these centered on quality issues rather than missing content (eg, improving the quality of presentations, especially the basic science presentations; assuring the balance between scientific/clinical and social/policy/prevention content; and the desire for more interactive sessions). The top 2 factors influencing decisions to attend the IAC were conference content (25% of those responding) and networking opportunities (21%). 'Tourist value,' 'recommended by a peer,' and 'close to home' were ranked lowest (4%-8%). When asked what component of the IAC was most responsible for changes in behavior following past IACs attended, respondents identified all forums: didactic (39%), interactive (33%), and informal interactions (29%).
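The logistic regressions reported in Tables 4 and 5 can be sketched as follows; this is a minimal, hypothetical illustration in Python with simulated data and invented variable names, not the study's actual model or dataset. It shows how odds ratios such as 'twice as likely' and '3 times more likely' are obtained by exponentiating the fitted coefficients.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "developing": rng.integers(0, 2, n),  # 1 = works in a developing country
    "first_time": rng.integers(0, 2, n),  # 1 = first IAC attended
})

# Simulate the outcome so that both predictors raise the odds of a 'useful' rating.
linear = -0.5 + 0.7 * df["developing"] + 1.1 * df["first_time"]
df["useful"] = (rng.random(n) < 1 / (1 + np.exp(-linear))).astype(int)

# Fit the logistic regression and report odds ratios.
model = smf.logit("useful ~ developing + first_time", data=df).fit(disp=0)
print(np.exp(model.params))  # exponentiated coefficients = odds ratios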
Outcomes Findings

Forty-one percent of the survey respondents (n = 1062) answered the question, "What will you do differently in your practice, service setting, community or area of work as a result of attending this conference?" Their responses fell under the following broad classifications:

• Treatment: intentions to change patient management and/or treatment, including conducting more risk assessments and counseling, and changing treatment plans [n = 134 (12%)];
• Advocacy: intentions to change or increase advocacy for HIV patients (eg, advocate for drug access, treatment for all) and programs (eg, prevention-of-mother-to-child programs) [n = 120 (11%)];
• Involvement with persons living with HIV/AIDS (PLWHA): changes in, and increases in, involvement with and assistance to PLWHA [n = 97 (9%)];
• Increased policy involvement: more effort to influence policy at organizational, local, regional, or international levels [n = 81 (7%)];
• Collaboration: intentions to increase and establish new collaborations with other researchers, programs, and clinicians [n = 67 (6%)];
• Self-education: intentions to seek more information [n = 50 (4%)]; and
• Funding: intentions to seek more funds to further their work [n = 20 (2%)].

Eighty percent of the intercept interview sample cited a behavior change as a result of attending a past IAC. Of these, 31% worked in North America, 24% in Europe and the Middle East, and 21% in Africa; 32% were administrators/managers and 29% were researchers/scientists. They reported attending between 1 and 7 previous IACs; roughly equal percentages had attended 1 (39%), 2 (27%), or 3 or more IACs.

Process Evaluation

Discussions centering on where to hold the conference have to take cost and revenue issues into consideration. The conference cannot operate at a loss. With the data available to date, host country does not appear to be a factor related to either the cost of implementing the IAC or the amount of income generated. The Bangkok IAC cost significantly more than either Durban or Barcelona, but the cost increases were in line with progressively increasing costs for services, the number of delegates attending, the number of past participants who receive IAC announcements and programs, and the number of scholarships awarded (eg, significantly more local and international scholarships were awarded at Bangkok compared with the 2 previous IACs [χ2(2, N = 6100) = 326.7, P < .01]). The Barcelona conference received more income than either of the developing country sites, but the difference between Barcelona and Bangkok was dramatically smaller than that between Durban and Barcelona, with Durban receiving less income. This finding may be related to South Africa being the first developing country to host the IAC and possible concerns about the quality of the conference. Quality concerns being allayed at Durban may explain the much smaller discrepancy between the incomes and sponsorships generated by the Barcelona and Bangkok IACs. Factored in is the steady reduction in the value of sponsored items (ie, donations from pharmaceutical companies) over the past 3 conferences. This, too, may not be a function of hosting the conference in a developing vs developed country, but rather due to pharmaceutical companies tightening their belts in general. Host country does not appear to affect the number of people who attend. Delegates attend for the conference content and the networking opportunities rather than tourist value and travel distance. Despite the epidemic being in its third decade, IAC attendance has increased over the past 6 years. Informal networking is considered to be as useful as the sessions.
Professional conference organizers monitored 109 sessions and rated the level of attendance (ie, room was full, half-full, or had few attendees). Forty percent of the sessions had few attendees and 35% were half-full. Were delegates networking outside of the sessions, sightseeing, or working elsewhere? The current data do not support any conclusions on this front. The data do support, with the exception of the basic science track, considerable between-track mobility, perhaps indicating delegates' desire for an integrated experience or the perception that the track content was highly integrated. Either way, the mobility and session attendance data support reducing the number of tracks in subsequent conferences. Host country may affect the number and quality of basic science IAC presentations, who attends, and who benefits most. Only 7% of the abstracts submitted to the Thailand conference were basic science. This might be a product of the paucity of new basic science, the lack of international travel funds in federally/nationally funded research money, dissatisfaction with the quality of the basic science component at the 2 previous IACs, and/or the decision to present basic science data at the IAS Conference on HIV Pathogenesis and Treatment and other science-focused conferences rather than at the IAC. The lack of international travel funds in federal grants is definitely an issue for scientists from the United States, but it is unknown whether this also explains the paucity of researchers/scientists attending from Europe. Some data support concerns for basic science quality when the conference is held in a developing country: the regression analyses in this study demonstrated that being a researcher/scientist was a significant predictor of not recommending the IAC to a peer, and the qualitative comments referred to the lack of science, the low quality of the science presentations, and the need to balance psychosocial and policy content with clinical and research content. Bangkok was ideally located to allow substantial numbers of delegates from HIV-burdened developing countries to attend. The number of people attending by country is not known, but the largest numbers of survey respondents were from sub-Saharan Africa (24%) and Asia (32%). Using survey response as a proxy indication of attendance by country is problematic but, at this point, no other data are available. Abstract data by country of work are not available for previous conferences but, anecdotally from persons attending, the majority of delegates attending the South Africa IAC were from developing countries, and noticeably fewer delegates from developing countries attended the Barcelona conference. Comparative data from the Toronto 2006 IAC are needed to determine whether host country really does affect the number of basic science abstracts submitted and the quality of basic science presentations. The authors of this paper did not evaluate the Toronto IAC, nor did they attend, but it is known that some evaluation was conducted. It is hoped that the results will be published, allowing comparisons to be made. Overall reactions to the XV IAC were positive. The majority of survey respondents rated the conference as useful to their work, rated the content difficulty level as 'about right,' and would recommend the IAC to a peer. Working in a developing country, being a first-time IAC attendee, and having less HIV/AIDS experience were significant predictors of usefulness to work and of recommending the IAC to a peer.
The latter 2 variables, however, were highly associated with developing country status: delegates from developing countries were 6 times more likely to have never attended a previous IAC and twice as likely to have no or limited HIV/AIDS experience. Again, data from the Toronto IAC are needed to determine the effects of host country. Did substantial numbers of delegates from developing countries (arguably those likely to benefit most) attend the Toronto conference, or did the combined registration and travel costs greatly limit their attendance? The Toronto registration fee for developing country delegates was significantly reduced, but was it enough to reduce economic barriers?

Given where the epidemic is globally in terms of infection rates and who seems to benefit most, the IAC's niche may be to focus world attention on government discrepancies in responding to the HIV/AIDS epidemic and on the scaling up of currently known prevention and treatment activities in developing countries. Following the Durban IAC and the criticisms aimed at the South African government's lack of response to its HIV/AIDS crisis, IAC press coverage increased dramatically. The Thailand conference attracted a record number of journalists (ie, more than 2500) and written articles about the conference (ie, over 2700), with positive coverage (ie, favorable reviews) exceeding negative coverage by a ratio of 2:1. Given that the burden of the epidemic is in developing countries, the possible effect of host country in allowing developing country delegates to attend the conference, and the Thailand IAC data indicating that developing country delegates have the most to gain and do benefit most, perhaps all or more than half of future IACs should be held in developing countries. The rapid scale-up of known prevention and treatment activities in developing countries has not lived up to expectations and, perhaps, rather than trying to compete with the IAS Conference on HIV Pathogenesis and Treatment and other science- and treatment-focused conferences, the IAC should focus on the dissemination of information on known prevention and treatment activities to emerging countries.

Outcomes Evaluation

The survey outcomes data indicated that 91% of the delegates who answered the question intended to change their HIV/AIDS work as a function of attending the XV IAC, and 80% of the valid intercept interviewees indicated they had changed their behavior as a result of attending past IACs. In hindsight, Kirkpatrick's model may not have been the best evaluation model to employ: it recognizes behavior change, but does not consider that no intention to change behavior might also constitute an outcomes success if the conference validated/reinforced what attendees already do. Nevertheless, 7 broad intended changes in HIV/AIDS work behavior domains were reported by survey respondents. Respondents who had attended previous IACs reported that they had made changes in these same broad behavior change categories and attributed the changes to attending the IAC. Survey respondents who had attended just 1 previous IAC were significantly more likely to report making a change in their HIV/AIDS work as a result of attending a past IAC than those who had attended more than 1. With the exception of the development status of the country of work, no other provider background variables significantly predicted behavior change.
More survey delegates from developing than developed countries reported an intention to change their behavior as a result of attending the XV IAC.

A major limitation of the process and outcomes evaluation is the lack of delegate data collected via the IAC registration form. Without knowing the demographics of the entire delegate population, one cannot gauge whether the survey respondents were representative of all registered delegates. Other limitations of the study include the low overall survey response rate in general and the low response rate to the outcomes question in particular. Two-thirds of the sample did not complete and hand in the questionnaire and, of those who did, 41% did not answer the outcomes question. Given the demographics of those participating in the evaluation, the outcomes are more representative of delegates from developing than developed countries, those with less experience in the field of HIV/AIDS, and delegates attending either their first or second IAC.

Conclusion

If host country is not a factor in the cost of implementing the IAC, the amount of income generated, or the overall numbers attending, but is a factor in allowing delegates from emerging and developing countries (ie, those most likely to benefit) to attend, the IAC might reconsider its plan to host the conference every other year in a developed country. It is recommended that systematic evaluation data from future IACs be collected and analyzed to confirm or negate the trends found in this study and thereby provide the IAC with the necessary information to decide future country locations based on who attends and who benefits most.

Authors and Disclosures

Views expressed in this paper are those of the authors and are in no way attributable to the institutions in which they work, nor to the persons acknowledged. Bernadette Lalonde, PhD, has disclosed no relevant financial relationships. Jacqueline E. Wolvaardt, MPH, has disclosed no relevant financial relationships. Elize M. Webb, MPH, has disclosed no relevant financial relationships. Amy Tournas-Hardt, MAA, MPH, has disclosed no relevant financial relationships.

Funding Information

The authors contributed their time to the development and implementation of the study. Travel and registration to attend the XV International AIDS Conference in Thailand, where the study was conducted, were contributed by the IAS.
Inflammatory Cytokine TSLP Stimulates Platelet Secretion and Potentiates Platelet Aggregation via a TSLPR-Dependent PI3K/Akt Signaling Pathway

Aims: Thymic stromal lymphopoietin (TSLP) plays an important role in inflammatory diseases and is over-expressed in human atherosclerotic artery specimens. The present study investigated the role of TSLP in platelet activation and in thrombosis models in vitro and in vivo, as well as the underlying mechanism and signaling pathway. Methods and Results: Western blotting and flow cytometry demonstrated that the TSLP receptor was expressed on murine platelets. According to flow cytometry, platelet stimulation with TSLP induced platelet degranulation and integrin αIIbβ3 activation. A TSLPR deficiency caused defective platelet aggregation, defective platelet secretion and markedly blunted thrombus growth in perfusion chambers at both low and high shear rates. TSLPR KO mice exhibited defective carotid artery thrombus formation after exposure to FeCl3. TSLP increased Akt phosphorylation, an effect that was abrogated by the PI3K inhibitors wortmannin and LY294002. The PI3K inhibitors further diminished TSLP-induced platelet activation. TSLP-mediated platelet degranulation, integrin αIIbβ3 activation and Akt phosphorylation were blunted in platelets that lacked the TSLP receptor. Conclusion: This study demonstrated that the functional TSLPR was surface-expressed on murine platelets. The inflammatory cytokine TSLP triggered platelet activation and thrombus formation via TSLPR-dependent PI3K/Akt signaling, which suggests an important role for TSLP in linking vascular inflammation and thrombo-occlusive diseases.

Introduction

Platelet adhesion and subsequent aggregation at the vascular injury site are key events required for hemostasis [1,2]; however, they are also critical for the development of acute thrombotic occlusion at regions of atherosclerotic plaque rupture, which reflects the major pathophysiological mechanism underlying ischemic diseases, such as myocardial infarction or stroke [3]. Platelet activation is induced by platelet agonists, such as ADP, collagen or thrombin [4]. The agonists lead to platelet degranulation, shape changes, integrin αIIbβ3 activation and adhesion to the vascular wall [5]. Apart from thrombosis, there is increasing evidence that platelets are critically involved in the pathogenesis of inflammatory diseases [6,7] via interactions with a variety of inflammatory cells [8]. During inflammatory stimulation, platelets rapidly adhere to the endothelium or the subendothelial extracellular matrix at sites of vascular endothelial injury [6]. Cytokines, including tumor necrosis factors, interleukins, interferons, and colony stimulating factors, are produced by macrophages, T-cells and monocytes, as well as by platelets, endothelial cells and vascular smooth muscle cells. They are central players in vascular inflammation via the recruitment of leukocytes, which leads to the progression of atherosclerosis and plaque destabilization [9]. Accumulating literature suggests a delicate role of cytokines in atherothrombosis, and members of the cytokine family have been shown to activate platelets via their receptors [10,11]. However, the exact signaling mechanisms of platelet activation by inflammatory cytokines remain unknown.
Thymic stromal lymphopoietin (TSLP) is a newly identified interleukin-7-like cytokine, which was originally isolated from a murine thymic stromal cell line [12] and characterized as a lymphocyte growth factor [13]. Substantial progress has been made in the understanding of the biological responses mediated by TSLP since this cytokine was first cloned. Previous studies have demonstrated that TSLP has important actions in inflammatory and allergic diseases, including rheumatoid arthritis, colonic inflammation and asthma [14–…].

Human platelet preparation

Human platelets were isolated as previously described [35]. Blood from healthy volunteers was collected in ACD buffer and centrifuged at 200 g for 20 minutes. The obtained platelet-rich plasma was added to modified Tyrode-HEPES buffer (137 mM NaCl, 2.8 mM KCl, 12 mM NaHCO3, 5 mM glucose, 0.4 mM Na2HPO4, 10 mM HEPES, 0.1% bovine serum albumin, pH 6.5). After centrifugation at 900 g for 10 minutes and removal of the supernatant, the resulting platelet pellet was resuspended in Tyrode-HEPES buffer (pH 7.4, supplemented with 1 mM CaCl2).

Mouse platelet preparation and transfusion

Mice (8-10 weeks old) were anesthetized with pentobarbital (100 mg/kg) and bled from the retroorbital plexus with the use of heparin-coated glass capillary tubes. Blood was collected into tubes that contained 3% ACD (1/9 vol/vol). Platelet-rich plasma (PRP) was obtained by centrifugation at 300 g for 7 minutes. The PRP was subsequently centrifuged at 640 g for 5 minutes to pellet the platelets. After two further washing steps, the pellet of washed platelets was resuspended in modified Tyrode-HEPES buffer (pH 7.4, supplemented with 1 mM CaCl2). Platelets from the suspension were then diluted with PBS to a concentration of 200×10^6 platelets in 0.15 mL and transfused via the jugular vein into recipient mice immediately prior to TSLP or saline administration.

Western blot analysis

Platelets or DCs were resuspended in lysis buffer that contained a protease inhibitor cocktail (Sigma-Aldrich). Following a 30 minute centrifugation at 16,000 g at 4°C, the supernatant was collected for a Bradford assay (Biorad) to determine the protein concentration. After boiling the samples for 10 minutes at 95°C in Roti®-Load1 (Roth), the cell lysates were separated by 10% SDS-PAGE and blotted onto nitrocellulose or PVDF membranes. The membranes were blocked for 1 hour with 10% nonfat milk or 5% BSA in TBS-0.1% Tween 20 (TBST). The membranes were subsequently incubated with primary antibody against TSLP (1:500; eBioscience, USA) or against Thr308 or Ser473 pAkt (1:1000; Cell Signaling) at 4°C overnight. After washing with TBST, the blots were incubated with the appropriate secondary antibody conjugated with horseradish peroxidase (HRP) (1:2000; Cell Signaling) for at least 2 hours. Antibody binding was detected with the ECL detection reagent, and the bands were quantified with Quantity One software (Biorad).

Flow cytometry

P-selectin expression was measured using an FITC-labeled mouse anti-human P-selectin monoclonal antibody (BD Biosciences). Activated integrin αIIbβ3 was quantified through the binding of the FITC-labeled mouse anti-human monoclonal antibody PAC-1 (BD Biosciences). The platelets were activated using recombinant human or murine TSLP (R&D Systems). TSLPR expression was analyzed using a PE-conjugated mouse anti-human TSLP receptor antibody (eBioscience, USA) or a PE-conjugated anti-mouse TSLPR antibody (eBioscience, USA). Corresponding isotype controls were used for each antibody.
A two-color analysis of mouse platelet activation was conducted using fluorophore-labeled antibodies for P-selectin expression (Wug.E9-FITC) (BD Biosciences) and for the active form of the αIIbβ3 integrin (JON/A-PE) (BD), as previously described [6].

Aggregometry

Light transmission aggregometry (Chrono-Log Corp, Havertown, PA, USA) was performed with isolated human platelets (2.5×10^5/μl). After calibration, the agonists were added at the indicated concentrations, and aggregation was measured for 6 minutes with a stir speed of 1000 rpm at 37°C. The extent of aggregation was quantified as the % of light transmission. The data analysis was performed with AGGRO/LINK8 software (Chrono-Log).

Dense-granule secretion

ATP release was monitored in parallel with platelet aggregation. To examine the effects of TSLP on ATP release, washed platelets were incubated with TSLP (200 ng/ml) or thrombin (0.01 U/ml), and ATP in the supernatant was measured by the addition of luciferin-luciferase reagent. Quantification was performed using an ATP standard. Murine PRP was incubated for 30 minutes at 37°C with 3H-serotonin (2 µCi [0.074 MBq]/mL), washed once with HEN buffer, and then resuspended in HEPES-Tyrode buffer that contained 1 µM imipramine and 1 mM CaCl2. The platelets were stimulated with thrombin for 6 minutes at 37°C. The reactions were stopped with an equal volume of 0.1 M EDTA/2% formaldehyde and centrifuged for 5 minutes at 10,000 g. 3H in the supernatants and pellets was counted, and the percentage of 5-HT secretion was defined as the agonist-related increase in extracellular 3H divided by the total intracellular 3H at the start of the experiment.

Perfusion flow chamber assays

The ex vivo perfusion flow chamber thrombosis model was performed at high (1800 s^-1) and low (600 s^-1) shear rates, as described in previous studies [36]. We used a rectangular parallel-plate flow chamber (Glycotech, Rockville, MD). Briefly, rectangular (0.1×1 mm) glass capillary microslides were coated with 100 µg/mL type-I collagen fibrils (Sigma-Aldrich, Saint Louis, USA) overnight at 4°C. Murine blood was perfused over the collagen-coated surface under a controlled flow rate with the use of a syringe pump (Harvard Apparatus, Holliston, MA). The blood was perfused for 4 minutes, followed by 2 minutes of perfusion with a rinsing buffer (NaCl 130 mM, KCl 2 mM, NaHCO3 12 mM, CaCl2 2.5 mM, MgCl2 0.9 mM, glucose 5 mM, pH 7.4) at 37°C. Ex vivo thrombus formation was monitored using WT and TSLPR KO platelets in the presence of 200 ng/ml TSLP. Platelet aggregation and thrombus formation were recorded in real time over the course of perfusion under a bright field with a Zeiss Axiovert 135 inverted microscope and a computer (IBM IntelliStation Z Pro) using the Slidebook program (Intelligent Imaging Innovations).

Ferric chloride carotid artery thrombosis model [37]

WT and TSLPR KO mice (8-10 weeks old) were anesthetized via the administration of pentobarbital (100 mg/kg) and secured supine under a dissecting microscope. The right carotid artery was exposed by blunt dissection. A miniature Doppler flow probe was placed on the surface of the artery, and the flow was measured to ensure proper placement of the probe. A 2.5-mm strip of filter paper was saturated with 10% FeCl3 (Sigma-Aldrich) and applied to the adventitial surface of the exposed artery for 2.5 minutes to induce vessel damage. The groups comprised the WT, TSLPR KO and TSLPR KO + WT washed platelet transfusion groups.
Saline or TSLP (200 ng/ml) was injected into the jugular vein of the mice prior to the initiation of the carotid artery injury. The blood flow in the carotid artery following the FeCl3-induced injury was monitored until complete vessel occlusion was observed. The arterial flow rate was monitored for 30 minutes.

Statistical analysis

The data are presented as means ± SD or SEM, and n represents the number of experiments. The data were analyzed by Student's t-test, one-way ANOVA coupled with the Student-Newman-Keuls multiple comparison test, or one-way ANOVA with Dunnett's post hoc test (a minimal open-source sketch of this type of comparison appears below). Differences were considered statistically significant if P < 0.05. All statistical analyses were performed with the SPSS 17.0 statistical package.

TSLPR protein expression in human and murine platelets

To investigate whether the TSLPR is expressed on human platelets, we examined it via flow cytometry and western blot. Western blotting and flow cytometry of human and murine platelets indicated that the TSLPR was expressed on platelets (Fig. 1). Because DCs are well known to express the TSLPR [21,22,25], DCs served as the positive control.

[Figure 2. A. Flow cytometry of P-selectin expression in human platelets after TSLP stimulation (ng/ml). ADP (5 μM) and thrombin (0.01 U/ml) served as positive controls. Arithmetic means ± SEM (n=8) are shown; *(p<0.05) and **(p<0.01) indicate significant differences compared with resting platelets. B. Flow cytometry of activated integrin αIIbβ3 (PAC-1) expression in human platelets after TSLP stimulation (ng/ml). ADP (5 μM) and thrombin (0.01 U/ml) served as positive controls. Arithmetic means ± SEM (n=8) are shown; **(p<0.01) indicates a significant difference.]

TSLP potentiated platelet aggregation following stimulation with low dose ADP (44.6 ± 11.8 vs. 21.1 ± 6.8, p<0.01) or low dose thrombin (47.7 ± 5.8 vs. 24.0 ± 7.6, p<0.01); this effect was not identified after stimulation with high dose ADP (Fig. 3 C, D). Thus, TSLP can directly interact with platelets and plays a stimulatory role that synergizes with low concentrations of platelet agonists to induce platelet aggregation.

Effects of TSLP on platelet dense granule secretion

Platelet secretion plays a critical role in the potentiation of platelet activation induced by low dose agonists. To determine whether platelet secretion accounted for the potentiating effect of TSLP on platelet aggregation, we examined TSLP-induced ATP release, which indicates the secretion of dense granules in human platelets. TSLP alone was sufficient to induce the release of ATP (ATP concentrations were 451.2 ± 51.9 nmol/L at the basal level versus 657.8 ± 72.4 nmol/L for TSLP) (Fig. 3 E); however, the amount of ATP release induced by stimulation with TSLP alone was substantially lower than the release induced by platelet agonists such as thrombin (thrombin at 0.01 U/ml induced an approximately 33-fold increase in ATP release compared with the basal level).

Effects of TSLPR deficiency on platelet aggregation, secretion and in vitro thrombus formation

To evaluate whether the absence of TSLPR affected platelet function, we employed platelet aggregometry and the perfusion chamber model. TSLPR KO platelets exhibited markedly defective aggregation following stimulation with low dose thrombin compared with WT platelets (47.0 ± 8.2 vs. 64.2 ± 8.1, p<0.01); platelet aggregation exhibited a 25% reduction in the TSLPR KO relative to WT platelets (Fig. 4 A, B).
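Group comparisons of this kind (several genotype/treatment groups, one-way ANOVA followed by pairwise post hoc tests) can be reproduced with standard open-source tools. The sketch below uses SciPy with hypothetical aggregation values and Tukey's HSD as a stand-in for the Student-Newman-Keuls procedure, which SciPy does not provide; it illustrates the analysis type only and is not the authors' SPSS pipeline.

import numpy as np
from scipy import stats

# hypothetical % aggregation values for three groups (n = 6 each)
rng = np.random.default_rng(0)
wt = rng.normal(64.2, 8.1, 6)      # WT platelets
ko = rng.normal(47.0, 8.2, 6)      # TSLPR KO platelets
ko_tx = rng.normal(60.0, 8.0, 6)   # TSLPR KO + WT platelet transfusion

# one-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(wt, ko, ko_tx)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# pairwise post hoc comparisons (Tukey's HSD as a stand-in for SNK)
print(stats.tukey_hsd(wt, ko, ko_tx))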
Consistent with platelet aggregation, platelet serotonin release was significantly reduced in the TSLPR KO platelets in response to low dose thrombin (41.2 ± 8.0 vs. 28.9 ± 7.1, p<0.05) (Fig. 4 C). To assess the effects of TSLPR deficiency on platelet thrombus formation under flow conditions, we performed perfusion experiments using glass coverslips coated with type I collagen fibrils. Consistent with the data from the in vitro platelet aggregation, in this ex vivo experiment we observed significantly defective thrombus formation upon stimulation with TSLP in the TSLPR KO mice at both low (600 s^-1) and high (1800 s^-1) shear rates compared with the WT mice (Fig. 4D).

Effects of TSLPR deficiency on FeCl3-induced carotid artery thrombosis

To evaluate whether the absence of TSLPR affected thrombus formation in the carotid artery, we employed the ferric chloride carotid artery thrombosis model. We demonstrated that TSLP stimulation shortened the vessel occlusion time of the WT mice. Following stimulation with TSLP, the vessel occlusion time was significantly delayed in the TSLPR KO mice compared with the WT mice (P<0.01), but the occlusion times were not significantly different in the TSLPR KO plus WT washed platelet transfusion group (P>0.05) (Fig. 5 A).

Involvement of the PI3K/Akt pathway in TSLP-dependent platelet activation

To determine whether TSLP affected platelet function through the PI3K/Akt signaling pathway, we measured platelet p-Akt levels following stimulation with TSLP. In the western blot analysis, TSLP significantly increased Akt phosphorylation at Thr308 and Ser473, which could be prevented by preincubation with the PI3K inhibitors wortmannin (100 nM) and LY294002 (25 μM) (Fig. 6A). Consistent with these findings, the effects of TSLP on platelet activation (e.g., degranulation and integrin αIIbβ3 activation) were abolished after preincubation with the PI3K inhibitors wortmannin (P-selectin: 18 …) (Fig. 6 B).

TSLPR-dependency of TSLP-induced platelet Akt phosphorylation and activation

To determine whether the effects of TSLP on platelet activation were attributable to the activation and downstream signaling of its receptor TSLPR, we analyzed TSLP-dependent platelet degranulation and integrin αIIbβ3 activation, as well as Akt phosphorylation, in WT and TSLPR KO platelets (Fig. 7). As shown in Fig. 7 A, TSLP-dependent Akt phosphorylation was abrogated in the TSLPR KO platelets compared with the WT platelets. Likewise, TSLP-induced platelet degranulation (22.2 ± 7.1 vs. 14.5 ± 6.5, p<0.05) and integrin αIIbβ3 activation (42.3 ± 11.0 vs. 25.6 ± 10.6, p<0.05) were significantly reduced in the TSLPR KO platelets compared with the WT platelets, whereas ADP-dependent platelet activation was unaffected (Fig. 7B).

Discussion

Although there have been numerous studies examining the effects of TSLP in allergy and inflammation, TSLP may play a role not only in allergic disease but also in other diseases. Although classified as a hematopoietin receptor based on its structural homology, the TSLPR subunit contains notable differences compared with the canonical hematopoietin receptors. Our previous study demonstrated that the inflammatory cytokine TSLP is over-expressed in human atherosclerotic artery specimens [27]. T cells, B cells, basophils, monocytes, eosinophils, and DCs derived from hematopoietic cells express functional TSLPRs [21,22]. Platelets are also derived from the megakaryocytes of hematopoietic cells. Whether platelets also express the TSLPR was unknown.
Whether platelets express a functional TSLPR and whether the inflammatory cytokine TSLP plays a role in platelet function were completely unknown. Our previous study demonstrated that the TSLPR was expressed on human platelets [34], and our present data demonstrate that the TSLPR is also expressed on murine platelets (Fig. 1). Therefore, it is important to further examine the effects of TSLP/TSLPR on platelet function. We determined whether TSLP/TSLPR could alter platelet aggregation and secretion, as well as thrombosis models in vitro and in vivo.

Platelets can sense different signals during activation and selectively release their granule contents, such as P-selectin, which translocates to the cell surface when platelets are activated. Platelet activation also results in the conversion of the αIIbβ3 integrin to an active conformation, which enables it to bind ligands, including fibrinogen and other proteins. The translocation and conformational changes of these proteins after platelet activation facilitate the interactions of platelets with their environment, which are important processes for hemostasis and thrombosis, as well as for inflammation and atherosclerosis. We therefore measured the α-granule secretion marker P-selectin and integrin αIIbβ3. Our data demonstrated that TSLP increased the expression of P-selectin and activated integrin αIIbβ3 (Fig. 2 A, B), although the magnitude of expression was minimal. The finding that TSLP induces the activation (degranulation and integrin αIIbβ3 activation) of circulating platelets, which, in turn, could promote the release of platelet-derived inflammatory mediators that result in enhanced leukocyte recruitment [38], suggests a vicious circle that potentially aggravates the progression of atherogenesis.

Furthermore, the present study demonstrated that TSLP alone was unable to trigger potent aggregation of resting platelets; however, it significantly amplified platelet aggregation, P-selectin expression and integrin αIIbβ3 activation following stimulation with a low concentration of platelet agonists (Fig. 3). We propose that TSLP potentiated platelet activation and exhibited a magnified effect via co-stimulation with platelet agonists. These data also indicate that TSLP and platelet agonists might use the same signaling pathway to promote platelet activation and aggregation. TSLP alone was sufficient to induce the release of ATP; however, the amount of ATP release induced by TSLP stimulation was substantially lower than that induced by thrombin (Fig. 3 E). Similarly, TSLP alone induced P-selectin expression in human platelets, which indicates that TSLP also stimulated α-granule secretion. Thus, TSLP stimulates platelet secretion of both dense and α-granules and amplifies secretion-dependent platelet aggregation. The low-level secretion and minimal activation of the integrin αIIbβ3 induced by TSLP stimulation may explain why TSLP alone induces a very low level of platelet aggregation.

The TSLPR KO platelets exhibited a markedly defective aggregation response to thrombin compared with the WT platelets (Fig. 4 A, B). We also demonstrated that platelet serotonin release was markedly reduced in the TSLPR KO platelets in response to thrombin (Fig. 4C). Aggregation and secretion defects at low concentrations of agonists for thrombin receptors are frequently indicative of a defect in secretion or secretory granule content.
Although platelet aggregometry is commonly employed to assess platelet function, it cannot be used to examine platelet aggregation under flowing, pathophysiological conditions. The perfusion chamber is an ex vivo model of thrombosis that has a number of important advantages over aggregometry, including the ability to assess thrombus formation on a pathophysiologically relevant substrate and under flow conditions with different shear stresses. We therefore employed the perfusion chamber model and monitored the effects of TSLPR deficiency on thrombus formation. We clearly demonstrated that thrombus formation was markedly decreased in the TSLPR KO mice at both low and high shear rates (Fig. 4D). These data suggest that TSLP/TSLPR may promote thrombosis at both venous and arterial shear rates.

The late stages of atherosclerosis are often associated with thrombotic complications caused by vascular injury and compromised endothelial integrity [39]. To study the role of TSLP/TSLPR in mediating thrombus formation at the site of vascular injury using intravital microscopy, we observed thrombus formation in FeCl3-injured carotid arteries and found it to be defective in TSLPR KO mice. To exclude a contribution of the TSLPR deficiency in the vessel wall, WT washed platelets were transfused into the TSLPR KO mice, and we determined that the occlusion time clearly recovered: compared with the WT mice, the occlusion time was not significantly different in the TSLPR KO plus WT platelet transfusion group (Fig. 5 A). Some inflammatory cytokines can affect the platelet count; Raffaele Strippoli et al. reported that IL-6-transgenic mice treated with lipopolysaccharide showed a quantitative difference in platelets compared with wild-type mice [40]. Regarding whether the absence of TSLPR affected platelet quantity, our data demonstrated that there was no obvious difference in the platelet count in the TSLPR KO mice (Fig. 5B). Furthermore, we determined that there was also no significant difference in thrombus composition in the TSLPR KO mice (Fig. 5C). These observations indicate that the defective thrombus formation is predominantly due to the TSLPR deficiency on platelets. Our results enable us to speculate that TSLP expression in atherosclerotic lesions and its release from inflammatory cells could represent an additional local proadhesive stimulus for circulating platelets, resulting in increased platelet adhesion and aggregation at sites of vascular injury. Therefore, the data from our experiments in ex vivo perfusion chambers and the in vivo thrombosis models are consistent with our results from the in vitro platelet aggregation assays.

Our data demonstrated obviously defective expression of P-selectin and JON/A (activated integrin αIIbβ3) in the TSLPR KO platelets after stimulation with TSLP compared with the WT (Fig. 7B). These data indicate that TSLP-dependent platelet activation critically depends on binding to and signaling via the platelet TSLP receptor. Thus, we propose that the effects of TSLP/TSLPR on platelets may be the result of the promotion of platelet activation, which subsequently increases platelet α-granule release and promotes integrin αIIbβ3 activation and ligand binding. TSLP exerts broad and significant biological effects in various cell populations by binding to its receptors, which results in downstream signaling. However, TSLP-mediated cell signals remain poorly defined because their effects vary greatly between cell types.
It has been demonstrated that activation of the PI3K/Akt pathway occurs following TSLP stimulation [32,33]. We therefore investigated whether TSLP stimulates platelets via the PI3K/Akt pathway. PI3K, as well as its downstream effector Akt, plays a decisive role in the regulation of platelet function [41]. PI3K synthesizes D3-phosphoinositides, which regulate many important platelet responses, such as platelet shape changes, integrin αIIbβ3 activation, and irreversible platelet aggregation [42,43]. Akt phosphorylation appears to direct irreversible platelet aggregation via the modulation of the continued activation of αIIbβ3 [44]. In addition, multiple agonists are known to stimulate PI3K in platelets, which subsequently activates Akt via phosphorylation [45]. Both Thr308 and Ser473 phosphorylation are required for full enzymatic activity [46]. In the present study, we demonstrated that TSLP significantly increases Akt phosphorylation, an effect that was completely abrogated in TSLPR-deficient platelets (Fig. 7A). This effect could also be observed upon preincubation of human platelets with LY294002 or wortmannin (Fig. 6 A). These results indicate that TSLP activates Akt in platelets in a TSLPR- and PI3K-dependent manner. Moreover, we identified a significant reduction of TSLP-induced platelet degranulation and integrin αIIbβ3 activation after preincubation with the PI3K inhibitors LY294002 (25 μM) and wortmannin (100 nM) (Fig. 6B). We also demonstrated that the PI3K inhibitors could not completely down-regulate P-selectin expression and integrin activation; we suggest there may be other specific signaling pathways involved, which require further investigation.

In conclusion, this study demonstrated that functional TSLPRs were expressed on murine platelets. TSLP triggered platelet TSLPR-dependent PI3K/Akt signaling, which led to degranulation and integrin αIIbβ3 activation. Co-stimulation with TSLP and platelet agonists exhibited a magnified effect. We demonstrated that a TSLPR deficiency caused defective platelet aggregation and secretion and markedly attenuated thrombus growth in perfusion chambers at both low and high shear rates in the blood of TSLPR KO mice. TSLPR KO mice exhibited reduced carotid artery thrombus formation after exposure to FeCl3. Thus, the TSLP/TSLPR signaling system could play an important role in linking inflammatory vascular diseases and thrombosis.

Erratum: the first author's name was misspelled as "Jangchuan Dong"; the correct spelling is "Jiangchuan Dong".
Clonal fitness decline in somatic differentiation hierarchies

The concept of clonal fitness is fundamental to describing the evolutionary dynamics in somatic tissues. It is now well established that otherwise healthy somatic tissues become increasingly populated by expanding clones with age. However, the dynamic properties and respective fitnesses of these clones are less well understood. Here we show that, in somatic tissues organised as a differentiation hierarchy, theory predicts a natural decline of effective clonal fitness over time in the absence of additional driver events. This decline is intrinsic to the tissue organisation and can be captured quantitatively by a simple heuristic equation that is proportional to 1/time. We also show that the expected fitness decline is directly observable in human haematopoiesis. The predicted short and long term dynamics agree with in vivo observations using data on neutrophil recovery after bone marrow transplants and naturally progressing Chronic Lymphocytic Leukemia (CLL). We further show that theory predicts the existence of a long term equilibrium fitness. All CLL patients transition into a stable equilibrium fitness eventually. We find significant inter-patient variation of long term fitness and a strong correlation with disease aggressiveness. Interestingly, CLL long term fitness can be forecast based on the early stages of disease progression, suggesting a Big Bang-like model for CLL evolution.

Introduction

The expansion of clones in somatic tissues with age is a well-documented phenomenon and universal to all tissues [1][2][3]. It is a naturally expected outcome of somatic evolutionary processes, underlying tissue transformation and cancer progression [4][5][6]. However, although clonal expansions are common, cancers are overall still rare, suggesting that most clonal expansions do not progress [7]. This is an interesting evolutionary phenomenon, and the containment of clones has to be understood in the context of its somatic environment [8].

Most human tissues are hierarchically organized [9][10][11]. A comparably small population of stem cells maintains tissue homeostasis through a balance of self-renewal and differentiation, giving rise to fully functional differentiated cells [12]. Such structures minimize the accumulation of mutations while at the same time allowing the production of enormously large numbers of fully differentiated cells within a short time [13][14][15][16]. This is possible because only mutations in stem cells persist [17]. All non-stem cell derived mutations vanish in the long run, as the lifetime of any non-stem cell is finite (unless the cells acquire stem cell like properties due to mutations). While clonal dynamics have to be understood in the context of such differentiation hierarchies, we focus on the clonal dynamics within the hematopoietic system. There are three general patterns of clonal trajectories naturally occurring in such hierarchies: continued clonal expansion, clonal homeostasis, and waves of clonal extinction [11].
We investigate how a differentiation hierarchy affects clonal fitness. We show that mathematical models predict a continuous decline of clonal fitness with age in a differentiation hierarchy in the absence of additional driver events. This decline seems universal and is expected to occur even for aggressively exponentially expanding clones. In the long run, clonal dynamics approach a constant equilibrium fitness that is either positive for continued clonal growth or negative for waves of clonal extinction.

We then use data from two different patient cohorts to test our theoretical expectations directly in human hematopoiesis. We first follow neutrophil count recovery after stem cell transplantation in 19 patients with multiple myeloma who received high dose melphalan conditioning followed by the infusion of autologous stem and progenitor cells. These data represent the earliest stages of a clonal expansion, and we observe a sharp decline in clonal fitness in all patients. To test the prediction of a long-term equilibrium of clonal expansions, we analyze clonal fitness in 85 patients with chronic lymphocytic leukemia (CLL), some of whom remained untreated for up to 20 years. All patients transitioned into a stable equilibrium fitness. Furthermore, equilibrium fitness is also correlated with disease aggressiveness, which allows disease progression to be forecast early.

Clonal fitness in hierarchical tissues

The dynamics of cells in hierarchically organized tissues can be modeled mathematically by compartmentalized structures. Each compartment represents cells at a certain stage of differentiation. Cells move at predefined rates between compartments. Usually, homeostasis in such hierarchies is maintained by a self-renewing population of stem cells that give rise to all fully differentiated cells within a tissue. The deterministic dynamics of cells in such hierarchies can then be captured by a set of differential equations [11], given by

dN_i(t)/dt = (1 − 2ϵ − λ) r_i N_i(t) + 2ϵ r_{i−1} N_{i−1}(t),   0 ≤ i ≤ k,   (1)

with N_{−1}(t) ≡ 0. Here, N_i(t) denotes the number of cells in compartment i at time t (0 ≤ i ≤ k), ϵ is the cell differentiation probability, λ is the cell death probability, and r_j is the proliferation rate of cells in compartment j. Thus, the probability of self-renewing is (1 − ϵ − λ). We introduce the notation α for the effective net growth rate per compartment as

α_i = r_i (1 − 2ϵ − λ).
We are interested in the dynamics of clones within such differentiation hierarchies. If one places a cell with certain proliferation parameters at any stage within this hierarchy, what are the dynamics of its progeny? Suppose N_ij is the size of that clonal population in compartment i when the first cell of that clone originated in compartment j. Formally, this corresponds to the initial conditions N_jj(0) = 1 and N_ij(0) = 0 for i ≠ j. Together with equations (1), this leads to a solution of the form

N_ij(t) = Σ_{c=j}^{i} Γ_c e^{α_c t},   (2)

where the prefactors Γ_c are constants fixed by the initial conditions, the proliferation rates r_j, …, r_i and the differentiation probability ϵ. Equation (2) allows for three different clonal trajectories that depend on the value of the differentiation rate ϵ. For ϵ < 0.5, cell numbers are increasing exponentially across compartments. For ϵ = 0.5, we have homeostasis, where the cell number reaches a constant equilibrium. Wherever ϵ > 0.5, we observe waves of clonal extinction traveling through compartments; these waves result from a lack of sufficient self-renewal, causing clonal extinction in the long run, see Figure 2a for an example.

It is then natural to ask: if we have in vivo time series data and want to estimate the fitness of clonal expansions, with potentially important prognostic consequences for patients, what should we expect? Theory predicts that the experimentally observed fitness will critically depend on the exact timing of the experiment, even though the underlying intrinsic parameters of the clonal expansion remain unchanged.

To show this formally, we define the fitness s of cells in a differentiation hierarchy as the change of cell numbers within a time interval Δt, normalized by the mean population size N̄,

s = [N(t + Δt) − N(t)] / (N̄ Δt).   (4)

A value of s = 0 corresponds to homeostasis, s > 0 to growing clonal expansions and s < 0 to shrinking clonal populations. In the limit of Δt → 0 and normalizing by the population size N, this fitness becomes the per capita growth rate of cells, s = (1/N) dN(t)/dt. Using equation (4) and considering any intermediate compartment i in equations (1), we have

s_i(t) = α_i + 2ϵ r_{i−1} N_{i−1}(t)/N_i(t).   (5)

Substituting equation (2) and simplifying, we obtain the clonal fitness as an explicit ratio of sums of exponentials,

s_i(t) = α_i + 2ϵ r_{i−1} (Σ_{c=j}^{i−1} Γ̃_c e^{α_c t}) / (Σ_{c=j}^{i} Γ_c e^{α_c t}),   (6)

with constants Γ̃_c determined analogously to the Γ_c. We show two representative examples of the dynamic behavior of equation (6) in Figure 2b. Although all intrinsic parameters of the clonal population, e.g. the proliferation rate and differentiation probability, are kept constant across all compartments, fitness is predicted to decline over time in the absence of additional events. This decline is universal. It occurs for exponentially expanding populations as well as for waves of clonal extinction (Figure 2a & 2b). Furthermore, the decline decelerates with time. For sufficiently long times, equation (6) predicts the existence of two distinct equilibria: clonal expansions (ϵ < 0.5) and waves of clonal extinction (ϵ > 0.5) within a differentiation hierarchy ultimately reach

s_∞ = α_i = r_i (1 − 2ϵ − λ)  for ϵ < 0.5,   s_∞ = α_j = r_j (1 − 2ϵ − λ)  for ϵ > 0.5.   (7)

Equations (7) show two important differences between the long term fitness of clonal expansions and extinctions. Firstly, the fitness of clonal expansions always remains positive, whereas the long-term fitness of waves of clonal extinction is negative. Secondly, clonal expansions are ultimately dominated by the proliferation of the most differentiated cells r_i, whereas clonal extinctions depend on the proliferation of the slowest, least differentiated cells r_j.
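The qualitative behaviour of equations (1)-(7) can be checked numerically. The following is a minimal sketch, assuming illustrative values for ϵ, λ and the compartment rates r_c (these are not parameters fitted in this study); it integrates the compartment system of equations (1) and recovers the 1/t-like fitness decline of equation (6) in a downstream compartment.

import numpy as np
from scipy.integrate import solve_ivp

eps, lam, k = 0.45, 0.0, 8          # differentiation/death probabilities, number of compartments
r = 1.0 * 1.3 ** np.arange(k)       # proliferation rates increasing along the hierarchy
alpha = (1 - 2 * eps - lam) * r     # effective net growth rates, one per compartment

def rhs(t, N):
    # equations (1): net growth within a compartment plus influx from upstream
    dN = alpha * N
    dN[1:] += 2 * eps * r[:-1] * N[:-1]
    return dN

N0 = np.zeros(k); N0[0] = 1.0       # a single founder cell in compartment j = 0
sol = solve_ivp(rhs, (0.0, 30.0), N0, dense_output=True, rtol=1e-8, atol=1e-12)

t = np.linspace(1.0, 30.0, 300)
N_last = sol.sol(t)[-1]
s = np.gradient(np.log(N_last), t)  # per capita growth rate, the Δt → 0 limit of eq. (4)
print(s[0], s[-1], alpha[-1])       # s declines over time toward the equilibrium of eq. (7)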
Although exact, the clonal fitness function, equation (6), is rather involved and in principle depends on all microscopic parameters, which makes direct comparisons to time series data with unknown microscopic parameters difficult. Heuristically, equation (6) can be approximated by

s(t) = a/t + b,   (8)

with only two free parameters, a and b (Figures 1b & 2c). The first parameter a corresponds to the effective rate of fitness decline, and the second parameter b represents the equilibrium fitness of the clonal dynamics. This heuristic approximation captures the dynamics of the clonal fitness well and allows us to numerically fit time series data of clonal dynamics to derive estimates for the rate of decline a and the long term fitness b, Figure 2c & 2d.

Equation (8) also allows us to estimate the time τ until equilibrium fitness is reached. We say that we are in equilibrium if the change in clonal fitness is below a certain threshold h. Since equation (8) gives |ds/dt| = a/t², setting a/τ² = h yields

τ = √(a/h).   (9)

Below, we will use h = 0.01 as the threshold for the equilibrium phase.

Neutrophil recovery after stem cell transplantation

Our theoretical modelling suggests that the fitness of clonal expansions declines sharply early on and levels off towards the equilibrium value for sufficiently long times. To test the prediction of a sharp initial fitness decline of clonal expansions in human haematopoiesis, we measured the neutrophil counts in 19 patients with multiple myeloma who received autologous stem and progenitor cells (CD34+) after myeloablative conditioning with melphalan. This is standard therapy for fit patients with multiple myeloma: patients initially receive induction therapy and, following disease control, have autologous stem and progenitor cells collected via growth factor mobilization (e.g., granulocyte colony stimulating factor). Typically 3 × 10^6 CD34 cells are required for a single stem cell transplant. Initially, the neutrophil (and other blood cell) counts fall due to myeloablation, but then, typically around 13 days after the stem cell infusion, neutrophil recovery is observed. Thus patients are monitored carefully for their neutrophil recovery during this period. For a number of days, the neutrophil count will be too low for accurate quantitation; this is known as the neutrophil nadir. Data from 19 such patients were captured for this analysis.
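In practice, fitting equation (8) to a measured count trajectory is straightforward. Below is a minimal sketch, assuming hypothetical daily neutrophil counts (the values are illustrative, not patient data); it computes the discrete fitness of equation (4) and estimates a, b and the equilibrium time τ of equation (9).

import numpy as np
from scipy.optimize import curve_fit

def discrete_fitness(t, N):
    """Equation (4): change in counts per unit time, normalized by the mean count."""
    dN, dt = np.diff(N), np.diff(t)
    N_mean = 0.5 * (N[1:] + N[:-1])
    return t[:-1] + 0.5 * dt, dN / (dt * N_mean)

def heuristic(t, a, b):
    return a / t + b               # equation (8)

# hypothetical daily neutrophil counts (10^9 cells/L), days after stem cell infusion
t = np.arange(11.0, 22.0)
N = np.array([0.1, 0.3, 0.9, 2.0, 3.2, 4.1, 4.7, 5.0, 5.2, 5.3, 5.3])

tm, s = discrete_fitness(t, N)
(a, b), _ = curve_fit(heuristic, tm, s, p0=(1.0, 0.0))

h = 0.01                           # equilibrium threshold used in the main text
tau = np.sqrt(a / h)               # equation (9); assumes the fitted a is positive
print(f"a = {a:.2f}, b = {b:.3f}, tau = {tau:.0f} days")

Applied per patient, the fitted b corresponds to the long term fitness and τ to the time to equilibrium reported below.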
The recovery of neutrophils after depletion of the bone marrow resembles a clonal expansion in the background of an empty differentiation hierarchy. Neutrophils were measured daily within the first 3 weeks after the stem cell transplant, allowing us to use equation (4) to calculate the change of clonal fitness over time across patients. Time trajectories were fitted using the heuristic approximation (8) for each patient separately. All patients received their stem cell infusion on day 0, and neutrophil counts of the re-expanding transplant usually remain below the detection threshold for the first 10 days. After day 10, neutrophil counts are detectable and rise steadily. As predicted by the modelling, the fitness of these early clonal expansions decreases rapidly in all but two patients, Figure 3a & 3b (see also Figure S2 for a summary of all patients). Typically, neutrophil fitness becomes negative at day 14 to 15 after the autologous stem cell transplant. However, the time to equilibrium is considerably longer, τ_exp = 44 (+10/−9) days, with considerable variation between patients (0 to 78 days), Figure 3c. This is in keeping with clinical observations that patients may take several weeks for their neutrophil count to return to normal and stay there.

Equilibrium clonal fitness of Chronic Lymphocytic Leukemia

To test the prediction of the long-term equilibrium fitness of clonal expansions in a differentiation hierarchy, we analyzed the trajectories of 85 naturally progressing patients with chronic lymphocytic leukemia (CLL), originally published in [18]. CLL patients are often monitored for many years without intervention, enabling us to observe the long-term change in clonal fitness retrospectively. White blood cell counts in this cohort were followed for up to 20 years, often with multiple measurements per year. In the original study, the patient cohort had been retrospectively separated into three categories: exponential growth, logistic growth, and indeterminate growth [18]. These three categories correlated with the aggressiveness of the disease. Patients exhibiting exponential growth patterns were more likely to require treatment compared to patients with logistic clonal growth. Exponential growth also correlated with a higher number of detectable driver mutations compared to logistically progressing disease [18]. Patients with indeterminate growth patterns had significantly shorter follow-up times, see Supplementary Figures S11-S14. Thus, this category likely contains both logistic and exponential growth patterns that were not yet distinguishable in the original study. In our subsequent analysis, we use the same grouping of patients into the three growth categories as in the original study.

As predicted by our theory, a constant clonal equilibrium fitness is reached in all patients, independent of classification. Summaries of all clonal fitnesses over time are shown in Supplementary Figures S3-S14. The initial fast decline of the clonal fitness is less pronounced in most CLL patients compared to the clonal fitness of neutrophils after bone marrow transplants. This is likely due to the normal physiologic expansion of neutrophils after transplantation compared to the oncogene driven growth of the cells in CLL. The sharp initial drop is most pronounced at the earliest stages of clonal expansions; in most CLL patients, the disease had been present for some time before the patients entered monitoring and data collection.
We then ask whether differences in disease aggressiveness are also reflected in the long-term fitness of clonal expansions. By fitting equation (8) to individual fitness trajectories, we estimate the long-term fitness b for all 85 patients. A summary of these fitnesses is shown in Figure 4. The equilibrium fitnesses of logistically and exponentially growing clones differ significantly (p = 2 × 10^−5, t-test), Figure 4c. The median clonal fitness of exponential clones is b_exp = 0.35 (+0.2/−0.09) per year, whereas for logistic clones we find b_log = 0.002 (+0.11/−0.086) per year. The clonal fitness of exponentially growing tumours corresponds to an approximate exponential growth rate of the disease, allowing us to estimate doubling times, which are in the range of T_d = 1.9 (+0.8/−0.7) years and strongly correlate with adverse outcome [19,20]. Furthermore, clonal fitness is always positive in patients exhibiting exponential growth, whereas for logistically growing clones fitness can be negative or positive, and the variation between patients is higher (Figure 4c). However, most fitnesses in the logistic growth category cluster around b = 0, suggesting either slow expansions if b_log is small and positive, or an approximately constant disease if b_log is small and negative. There are 3 patients in the logistic category with b_log < −0.2 per year, suggesting possible clonal extinction of CLL in these patients in the long term. However, the extinction time is on the scale of decades, and residual disease might remain detectable throughout life. The clonal fitness of patients with indeterminate growth patterns lies between logistic and exponential growth (b_ind = 0.16 (+0.27/−0.13)) and is not significantly different from either (Supplementary Figure S16a). This is in line with the idea that the indeterminate category is a mix of patients with logistic or exponential growth patterns, given the significantly shorter follow-up times.

Clonal fitness forecast in CLL

Having shown that the equilibrium clonal fitness strongly correlates with the retrospective grouping of patients' disease trajectories, we ask if we can identify exponential or logistic growth early. More precisely: can the change in clonal fitness at early time points forecast long-term clonal fitness within patients?

We showed that within our mathematical model such a forecast is possible (Figures 1b and 2d). If we fit the heuristic equation (8) to the early dynamics of either exponentially growing clones or waves of clonal extinction with known model parameters, we can compare the estimated b_fit to the exact b_true given by equation (6). These estimates correlate well with the theoretical prediction (Spearman Rho = 0.993). We tend to slightly underestimate the rate of clonal expansions and slightly overestimate the rate of clonal extinctions. However, the estimates conserve signs, allowing us to distinguish exponential expansions and clonal extinctions in principle.
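A minimal sketch of such a forecast, assuming a synthetic fitness trajectory generated from equation (8) plus noise (illustrative values only): fit the heuristic to a truncated early portion of the series and compare the estimated equilibrium fitness b to the estimate from the complete trajectory.

import numpy as np
from scipy.optimize import curve_fit

def heuristic(t, a, b):
    return a / t + b  # equation (8)

rng = np.random.default_rng(42)
t = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 9.0, 12.0, 15.0, 18.0])  # years
s = heuristic(t, a=0.8, b=0.35) + rng.normal(0.0, 0.03, t.size)  # synthetic fitness (per year)

(a_all, b_all), _ = curve_fit(heuristic, t, s)       # complete trajectory
(a_5, b_5), _ = curve_fit(heuristic, t[:5], s[:5])   # forecast from the first 5 points
print(f"b (all points) = {b_all:.2f}, b (first 5 points) = {b_5:.2f}")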
We now test whether we can forecast disease progression similarly in CLL patients. We use the first 5, 7, and 10 data points available for each patient and estimate the equilibrium fitness by fitting equation (8) to those early data points, Supplementary Figures S4-S6, S8-S10, and S12-S14. The resulting estimates of the equilibrium fitness b can then be compared to the estimated long-term fitness derived from the complete disease trajectories. All forecasts correlate with the corresponding clonal fitnesses from complete disease trajectories (Figures 4d and S15). Using the first 10 data points, we find a strong positive correlation (Spearman Rho = 0.56, p = 1.4 × 10^−6). Even when using only the first 5 data points for each patient, the correlation remains positive and significant (Spearman Rho = 0.43, p = 5 × 10^−4). The strong positive correlation of early and late fitness estimates, as well as the existence of a stable equilibrium fitness in virtually all patients, suggest that the average fitness of CLL is determined early and, perhaps surprisingly, does not change significantly throughout disease progression. This agrees with very recent observations that even the transition into Richter syndrome, the transformation of CLL into an aggressive diffuse B-cell lymphoma, is already encoded early in the evolutionary history of CLL patients [21].

Discussion

The concept of clonal fitness is fundamental to evolutionary dynamics. Yet direct measures of fitness remain difficult in biological systems [22]. In somatic evolution, clones with increased fitness populate all aging tissues [1,3,23]. A quantitative understanding of these fitness changes is complicated by the fact that somatic tissues are highly structured and regulated. Most clones, even those with high initial fitness, usually do not expand beyond certain limits. It is thus natural to expect that any observable clonal fitness is not a constant numerical value assigned to a specific clone a priori, but a dynamic quantity that changes as a clone expands within its changing environment.

Here, we have shown that differentiation hierarchies naturally reduce clonal fitness over time. This decline is universal and can be observed both during the expansion of healthy neutrophil populations after stem cell transplant and in transformed (malignant) CLL clones. The fitness decline follows a simple heuristic time dynamics that is proportional to 1/time, with only two free effective parameters, the initial fitness decline and the long term equilibrium fitness. Estimating these parameters in 85 patients with naturally progressing CLL, we found that long term fitness correlates strongly with disease aggressiveness and, more importantly, that long term fitness can be forecast early. This strongly suggests that in the case of CLL, clonal fitness is determined early and does not change much during disease progression. This is in line with previous observations that the acquisition and expansion of additional subclonal driver mutations in CLL is rare [18] and that even the transition into Richter syndrome, the transformation of CLL into an aggressive diffuse B-cell lymphoma, is already encoded early in the evolutionary history of CLL [21]. This has important implications for monitoring and possibly stratifying patients into different risk and treatment groups, ideally by combining genomic information and longitudinal measures of clonal expansion rates over time.
Whether this Big Bang like dynamics [24] and its implications are unique to CLL or are more prevalent across other liquid and solid tumours remains an open question. All points fitting t e x i t s h a 1 _ b a s e 6 4 = " N y + S a o B M VN P k o 1 b R r M z j Z h N x V f g = " > A A A B 6 n i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e z 6 P g a 9 e I x o H p A s Y X Y y m w y Z n V 1 m e o W w 5 B O 8 e F D E q 1 / k z b 9 x k u x B o w U N R V U 3 3 V 1 B I o V B 1 / 1 y C k v L K 6 t r x f X S x u b W 9 k 5 5 d 6 9 p 4 l Q z 3 m C x j H U 7 o I Z L o X g D B U r e T j S n U S B 5 K x j d T P 3 W I 9 d G x O o B x w n 3 I z p Q I h S M o p X u s X f a K 1 f c q j s D + U u 8 n F Q g R 7 1 X / u z 2 Y 5 Z G X C G T 1 J i O 5 y b o Z 1 S j Y J J P S t 3 U 8 I S y E R 3 w j q W K R t z 4 2 e z U C T m y S p + E s b a l k M z U n x M Z j Y w Z R 4 H t j C g O z a I 3 F f / z O i m G V 3 4 m V J I i V 2 y + K E w l w Z h M / y Z 9 o T l D O b a E M i 3 s r Y Q N q a YM b T o l G 4 K 3 + P J f 0 j y p e h f V 8 7 u z S u 0 6 j 6 M I B 3 A I x + D B J d T g F u r Q A A Y D e I I X e H W k 8 + y 8 O e / z 1 o K T z + z D L z g f 3 w v g j a g = < / l a t e x i t > t e x i t s h a 1 _ b a s e 6 4 = " B 7 u z o 1 d r a o I 5 e / I 8 N K u 6 e a w f 3 R w W a h d T O 7 J g D + y D I j D B C a i B K 1 A H D Y D B A 3 g C L + B V e 9 S e t X f t Y 9 K a 0 a Y z u + B P a F / f a 0 C g T g = = < / l a t e X P e 5 6 0 F J 5 / Z h 1 9 w P r 4 B 0 5 u N g w = = < / l a t e x i t > t = 0 < l a t e x i t s h a 1 _ b a s e 6 4 = " 7 2 o N K b y b j P 1 F C A z I r E x V L H 4 g e V g = " > A A A B 6 n i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e y K r 2 P Q i 8 e I 5 g H J E m Y n s 8 m Q 2 d l l p l c I S z 7 B i w d F v P p F 3 v w b J 8 k e N L G g o a j q p r s r S K Q w 6 L r f T m F l d W 1 9 o 7 h Z 2 t r e 2 d 0 r 7 x 8 0 T Figure 1 . Figure 1.Clonal expansions and differential fitness in hierarchically organised tissues.Conceptual figure showing (a) clonal expansions in a differentiation hierarchy and (b) the concept and dynamics of the differential fitness of these expanding clones.Theory predicts a decline of clonal fitness over time that is proportional to 1/time and can be heuristically described by the simple equation s(t) = a t + b with a long term equilibrium clonal fitness b. 5 Figure 2 . Figure 2. Representative realisations of theoretical predictions of clonal expansions and differential fitnesses in hierarchical tissues.(a) Realisations of eq.(2) in compartment 7. Dashed line represents an exponentially growing clone (ϵ = 0.45) and the line shows a wave of clonal extinction ϵ = 0.85.(b) Realisation of differential fitness (eq.(4)) for ϵ = 0.55 (black line), and ϵ = 0.45 (dashed line).(c) Clonal fitness decline is well described by the heuristic approximation s(t) = a/t + b.(d) Long term equilibrium fitness can be forecast using data on the early expansion of clones. Figure 3 . Figure 3. Neutrophil recovery after bone marrow transplant.a),b) Two examples of typical differential fitness trajectories of Neutrophil recovery after stem cell transplant.Differential fitness is following the heuristic 1/t decline and drops sharply within a few days.c) Time to equilibrium is estimated to be τ = 44 +10 −9 days.d) Long term clonal fitness of Neutrophil recovery becomes negative b = −1.29 +0.66 −0.49 Figure S2 . Figure S2.Neutrophil differential fitness following a bone marrow transplant. 
Figure S3. CLL patients with logistic growth pattern. The fitting is based on all available clinical observations. Figures S4-S6, S8-S10, and S12-S14 show the corresponding fits using the first 5, 7, and 10 data points, respectively.
2023-10-05T13:11:21.834Z
2023-09-29T00:00:00.000
{ "year": 2023, "sha1": "9d707fefb1d85834a6930bcc49699130fa34138a", "oa_license": "CCBYNC", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2023/09/29/2023.09.27.559763.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "9d707fefb1d85834a6930bcc49699130fa34138a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
91187595
pes2o/s2orc
v3-fos-license
Facial reanimation with masseter nerve–innervated free gracilis muscle transfer in established facial palsy patients

Background: The masseter nerve is a useful donor nerve for reconstruction in patients with established facial palsy, with numerous advantages including low morbidity, a strong motor impulse, high reliability, and fast reinnervation. In this study, we assessed the results of masseter nerve–innervated free gracilis muscle transfer in established facial palsy patients. Methods: Ten patients with facial palsy who received treatment from January 2015 to January 2017 were enrolled in this study. Three patients received masseter nerve–only free gracilis transfer, and seven received double-innervated free gracilis transfer (masseter nerve and a cross-face nerve graft). Patients were evaluated using the Facial Assessment by Computer Evaluation software (FACEgram) to quantify oral commissure excursion and symmetry at rest and when smiling after muscle transfer. Results: The mean time between surgery and initial movement was 167.7 days. A statistically significant increase in the distance of oral commissure excursion at rest and when smiling was seen after muscle transfer, and a statistically significant increase was observed in symmetry when smiling. Terzis' functional and aesthetic grading scores showed significant improvements postoperatively. Conclusions: Masseter nerve innervation is a good option with many uses in established facial palsy patients, and for some conditions it is the first-line treatment. Free gracilis muscle transfer using the masseter nerve has excellent results with good symmetry and an effective degree of recovery.

Free muscle transfer innervated by a cross-face nerve graft is traditionally considered the first choice as a nerve source [1,2]. Surgeons can create a spontaneous and synchronous smile with a cross-face nerve graft using the contralateral facial nerve (cranial nerve [CN] VII). However, the presence of two nerve coaptations leads to certain drawbacks, such as low predictability and consistency of muscle contraction [3]. Furthermore, this technique involves 2-stage surgery, and patients must wait for several months after the first stage for the procedure to be completed. In contrast, functional gracilis muscle transfer (FGMT) using the masseter nerve is now gaining popularity. The masseter nerve has a greater axonal load than cross-face nerve grafts, resulting in stronger motor innervation [4]. Furthermore, it causes less axonal loss than cross-face nerve grafts because one fewer nerve coaptation is required [5]. FGMT using the masseter nerve can be a good alternative if the contralateral facial nerve is damaged or inappropriate for use, especially because it is done in a single stage. This study aimed to analyze the effectiveness and outcomes of single-stage free gracilis muscle transfer using the masseter nerve as the main neurotizer. An analysis was carried out to explore whether FGMT using the masseter nerve was able to provide appropriate motor power and whether it improved facial function and symmetry.

METHODS

A retrospective analysis was performed of patients who received treatment from January 2015 to January 2017. Ten patients who underwent FGMT using the masseter nerve as the main nerve source were enrolled in this study. The demographic characteristics of all patients were evaluated, as well as the causes of facial paralysis, the time interval between surgery and initial movement, and the disease period.
Preoperative and postoperative photographs were analyzed using FACEgram (Facial Assessment by Computer Evaluation), a software program used to calculate the smile excursion (the distance from the midline of the lower lip to the oral commissure) and the angle of oral commissure elevation on the affected and unaffected sides (Fig. 1). The ratio of the excursion distances on the two sides was calculated and used to evaluate facial symmetry. The preoperative and postoperative videos and photographs of each patient were reviewed by an investigator using Terzis' functional and aesthetic grading scale (Table 1).

Fig. 1. Photo analysis of a patient. Note that the oral commissure excursion is the measurement from the midline of the lower lip to the oral commissure (A-line). Symmetry was measured according to the ratio between the A-line and the B-line (the contralateral side of the paralyzed face). FACEgram, Facial Assessment by Computer Evaluation software.

Double-innervated gracilis muscle transfer was indicated for complete facial palsy patients who wanted a natural smile with good excursion. The indications for masseter nerve-only innervated gracilis muscle transfer were old age, desire for a less invasive procedure, significant comorbidities, and unavailability of the contralateral facial nerve.

Surgical techniques

A facelift-type incision was made on the paralyzed face. A skin flap was elevated above the deep subcutaneous tissue and dissected around the nasolabial fold through the sub-superficial muscular aponeurotic system (SMAS) plane. The masseter nerve was found in the muscle parenchyma. We used the subzygomatic triangle introduced by Collar et al. [6] to find the masseter nerve. This triangle is formed by the inferior border of the zygomatic arch superiorly, a vertical line through the anterior border of the temporomandibular joint posteriorly, and the frontal branch of the facial nerve inferiorly and anteriorly. The average time to find the masseter nerve was 15 minutes, and the depth of the masseter nerve was roughly 1 to 2 cm. A nerve stimulator was used to confirm the correct nerve. A gracilis muscle flap, measuring 10 × 5 cm on average, was harvested at the medial thigh in the standard manner. The gracilis muscle was dissected, and the flap vessels and obturator nerve were dissected with sufficient pedicle length. The gracilis muscle was cut using a GIA device-reloadable linear cutter with a safety lock-out device (Ethicon Endo-Surgery, LLC, Dülmen, Germany). Vascular anastomosis was performed in the usual manner; the donor vessels were all facial vessels. End-to-end neurorrhaphy of the masseter nerve and obturator nerve was performed under microscopy. Seven patients underwent a double-innervation procedure using a cross-face nerve graft. On the healthy side of the face, an incision was made on the cheek and the buccal branch of the facial nerve was found using a nerve stimulator, with which we checked the movement of the zygomaticus major muscle. At the same time, a second team harvested the sural nerve. The mean length of the sural nerve graft was 18 cm. The sural nerve graft was coapted with the buccal branch of the facial nerve in an end-to-end manner and with the masseter nerve in an end-to-side manner (Fig. 2). Finally, the gracilis muscle was fixed at the periosteum of the lateral zygomatic bone and the medial side of the nasolabial fold. A nasolabial incision was performed in elderly patients and in patients with severe nasolabial disruption.
De-epithelialization of the nasolabial fold skin and anchoring sutures to the base of the nasolabial fold were performed for nasolabial fold formation. In relatively young patients who did not want scar formation, we made a transverse incision on the lip for muscle fixation. The proximal gracilis muscle was sutured to the periosteum or the SMAS layer with 3-0 absorbable sutures near the zygomatic arch, and the distal gracilis muscle was sutured to the modiolus, upper lip, and lower lip with 3-0 absorbable sutures. All patients were given instructions to use an external electrical muscle stimulator to rehabilitate the transferred muscle starting 2 weeks after surgery, and patients were also educated about the use of biofeedback with a mirror to obtain a spontaneous and natural smile.

Data analysis

Statistical analysis was performed using SPSS version 21.0 (IBM Corp., Armonk, NY, USA). The paired two-tailed t-test was used to compare preoperative and postoperative smile excursion distances and angles on the affected and unaffected sides. The symmetry score was calculated as the ratio between the affected and unaffected sides, and the preoperative and postoperative scores at rest and when smiling were compared using the paired two-tailed t-test. Differences between Terzis' grading scores were evaluated with the Mann-Whitney test.

RESULTS

Ten patients treated at Asan Medical Center between January 2015 and January 2017 were evaluated. Three patients received FGMT with only the masseter nerve, while seven patients received double-innervation FGMT. Patients' demographic and clinical data are presented in Table 2. The average time between facial palsy onset and FGMT was 20.7 years (range, 2-60 years). The average time to initial muscle movement after surgery was 167.7 days (range, 83-360 days). Fig. 3 summarizes the pathophysiology of facial palsy in our patients; tumor resection was the most frequent cause of facial palsy (5/10).

Table 2. Patients' demographic and clinical data. Values are presented as mean or average (range). FGMT, functional gracilis muscle transfer.

Fig. 2. Single-stage surgery with double-innervated free functional muscle transfer innervated by the masseter nerve (end-to-end anastomosis) and the cross-face nerve graft (end-to-side anastomosis).

Terzis' functional and aesthetic grading scores were evaluated preoperatively and postoperatively. In both groups, there were significant changes between the preoperative and postoperative grading scores (paired two-tailed t-test, P = 0.02) (Table 3). Using FACEgram, we evaluated the oral commissure position, the oral commissure excursion, and symmetry at rest and when smiling. In the patients who received a cross-face nerve graft, a significant increase was observed in oral commissure excursion at rest and when smiling (paired two-tailed t-test, P = 0.02 at rest and P = 0.005 when smiling). Symmetry was calculated as the ratio of the oral commissure excursion on the affected side to that on the unaffected side. Significant improvement was found in symmetry when smiling (paired two-tailed t-test, P = 0.003). However, no statistically significant difference was found in the degree of symmetry at rest (Table 3).

DISCUSSION

The most important aspect of facial reanimation surgery is to choose the most appropriate surgical option. Numerous surgical techniques have been developed, and still others are being innovated [7,8]. Conventionally, FGMT using a cross-face nerve graft is the gold-standard treatment [1].
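As an illustration of the outcome measures, the sketch below shows how the symmetry ratio and the paired two-tailed t-test described above could be computed. This is a hypothetical Python example rather than the study's actual analysis (which used SPSS), and the measurement values are invented.

import numpy as np
from scipy.stats import ttest_rel

def symmetry_ratio(affected_mm, unaffected_mm):
    # Ratio of oral commissure excursion: affected side over unaffected side.
    return np.asarray(affected_mm) / np.asarray(unaffected_mm)

# Hypothetical excursion measurements (mm) for seven patients.
pre_affected = np.array([2.1, 1.8, 3.0, 2.5, 1.2, 2.8, 2.0])
pre_healthy = np.array([9.5, 8.7, 10.2, 9.9, 8.1, 10.5, 9.0])
post_affected = np.array([6.8, 5.9, 7.5, 7.1, 5.2, 8.0, 6.4])
post_healthy = np.array([9.4, 8.8, 10.0, 9.8, 8.3, 10.4, 9.1])

pre_sym = symmetry_ratio(pre_affected, pre_healthy)
post_sym = symmetry_ratio(post_affected, post_healthy)

# Paired two-tailed t-test on pre- vs postoperative symmetry ratios.
t_stat, p_value = ttest_rel(pre_sym, post_sym)
print(f"mean symmetry: pre {pre_sym.mean():.2f}, post {post_sym.mean():.2f}, "
      f"p = {p_value:.4f}")

A ratio approaching 1.0 indicates that excursion on the affected side matches the unaffected side, which is the sense in which symmetry improvement is reported above.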
FGMT using a cross-face nerve graft consists of a 2-stage procedure, with primary cross-face nerve graft surgery and subsequent free muscle transfer surgery performed 6 to 12 months later. FGMT using a cross-face nerve graft can create a spontaneous and natural smile because the transferred muscle is innervated by the contralateral facial nerve (CN VII). Despite its advantages, it also has multiple drawbacks. First, the length of the nerve can be an obstacle to nerve regeneration. Another shortcoming is axonal loss due to the presence of two anastomosis sites. This can result in a long denervation time, which can cause muscle atrophy or sometimes even a catastrophic result, such as immobility of the muscle flap [3]. FGMT using the masseter nerve has recently become more popular as a technique that can achieve relatively consistent and strong muscle contractions because it uses the strong axonal load of the masseter nerve [4,5,9]. According to a study using electron microscopy, the average axonal count of the masseter nerve is 1,542, compared to 834 for the facial nerve and a reported number of 100 to 200 for the distal end of cross-face nerve grafts [5]. Furthermore, FGMT using the masseter nerve requires a single nerve coaptation, and less axonal loss occurs due to the relatively short length of the nerve. In this study, FGMT using the masseter nerve showed satisfactory results in terms of excursion, leading to improvements in symmetry (Fig. 4). Double innervation was introduced by Biglioli et al. [8]. The advantage of double innervation is that it guarantees strong nerve input by the masseteric nerve, allowing motor innervation to be maintained even if the cross-face nerve graft fails. Furthermore, nerve innervation from a cross-face nerve graft has the potential to enable a spontaneous smile through neural supercharge (Fig. 5) [8,10]. In FGMT with double innervation, controversy exists regarding which nerve is the main neurotizer of the transferred muscle and regarding the timing of reinnervation by the two nerves. Biglioli et al. [8] recorded motor potentials during electrical stimulation of the contralateral facial nerve using a coaxial needle inserted into the transferred muscle in patients who received double-innervated FGMT, and observed the excitability of the grafted facial nerve fibers. However, they reported that they failed to check the masseter muscle because of artifacts. Further studies with electromyography, nerve conduction velocity evaluation, or functional magnetic resonance imaging will be needed to identify which nerve is the main neurotizer. There was a clinically observable reduction of excursion with mastication at the time when innervation from the cross-face nerve graft was transmitted to the muscle in patients with double-innervated FGMT. This might be another phenomenon explained by cerebral adaptation theory, and it is likely that the smile center of the cerebral cortex shifts from the facial movement center to the jaw muscle center. Using the masseter nerve as the main neurotizer has certain drawbacks. The masseter nerve is a branch of CN V and requires a primary masticatory action to make a smile, which can be an obstacle to spontaneous smiles, unlike what occurs when a cross-face nerve graft is used. However, according to a study by Manktelow et al. [11], spontaneous smiles occurred in 59% of patients who underwent muscle transfer using the masseter nerve after repeated practice and training.
They explained this result through the concept of cerebral adaptation, according to which repeated practice induces neural connections between the CN V cortical center and the CN VII cortical center in the cerebral cortex, thereby resulting in a natural smile without masticatory action. Lifchez et al. [12] also reported that two of three patients with Möbius syndrome who underwent free muscle transfer using the masseter nerve achieved the ability to smile independently of jaw closure. Bae et al. [9] evaluated 166 free gracilis muscle flaps in children with facial palsy to compare outcomes between cross-face nerve grafts and the motor nerve to the masseter as the nerve source. They concluded that using the masseter nerve as the nerve source led to better excursion than the cross-face nerve grafts. The choice of whether to make an incision on the nasolabial fold or the vermilion is also an important factor to consider in muscle fixation. In this study, the muscle was fixed to the modiolus using a facelift-type incision without an additional incision on the nasolabial fold or vermilion. When the modiolus could not be reached using a hand tie, a tie was made using a knot pusher. However, smile excursion occurred at unwanted sites in some cases. In these cases, an incision was made on the nasolabial fold to redo the muscle fixation, and nasolabial fold formation was performed by de-epithelialization of part of the skin. Since nasolabial fold scars are relatively favorable, rigidly fixing the muscle through a nasolabial fold incision in the initial surgical procedure can also be an acceptable option. Another possible problem is muscle atrophy, which can be caused by the absence of nerve stimulation during the reinnervation period following muscle transfer. Masseter nerve innervation might bring about relatively rapid nerve regeneration compared to cross-face nerve grafts, thereby reducing the incidence of muscle atrophy. We also made use of an external muscle stimulator to prevent such incidents. All patients were instructed to use an external electrical muscle stimulator to rehabilitate the transferred muscle starting 2 weeks after surgery. The regimen was 3 sessions per day of 15 minutes each to stimulate the muscle. We expect that using a muscle stimulator can reduce atrophy of the transferred muscle during reinnervation. However, further study will be needed to evaluate the effectiveness of rehabilitation therapy using an external muscle stimulator after free functional muscle transfer. The main limitations of this study are its retrospective nature and the relatively small number of patients. The relatively short follow-up period could be another limitation. An objective evaluation of the spontaneity of smiles was not performed in this study. In the future, functional magnetic resonance imaging, electromyography, and nerve conduction tests should be used to investigate cortical adaptation and the spontaneity of smiles.

Conflict of interest

No potential conflict of interest relevant to this article was reported.

Ethical approval

The study was approved by the Institutional Review Board of Asan Medical Center (IRB No. 2018-1272) and performed in accordance with the principles of the Declaration of Helsinki. Written informed consent was obtained.

Patient consent

The patients provided written informed consent for the publication and the use of their images.
2019-04-03T13:03:24.713Z
2018-06-17T00:00:00.000
{ "year": 2019, "sha1": "15b2dda1919cbbb538a3adbe10c05c8d6f0cd354", "oa_license": "CCBYNC", "oa_url": "http://www.e-aps.org/upload/pdf/aps-2018-00717.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "15b2dda1919cbbb538a3adbe10c05c8d6f0cd354", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10293113
pes2o/s2orc
v3-fos-license
Guiding the Migration of Grafted Cells to Promote Axon Regeneration

A promising therapeutic strategy to promote the regeneration of injured axons in the adult central nervous system (CNS) is the transplantation of cells or tissues that can modify the local host environment and support the growth of regenerating axons. Growth-supportive cells that have been successfully used in experimental transplantation therapy of spinal cord injury (SCI) include Schwann cells, mesenchymal stromal cells, olfactory ensheathing cells, genetically modified fibroblasts, and neural stem/progenitor cells (Huang et al., 2010).
Cells derived from the embryonic spinal cord and peripheral nerve grafts have been shown to promote the regeneration of injured axons, due largely to the presence of growth-supportive cells such as glial progenitors and Schwann cells, respectively (Cote et al., 2011; Haas and Fischer, 2013). These transplants generate a permissive environment for axon growth by secreting growth factors and forming an adhesive extracellular matrix to overcome the inhibitory environment of the injured tissue. However, the value of these transplants in promoting axon regeneration is limited by the fact that most regenerating axons are trapped inside the permissive environment generated by the transplants, failing to grow out of the graft (Figure 1A, B) (Haas and Fischer, 2013). While this strategy can be effective for building functional relays via graft-derived neurons (Haas and Fischer, 2014), this approach cannot be generalized to other cell types. Therefore, a remaining challenge for therapeutic cell transplantation in CNS injury, in the context of long-distance regeneration and connectivity, is to develop strategies to promote axonal growth beyond the graft into putative target areas to form functional synaptic connections. Currently, the nature of the "graft trap" of regenerating axons is not fully understood. One possibility is that the regenerating axons stay inside the graft, which expresses much higher levels of attractive guidance factors, i.e., neurotrophic factors, and much lower levels of inhibitory/repulsive factors, i.e., chondroitin sulfate proteoglycan (CSPG), compared to the adjacent host tissue. Another possibility is that although adult CNS axons maintain their growth potential and can regenerate in an optimized environment, their intrinsic growth capability is much lower than that of the axons of embryonic neurons, and thus not suitable for long-distance regeneration. Targeting these mechanisms, several strategies have recently been applied to overcome the "graft trap" in transplantation-based therapy of SCI. One strategy is to further modify the host spinal tissue, making it less inhibitory and thus allowing some of the regenerating axons inside the graft to exit into the host tissue. As an example, Tom et al. (2009) showed that in an experimental model of grafting a peripheral nerve bridge at the site of the injured spinal cord, application of chondroitinase (Chase) at the distal graft/host interface to reduce CSPG-mediated inhibition promoted a modest improvement in the host-entry of regenerating axons, which would otherwise stop at the distal graft/host junction. Another strategy to promote axonal growth beyond the graft focuses on genetic modification of injured neurons to enhance their intrinsic growth potential using regeneration-associated genes (Ma and Willis, 2015). For example, overexpressing the constitutively active form of the Rheb GTPase (an upstream activator of the mTOR pathway) has been shown to enhance the intrinsic growth potential of adult neurons (Wu et al., 2015). Recently, we sought to explore an alternative strategy for promoting axon regeneration by inducing the directional migration of grafted cells (Yuan et al., 2016). We hypothesized that controlled migration of grafted cells could be beneficial to axon regeneration and functional recovery by expanding the permissive environment and directing axon growth.
However, following transplantation into the injured spinal cord, most grafted cells remain at the injury site, with few grafted cells showing long-distance migration and no rostral or caudal directional selectivity (Lankford et al., 2008; Ekberg et al., 2012; Yuan et al., 2016). An intriguing but as yet untested question is whether we can promote axon regeneration beyond the injury/graft site by guiding the migration of grafted cells toward the putative target region of regenerating axons. Theoretically this is feasible, because if a large cohort of grafted cells can be guided to migrate out of the injury/graft site toward the original target area of the injured axons, the migratory stream of these growth-supportive cells is very likely to form a corridor for the advance of regenerating axons beyond the injury/graft site toward the target area. Moreover, migration of grafted cells may even enhance axon growth by towing growth cones, like the towing of embryonic sensory axons by migrating target cells during embryonic development (Gilmour et al., 2004). To begin testing whether this novel strategy is feasible, we needed to establish a reliable method to induce the directional migration of grafted cells in the adult spinal cord, as highlighted by one of our recent research projects (Yuan et al., 2016). We first used a variety of cell culture-based assays to screen for factors that may be attractive or repulsive to the migration of glial-restricted progenitors (GRPs) derived from embryonic spinal cord, a promising cell type to support axon regeneration in transplantation-based therapy of SCI (Haas et al., 2012; Haas and Fischer, 2013). Next, we used a cervical dorsal column lesion model of SCI in adult rats, a well-characterized in vivo nerve injury model, for transplantation of GRPs and application of lentivirus coding for candidate guidance factors rostral to the injury/graft site, to test the guidance of GRP migration by candidate factors in vivo. Although GRPs for transplantation exhibit active migration in vitro, we observed limited migration of grafted GRPs in the adult spinal cord, with or without injury. This limited migration of grafted GRPs may indicate the presence of endogenous factors that restrict or inhibit the migration of grafted GRPs in the adult spinal cord, and effective guidance of GRP migration may therefore depend on the removal of this restrictive/inhibitory signal. CSPG is a well-characterized axon growth inhibitor in the adult CNS that is present in the gliotic scar following CNS injury. As GRPs express receptor tyrosine phosphatase sigma (PTPRS), one of the major receptors of CSPG, it is likely that these cells can also respond to this inhibitory signal. Indeed, when coated on the culture substrate, CSPG strongly inhibits the adhesion and migration of cultured GRPs. Injection of lentivirus vectors encoding Chase rostral to the injury/graft area induced the preferential migration of grafted GRPs toward the injection site. These in vitro and in vivo findings support the notion that CSPG is a major endogenous factor that restricts the migration of grafted GRPs in the adult CNS. We also observed that basic fibroblast growth factor (bFGF) is an attractive migration factor for GRPs, as lenti-bFGF injection also induced directional migration of a fraction of grafted GRPs toward the injection site in vivo, similar to the effect of lenti-Chase.
These findings suggest that an effective way of guiding the directional migration of grafted cells is the lentivirus-mediated delivery of factors that can either remove the restrictive/inhibitory effect of the host tissue or actively promote cell migration. An interesting future question is whether the simultaneous application of these two types of factors, one relieving the inhibition and the other directly attracting, results in synergistic activity and stimulates the migration of greater numbers of grafted cells toward the putative target. The combination of the in vitro screening system with the in vivo injury model that disrupts sensory axons described in our study (Yuan et al., 2016) can be used to test the effects of additional molecules on the migratory properties of other cells. It is also important to further explore whether directional migration of a large cohort of grafted cells can support axon regeneration beyond the injury/graft site. Moreover, guided migration of grafted cells can be further combined with other therapeutic interventions to improve axon regeneration and, ultimately, recovery of function. In this context, an additional advantage of using lenti-Chase to guide the migration of grafted GRPs is that this treatment also benefits the growth of regenerating axons. Thus, a therapeutic strategy that focuses on the application of a guidance factor that can promote both the extension of regenerating axons and the migration of grafted cells may be the best option for a combined effect. As for the chemotropic factor, it is unclear whether bFGF, which we found to be attractive to GRPs, is also directly attractive to regenerating axons. If a common attractant for both regenerating axons and grafted cells is not available, one potential option is to transplant cells genetically engineered to express the specific receptor for an attractant that can effectively guide the extension of regenerating axons, so that the grafted cells gain sensitivity to the same attractant. It is generally accepted that the glycosaminoglycan chains of CSPG mediate its inhibitory effect on axon growth, and Chase treatment is a widely used method in experimental therapy of SCI to alleviate CSPG-mediated inhibition by digestion of the glycosaminoglycan chains (Bradbury et al., 2002). Consistent with Chase-mediated CSPG digestion, we observed that Chase treatment completely blocked the inhibitory effect of CSPG on the attachment of GRPs to the cell culture substrate. However, in a "stripe assay" designed to evaluate the guidance effect of substrate-bound CSPG on GRP migration, we noticed that Chase treatment mildly mitigated, but did not completely block, the repulsive action of CSPG stripes on GRPs (Yuan et al., 2016). This observation indicates the existence of CSPG inhibition that is independent of the glycosaminoglycan chains, and underscores the importance of developing novel ways to effectively mitigate this Chase-insensitive inhibitory action of CSPG in the scenario of long-distance regeneration of injured axons. Basic research to clarify the structural basis of this Chase-independent inhibitory action of CSPG will be key to this solution in the near future. In summary, we have established a framework for inducing the directional migration of grafted GRPs in an SCI model using lentivirus-mediated expression of two types of guidance factors (Figure 1C).
A similar strategy can be applied when other cell types are used in transplantation-based therapy of SCI, and can be applied in combination with other therapeutic interventions to improve axon regeneration. This work was supported by NIH NS055976 and Craig H. Neilsen Foundation 280850.

Figure 1. Guiding the migration of grafted cells to promote axon regeneration. (A, B) A schematic diagram of the "graft trap" after transplantation of supportive cells at the injury site of the adult spinal cord. Without cell transplantation, the proximal ends of injured axons tend to retract. After transplantation of growth-supportive cells, axons can be stimulated to invade the injury/graft site, but very few can grow out of the graft, no matter what cells have been grafted. (C) Directional migration of grafted cells can be induced by a gradient of a guidance factor, which could either eliminate the repellent in the host tissue or directly attract the grafted cells. A gradient of such guidance factors can be achieved by injection of virus vectors coding for the guidance factor distal to the injury/graft site. The resulting migratory stream of grafted cells along the gradient may form a supportive corridor for the growth of regenerating axons. D: dorsal; V: ventral; A: anterior; P: posterior.
2018-04-03T00:16:56.061Z
2016-08-01T00:00:00.000
{ "year": 2016, "sha1": "a505858c2c18f469e2f6d264e275f50d191c85d4", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/1673-5374.189169", "oa_status": "GOLD", "pdf_src": "Grobid", "pdf_hash": "a505858c2c18f469e2f6d264e275f50d191c85d4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
250195829
pes2o/s2orc
v3-fos-license
New onset of diabetes mellitus and thyroid dysfunction in COVID-19: Study from rural India

ABSTRACT Background: Cytokine and bradykinin storms play an important role in the pathogenesis of COVID-19, resulting in raised inflammatory markers and blood sugar. Patients and Methods: RT-PCR-positive patients with signs and symptoms of COVID-19 were investigated for fasting and postprandial blood sugar, glycated hemoglobin percentage, inflammatory markers, TSH, and COVID antibodies. Results: All 17 cases showed new-onset diabetes with normal HbA1c, and five cases had raised thyroid-stimulating hormone. Levels of inflammatory markers and D-dimer were significantly raised. All cases showed bilateral pneumonia in the lungs. Conclusion: New-onset diabetes mellitus due to COVID-19 should be managed with insulin therapy.

Introduction

In COVID-19, diabetes mellitus (DM) is a two-edged sword: patients with pre-existing DM are more prone to SARS-CoV-2 infection, resulting in severe acute respiratory syndrome, while a non-diabetic person who suffers from severe COVID-19 may manifest hyperglycaemia and its subsequent complications. The SARS-CoV-2 virus has a great affinity for angiotensin-converting enzyme 2 (ACE-2) receptors, which are present in the beta cells of the pancreas and the follicular cells of the thyroid gland. Inhibition and dysfunction of ACE-2 receptors on the insulin-secreting beta cells of the pancreas, and blockade of the follicular cells of the thyroid gland by SARS-CoV-2, result in hyperglycaemia and a rise in thyroid-stimulating hormone (TSH). [1] In the present study, we found that 17 patients who suffered from COVID-19, with raised inflammatory markers, had significant hyperglycaemia with normal HbA1c, confirming new-onset diabetes mellitus. All 17 patients recovered with short-acting insulin, favipiravir, aspirin, doxycycline, low-molecular-weight heparin, metformin, statins, ivermectin, vitamins D and C, zinc, and nasal oxygen. All 17 cases recovered fully from COVID-19, except for the hyperglycaemia, for which they were advised to use oral hypoglycaemic agents. For the last 3.5 months, we have been following these cases in the outpatient department. All of them had raised immunoglobulin against the SARS-CoV-2 virus [Table 1].

Discussion

Diabetes mellitus is a two-edged sword: patients with pre-existing DM are more prone to severe acute respiratory syndrome due to coronavirus (SARS-CoV-2) infection [Figure 1], while new-onset DM with persistent hyperglycaemia occurs due to SARS-CoV-2 infection itself. [2] SARS-CoV-2 attaches to angiotensin-converting enzyme 2 (ACE-2) receptors. High concentrations of ACE-2 receptors are located in the insulin-secreting beta cells of the pancreas, fatty tissue, small intestine, nasal mucosa, stomach, colon, skin, lymph nodes, thymus, bone marrow, spleen, liver, kidney, and brain. [2] Recently, it has been reported that the mRNA encoding the ACE-2 receptor is expressed in thyroid follicular cells. [3] SARS-CoV-2 may be responsible for pleiotropic alterations of carbohydrate metabolism, which underlie the susceptibility to, and severity of, SARS-CoV-2 infection in patients with pre-existing diabetes. In a non-diabetic patient, new-onset hyperglycaemia occurs due to infection with SARS-CoV-2 [Table 1].
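To make the diagnostic reasoning explicit, the following sketch encodes the rule used above: significantly raised fasting and postprandial blood sugar together with a normal HbA1c points to new-onset rather than pre-existing diabetes, since HbA1c reflects glycaemia over roughly the preceding 3 months. The cut-offs are the standard ADA thresholds, assumed here rather than stated in the paper.

# Hedged sketch: classify hyperglycaemia as new-onset diabetes when sugars
# meet diabetic thresholds but HbA1c is still normal. Thresholds are the
# widely used ADA criteria (an assumption, not values from this study).
FBS_DIABETIC = 126.0    # fasting blood sugar, mg/dL
PPBS_DIABETIC = 200.0   # 2-h postprandial blood sugar, mg/dL
HBA1C_DIABETIC = 6.5    # percent; at or above suggests long-standing disease

def classify(fbs_mg_dl: float, ppbs_mg_dl: float, hba1c_pct: float) -> str:
    hyperglycaemic = fbs_mg_dl >= FBS_DIABETIC or ppbs_mg_dl >= PPBS_DIABETIC
    if not hyperglycaemic:
        return "no diabetes"
    if hba1c_pct < HBA1C_DIABETIC:
        # Raised sugars with a normal HbA1c: hyperglycaemia of recent onset.
        return "new-onset diabetes"
    return "pre-existing (long-standing) diabetes"

print(classify(fbs_mg_dl=180.0, ppbs_mg_dl=260.0, hba1c_pct=5.8))
# -> new-onset diabetes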
During infection with SARS-CoV-2, persistent new-onset hyperglycaemia results in severe clinical manifestations with poor outcomes. Patients with hypertension, diabetes, obesity, a sedentary lifestyle, old age, or immunosuppression (such as in HIV or cancer) are more susceptible to SARS-CoV-2 infection, with poor outcomes. [4] In the present study, fasting and postprandial blood sugar were significantly raised with a normal HbA1c value, confirming new-onset diabetes mellitus. This further confirms that SARS-CoV-2 is responsible for the new onset of diabetes, and its persistence together with the presence of SARS-CoV-2 antibodies raises the possibility of immune damage to the beta cells of the pancreas. GAD-65 antibody detection facilities are not available in this part of India. Thyroid follicular cells are rich in ACE-2 receptors, so SARS-CoV-2 could possibly also infect thyroid cells. Virus-like particles have been observed in the follicular epithelium of patients with subacute thyroiditis. [5] Moreover, the thyroid gland is anatomically continuous with the upper respiratory tract, a major entrance of SARS-CoV-2. [6] It is important to note that ACE-2 receptors coexist with the type II transmembrane serine protease (TMPRSS2), and thyroid tissues exhibit high expression of TMPRSS2 mRNA. [7] Autopsies of fatal cases of SARS-CoV-2 infection have confirmed primary injury of thyroid cells with apoptosis of follicular cells. [7] Table 1 includes a 54-year-old female who developed new-onset diabetes soon after the COVID-19 symptoms. Given the short history of human infection with SARS-CoV-2, an understanding of how COVID-related diabetes and hypothyroidism develop, the natural history of these two endocrine diseases, and their appropriate management will be helpful. Thus, the ACE-2 receptor plays an important role in the pathogenesis of these endocrine disorders. [7] ACE-2 receptor agonists may prove a universal antidote for the management of such endocrine disorders. Irrespective of vaccination, these COVID-19 cases need long-term follow-up. [8]
2022-07-02T15:03:45.301Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "d58bc118cd1e958c626d66de9caa8e9298b1933d", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/jfmpc.jfmpc_2232_21", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "65d1bcd4cd064ff56a8c430fb2583a19cb395b54", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236831528
pes2o/s2orc
v3-fos-license
The DEDO Forest Conservation Culture as a Means to Conserve the Ororo (Ekebergia capensis) Tree

Background: Forest peoples around the world, through their indigenous knowledge, contribute to the sustainable management of forests. This article argues that the Sheka people in southwestern Ethiopia, through their ecological knowledge, values, and spiritual practices, are able to manage the Ororo tree (Ekebergia capensis). The Ororo tree is one of the most important endemic tree species in the Sheka zone of southwestern Ethiopia and, at the same time, one of the most endangered species. Methods: Data were collected on the indigenous ecological knowledge of the Sheka people and on how the Ororo tree can be managed and conserved through the DEDO culture; the spiritual connection between Ororo trees and the Sheka people's traditional belief system was documented and assessed.

Introduction

Human interactions with nature have shaped both the attitudes and behaviors of people towards nature (Cristancho and Vining, 2004). Thus, every culture has a system of beliefs that guides its interactions with nature. The growing interest in the fundamental and consumptive values humans attach to forest resources (Kruger, 2003) has led to the growing field of traditional forest-related knowledge (Parrotta and Agnoletti; Parrotta et al., 2008). The role of indigenous knowledge in forest conservation has historically been connected with sustaining biodiversity and ecosystem services as well as increasing forest productivity (Becker & Ghimire, 2003). Whole forests, forest parts, and forest-based products have been used across the world for various purposes (Ali and Rahut, 2018; Shackleton, 2019). Besides their use by humans, forests and forest parts are important for the spiritual rituals of many societies (Amots, 2007). These associated values have resulted in the reverence of forests across cultures. Among the Sheka people of southwestern Ethiopia, the relationship with nature is cultivated in the society's traditional values and beliefs, with an emphasis on safeguarding the environment. For instance, the Sheka people, through their ecological knowledge, values, and spiritual practices, have been able to manage the Ororo tree (Ekebergia capensis), one of the most important endemic tree species in the Sheka zone and at the same time one of the most endangered. These cultural practices have resulted in a wealth of biodiversity in most areas inhabited by the Sheka people (Getaneh, 2019; Melca Mahiber and the African Biodiversity Network, 2007). The traditional forest conservation strategies of the Sheka people are less in conflict with forest resources than, for example, agricultural forms of land use. As a result, of the approximately 2.7 million hectares of forest, which is 24 percent of the country's territory, a huge segment is found in the South Western Highlands of Ethiopia (WBISPP, 1990). Ororo trees (Ekebergia capensis) in these areas increasingly face threats such as local and extra-local economic activities and socio-cultural change (including changes in indigenous knowledge and its connection to resource management). While forest resource conservation yields tangible monetary benefits, socio-cultural values may be equally relevant to Ororo tree (Ekebergia capensis) conservation efforts (Melca Mahiber, 2007). The Sheka zone landscape has undergone socioeconomic change with potentially negative consequences for forest conservation in Ethiopia (Woldemariam and Fetene, 2007; Hundera, 2013).
Recent literature suggests that in Ethiopia, forest-based research has yielded valuable insights into the linkages between rural employment, financial change, and ecological administration, with critical ramifications for development strategy (Yeraswork, 2000; Desalegn, 2000; Woldeamlak, 2003; Dereje, 2007; Gessesse, 2007). The Sheka forest has undergone socio-economic changes with potentially negative consequences for forest conservation in Ethiopia (Woldemariam and Fetene, 2007; Hundera, 2013). Tree cutting and habitat loss due to the increasing demand for agricultural land continue to be major challenges for Ororo tree (Ekebergia capensis) conservation. Considering that no current conservation approach fully addresses these emerging challenges to Ororo tree (Ekebergia capensis) conservation, there is a need to consider the potential role of values and perceptions in conserving forest resources. Understanding the dimensions of these values and perceptions can present new socioeconomic perspectives on Ororo tree (Ekebergia capensis) conservation. In this paper, we assess the values and perceptions of the Ororo tree (Ekebergia capensis) among the Sheka people in southwestern Ethiopia and discuss how this traditional forest conservation strategy can be integrated into forest conservation.

Study area

The Sheka Zone is located about 670 km from Addis Ababa, in the South Nations Nationalities and Peoples Regional State. The Sheka zone shares boundaries with the Oromia Regional State in the north, Bench Maji Zone in the south, Gambella Regional State in the west, and Kefa Zone in the east. The total area of Sheka is 2,175,327 ha. Geographically, the Sheka Zone lies between 7°24'-7°52' N latitude and 35°31'-35°35' E longitude. The Zone has three woredas, namely Masha, Andracha, and Yeki. Across the three woredas there are 56 rural and seven urban peasant associations (PAs).

Data Collection and Analysis

The study used both qualitative and quantitative data. Quantitative data were collected through the administration of questionnaires to household heads and interviews with key informants within the selected area. Qualitative data were collected through key informant interviews (KIIs) and focus group discussions (FGDs). Data for this study were obtained through interviews conducted over 4 weeks in July and August 2019. The first aim of the survey was to explain how, through the indigenous ecological knowledge of the Sheka people, the Ororo tree could be managed and conserved through the DEDO culture; the second aim was to explore the spiritual connection between Ororo trees and the Sheka people's traditional belief system.

Data Analysis

The data obtained were analyzed using descriptive statistics and presented as tables, means, percentages, and frequencies, based on the information provided by the respondents. The statistical package for the social sciences (SPSS), version 21.0, was employed in analyzing the data.

Results

Today, deforestation is one of the major environmental challenges affecting the world; however, the Sheka people, through their indigenous forest conservation strategies, can sustainably manage the Sheka forest. The Sheka people have long been sustainably managing and conserving the Sheka forest by utilizing different procedures. Unfortunately, these indigenous methods of natural resource management and local adaptation are ordinarily absent from scientific forest management and are not documented.
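As a small illustration of the descriptive analysis described in the Methods (the study itself used SPSS 21.0), the sketch below computes frequencies and percentages for two survey items. The column names and codings are hypothetical, and the counts are back-filled from the percentages reported in the Results, with n = 100 assumed.

import pandas as pd

# Hypothetical survey table reconstructed from the reported percentages:
# 97% of respondents were aware of the Ororo tree, and 85% agreed that it
# is a cultural symbol with spiritual significance.
df = pd.DataFrame({
    "aware_of_ororo": ["yes"] * 97 + ["no"] * 3,
    "agree_spiritual_symbol": ["agree"] * 85 + ["disagree"] * 15,
})

for col in df.columns:
    counts = df[col].value_counts()
    percent = (100 * counts / len(df)).round(1)
    print(pd.DataFrame({"frequency": counts, "percent": percent}), "\n")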
The DEDO culture demonstrates how the Sheka people, through their indigenous culture, can provide valuable, appropriate, and effective forest conservation strategies. The results of the analysis of the DEDO culture are explained in detail below under the following key points.

Traditional sacred tree, people's beliefs, and conservation mechanisms in the study area

Six belief systems were identified, and eight conservation mechanisms were observed to be in practice in the area, all relevant to the sustainable conservation and management of the Ororo tree. The sacred tree (Ekebergia capensis) and the DEDO culture are the most common cultural institutions in all villages, and they have a direct bearing on the lives and behaviors of the people.

People's knowledge of the sacred Ororo tree (Ekebergia capensis)

The vast majority of respondents (97%) were aware of the presence of the Ororo tree (Ekebergia capensis) and the DEDO culture in and around their village. The tree is found near the villages of the community. Older people (>55 years of age) could describe the Ororo tree and the DEDO culture more accurately than younger people, but this difference was not significant. All research participants from Masha and Anderacha woredas knew where these Ororo trees (Ekebergia capensis) stand, and all of them had worshiped at least once at an Ororo tree (Ekebergia capensis).

The spiritual connection between the DEDO forest conservation culture and the Ororo tree (Ekebergia capensis)

Forest peoples around the world, through their indigenous knowledge, contribute to the sustainable management of forests. The Sheka people in southwestern Ethiopia, through their ecological knowledge, values, and spiritual practices, are able to manage the Ororo tree (Ekebergia capensis), one of the most important endemic tree species in the Sheka zone and, at the same time, one of the most endangered species. Eighty-five percent (85%) of respondents confirmed that the sacred Ororo tree (Ekebergia capensis) is a cultural symbol related to indigenous beliefs and signifies spiritual connections to the forestland and the Sheka people. The Sheka people of southwestern Ethiopia have a well-defined social structure that is closely associated with forest management. Through their traditional forest-related knowledge, the Sheka people conserve and manage a single large tree called Ororo. The Ororo tree is a special type of tree with cultural and spiritual attachments that are presently non-existent elsewhere. This unique forest conservation practice is referred to as the DEDO culture. The culture of DEDO involves worshiping around the Ororo tree. Participants of FGDs in both Masha and Anderacha woredas explained the historical connection of the Ororo tree with the DEDO culture.
As many Sheka people do believe that the angel of GOD rest under the Ororo tree because of this if people pray or worship under the Ororo tree, the angel of God will take their prayer to God. Because of the spiritual connection to this particular tree of Ororo, the Ororo tree will not be used for any other economic activities like the production of honey and other domestic uses. Therefore, the conservation of the Ororo tree has a direct spiritual connection, and has contributed to the conservation and protection of the Ororo tree. The Ororo tree and DEDO cultural ceremonies Overall, 86% of respondents "agreed" with the Statement that the ororo sacred tree and the DEDO culture are used by sheka people as cultural symbols related to indigenous beliefs and signify spiritual connections to the forests" (Table 1). The belief that tender and lightening will damage the villager if the sacred Ororo tree (Ekebergia capensis) are felled in the village was very popular-86% "agree" response ( Table 1). The DEDO culture was celebrated once in a year in the months of December around Christmas as Thanksgiving Day. Offerings were made each year at this time. One of the key informants in Gecha Town explains how the DEDO culture was celebrated each year. During the months of December, when yields were harvested, people in the village were gathered together to celebrate Thanksgiving Day around the DEDO sacred tree (Ororo) under the advice of clan leaders (Gebi tato). The DEDO sacred tree (Ororo) culture was celebrated near to village according to their clan and the clan leaders (Gebi tato) as "traditional forest-related knowledge experts," i.e., persons recognized by the Sheka community were responsible for making and enforcing rules related to the DEDO cultural ceremony. The purpose of the offering was to giving thanks to GOD (Shemayo tato) for the harvest season. After giving thanks to GOD (Shemayo tato) for the good harvest of the season, the Sheka people pray to GOD (Shemayo tato) the next season to be a season of health, fortunes and good harvest. Therefore, the DEDO sacred tree (Ororo) was believed to bring health, fortune, and good harvest. The other key informant in his description the way the DEDO culture was celebrated he cogently explained that the DEDO culture was celebrated each year seven days before Christ-mas. local cereals (Teff) were harvested around Christ-mas time and for the DEDO celebration foods and alcohol, drinks were mostly prepared from local cereals called Teff. Wednesdays is a day used for the celebration of DEDO culture. The aim of the DEDO culture was praying to GOD (Shemayo tato) for the next good harvest and for the health of the people. Another relationship between the DEDO culture and the Sheka people is that long years before the Sheka people did not have health facilities access because of this many young and adult parts of the population died at an early age. In fear of this killing disease, all the village members gathered around the Ororo tree and celebrated the culture of DEDO and pray to GOD (Shemayo tato) about their health. Therefore, the Ororo tree is believed to bring health to the Sheka people. According to the Sheka belief, the DEDO tree is untouchable. No one was allowed to cut the Ororo tree. It is conserved and protected well for centuries for spiritual purposes. The interview and FGD results provide useful examples of the DEDO sacred tree conservation culture and traditional forest-related knowledge possessed by the Sheka people. 
As an informant recalled: in the past, the Sheka people held ceremonies to pray for a successful harvest season and to express their thanks to God (Shemayo tato). The Sheka people participate in rituals for God (Shemayo tato) in the month of December, according to the Ethiopian calendar, each year. They collectively participate in traditional rituals of food and beverage preparation (made from teff) before they put the harvest into the granary. These rituals play an important role in fostering relationships between members of the community.

Cutting of the Ororo tree (Ekebergia capensis)

The majority of respondents (80%) said that the DEDO tree conservation culture could manage and conserve the Sheka forest. Spiritual connections and beliefs were the main reasons why people worship around the sacred Ororo tree (Ekebergia capensis). The protection of the sacred Ororo tree (Ekebergia capensis) enables the conservation of natural forests from earlier anthropogenic disturbances, allowing trees and other plant species to reproduce. The entire sacred Ororo tree (Ekebergia capensis) population was placed under the imposition of local cultural beliefs. The Sheka people consider the Ororo tree to be sacred and believe it protects the village from natural calamities, famine, and disease. The culture of the DEDO sacred tree (Ororo) therefore contributes positively to the conservation of the Ororo tree. Access to the DEDO sacred tree (Ororo) is forbidden by Sheka culture; the tree is untouchable, and no person is allowed to cut it or make use of it for another purpose. The DEDO sacred tree (Ororo) is therefore considered the king of the trees in the village. A DEDO sacred tree (Ororo) once existed in every village as a spiritual or sacred site. These trees are usually very tall. The Sheka believe that these trees can provide safety, fortune, and good harvests for their villages. According to one of the key informants in Masha woreda, Yepo Kebele, the clan leader (Gebi tato) said: "No one is allowed to cut down these trees, and any person who cuts these trees will be punished because of the curse that is associated with the indigenous belief." According to the research participants, there is a true story about a person who violated the culture of DEDO: in Masha woreda, a person who cut down a DEDO sacred tree (Ororo) was immediately struck dead by lightning. This story illustrates that, for the Sheka people, the DEDO sacred tree (Ororo) has a direct connection to their God (Shemayo tato). According to Sheka traditional belief, if any person cuts down the DEDO sacred tree (Ororo), the rains will become abnormal, usually resulting in floods. There is a similar story in Anderacha woreda about a young man who died after he cut down a DEDO sacred tree (Ororo). Even though the younger generation has limited knowledge of the DEDO sacred tree (Ororo), all clan leaders (Gebi tato) and older men who participated in this research agreed that they firmly believe in the supernatural meanings attached to the DEDO sacred tree (Ororo). All twenty research participants from the study area knew how and where the DEDO sacred tree (Ororo) conservation culture was practiced, and all of them had worshiped at the DEDO sacred tree (Ororo) for many years of their lives. According to elderly research participants from the community, no one dares to touch the DEDO sacred Ororo tree.
According to one research participant (KI-9, 28 Jan 2016, Masha Town), those who touched the DEDO sacred tree (Ororo) would be cursed and die. During the interviews, both clan leaders (Gebi tato) and older men told us that, until about 30 years ago, the DEDO culture was a very common traditional belief in almost every village in the Sheka zone. However, this tree conservation culture has gradually disappeared, particularly in recent decades.

Discussion

Over the years, there have been increasing concerns about the decline of traditional forest-related knowledge, leading to calls for effective responses to ensure forest sustainability (Parrotta and Agnoletti, 2006). This concern has been increasingly recognized, documented, and acted upon in both developing and developed countries (Berkes et al., 2000; Bürgi et al., 2013; Ramakrishna, 2007).

In Ethiopia, forest conservation and management range from state-owned forests to privately owned forests, leaving no room for traditional forest conservation and management approaches. The first approach is state-owned forest management (Dessalegn Rahmato, 2001; FDRE, 2007); the latter advocates privately owned forests. However, it has been argued that either approach alone often fails to conserve biodiversity unless it is supported by traditional forest conservation and management approaches.

The role of indigenous knowledge in forest conservation in Africa has also been recognized in recent years, yet its potential contribution to Africa's ecology has not been well studied. Recently, very few studies have been conducted to depict the contribution of traditional knowledge to biodiversity, climate change, and combating desertification. Traditional forest-related knowledge has sustained the livelihoods, cultures, identities, and the forest and farming resources of local and indigenous communities throughout the world (Parrotta and Trosper, 2012). Traditional forest-related knowledge (TFRK) is of specific significance to indigenous communities, peoples, and nations. Numerous researchers have emphasized integrating traditional forest-related knowledge with scientific knowledge for the protection of natural forests (Menzies and Butler, 2006).

A negative attitude towards traditional forest conservation culture can undermine local, national, regional, and international conservation initiatives. Gadgil et al. (1993), Gadgil and Berkes (1991), and Gadgil (1985) argue that traditional forest conservation culture plays a pivotal role in forest conservation and management. Therefore, it is crucial to recognize and incorporate such conservation culture into forest resource management plans. The recognition of traditional forest conservation culture in forest management will not only affect population viability but may also have broader environmental impacts. Such recognition is also necessary for ensuring that forest management policies are both effective and sensitive to local realities (Gupta, 2005; Gupta, 2006). In this regard, it is important to continuously conduct studies on forest management to inform area-specific policies, as conservation cultures toward forests often differ from one setting to another. Few studies aiming to understand traditional forest conservation culture have been situated in Ethiopia (Desalegn Fufa, 2013).
This is despite the fact that Ethiopia is rich in flora: it is estimated to harbor more than 6,000 species of higher plants, of which around 125 are endemic (Ib Frus, 1982), requiring their protection and conservation. A common thread in developing and applying conservation policies is gaining the support of traditional forest conservation cultures and engaging these cultures in collaborative conservation efforts (Gadgil et al., 1993; Gadgil and Berkes, 1991; Gadgil, 1985; Gupta, 2005; Gupta, 2006). Therefore, studies of traditional forest conservation culture contribute to the development of effective forest conservation and management policies that are sensitive and relevant to local conditions and to the degree to which local communities are willing to coexist with forest resources (Gadgil et al., 1993; Gadgil and Berkes, 1991; Gadgil, 1985).

Traditional forest conservation culture in sub-Saharan Africa embodies vast indigenous knowledge that has kept forest ecosystems pristine and protected for decades (Mumma, 1999; Tengeza, 2000). Beyond the spiritual attachment to their environment, rural communities were historically dependent on forest resources for their livelihoods (FAO, 2014). However, state-sponsored deforestation and markets influencing agricultural expansion on the African continent resulted in centralized state control over natural resources, which took decision-making concerning forest resources away from rural communities (GRAIN, 2008; Cotula et al., 2009; Deininger and Byerlee, 2011). Consequently, rural communities became passive observers of the forest resources around them.

The state forest law of the Southern Nations, Nationalities, and Peoples' Region (SNNPR) put community forest under state forest. The government of Ethiopia adopted state forest laws that placed community forest under state forest; these laws limit the local population's utilization of forest resources and at one point introduced a total ban on forest use (Dessalegn, 2001). This state of forest conservation practice is to the detriment of local communities. As a result, there is an ongoing conflict between the state and the local people in southwestern Ethiopia, attributable to the hostile relationship between conservation and the livelihoods of communities living adjacent to and within the Sheka forest.

To our knowledge, this is the first study to analyze traditional forest conservation culture, using the DEDO tree conservation culture as a proxy, and to show how the Sheka people, through their indigenous culture, can provide valuable, appropriate, and effective forest conservation strategies. The analysis is important in providing insights into how tree conservation culture and current practices may influence forest sustainability and its supporting institutions. This is crucial for rethinking the design of conservation policies that allow for effective management and planning, sensitive to local realities. Specifically, this study analyzes how the spiritual connection between Ororo trees and the Sheka people's traditional belief system enables the forest to be managed and conserved through the DEDO culture.

Conclusion

The case study presented on the DEDO Ororo tree (Ekebergia capensis) conservation culture demonstrates that the Sheka people have their own indigenous knowledge, beliefs, and management practices related to the forest. This cultural and belief system has been inherited from their ancestors since time immemorial and has evolved over generations.
The DEDO culture described in this article still operates in every element of the local protection and management of the Ororo tree (Ekebergia capensis). As the study clearly shows, the DEDO culture is a productive and efficient instrument for forest management, and this useful culture has demonstrated its significance in the protection of various forest types and tree species, contributing to the conservation of biodiversity. Thus, the DEDO Ororo tree (Ekebergia capensis) conservation culture, as illustrated in the Sheka people's forest utilization, protection, and management, provides important insights into forest conservation.
Warehouse Layout Designing of Slab Using Dedicated Storage and Particle Swarm Optimization

A warehouse is a supporting facility and has an important role in the production system. Arranging goods in a warehouse is an activity that should follow a rational policy. There are several storage policies: random storage, fixed or dedicated storage, and class-based storage. We conducted this study at XYZ Company, whose main business is producing steel. In this research, dedicated storage and particle swarm optimization are used in the layout design of the slab raw material warehouse to obtain the best layout and minimize material handling cost. Based on the results of the research, the particle swarm optimization method gives the best layout, with the least material handling cost compared to the existing layout and the layout obtained using the dedicated storage method.

Introduction

A warehouse is a supporting facility and an important part of a production system. Good conditions and arrangements in the warehouse are expected to prevent corporate losses, minimize the costs incurred, and speed up operations and services at the warehouse. The storage process can be performed under different storage policies. The most used and preferred policies are the randomized storage policy, the dedicated storage policy, and the class-based storage policy. The randomized storage policy allocates the storage location based on the space available at the time of the storage job; in other words, the storage decision is left to the operator. A dedicated storage policy assigns a particular predetermined location to each product to be stored. A class-based storage policy is a policy shared between the randomized and dedicated policies. It divides goods into classes based on some criteria, and each class is assigned a block of storage locations. This policy can be called ABC zoning [1].

The Hot Strip Mill (HSM) is one of XYZ's plants, producing hot rolled coil from slab steel as raw material. In the period 2016-2017, XYZ had to order as much as 2,777,089 tons of slab steel raw material. In 2018, PT. XYZ plans to increase its production capacity, and to support this it needs a new warehouse to store slab steel raw material. PT. XYZ has been dredging soil for outer warehouse 04, which will store 20 types of grades and length groups of slab raw materials over an area of 83,070.12 m2. Currently, outer warehouse 04 has no specific rules for arranging and placing raw steel slabs. As there are no rules, the raw material storage is arbitrary. In order to obtain an efficient warehouse layout that saves storage space, shortens material handling distance, and lowers material handling costs, the dedicated storage and particle swarm optimization (PSO) methods are applied to find the best layout. The dedicated storage method arranges products based on the ratio of each product's activity to the space the product requires, ordering products from the largest ratio to the smallest. Meanwhile, a particle swarm optimization (PSO) metaheuristic algorithm was developed for determining the optimal layout.

Warehouse

The function of a production warehouse is to store raw materials, work-in-process, and finished products associated with a manufacturing and/or assembly process. Raw materials and finished products may be stored for long periods.
This occurs, for example, when the procurement batch of incoming parts is much larger than the production batch, or when the production batch exceeds the customer order quantity of finished products [2]. Warehouse design is a complex task because of the interactions and relationships between the activities in warehouses [3]. The warehouse plays a role in supporting a company's supply chain success. The mission of a warehouse is to effectively ship product in any configuration to the next step in the supply chain without damaging or altering the product's basic form. Numerous steps must be accomplished, and hence there are key warehousing opportunities to address; doing so will optimize the methods used to achieve the mission. If the warehouse cannot process orders quickly, effectively, and accurately, then a company's supply chain optimization efforts will suffer. Warehousing opportunities include improving order picking operations, utilizing cross-docking, increasing productivity, utilizing space, and increasing value-added services [4].

Storage Policy

The storage process can be performed under different storage policies. The most used and preferred policies are the randomized storage policy, the dedicated storage policy, and the class-based storage policy.

Randomized Storage Policy: The randomized storage policy allocates the storage location based on the space available at the time of the storage job; the storage decision is left to the operator. With a pure randomized storage system, each unit of a particular product is equally likely to be retrieved when a retrieval operation is performed; likewise, each empty storage slot is equally likely to be selected when a storage operation is performed.

Dedicated Storage Policy: With a dedicated storage policy, a particular set of storage slots or locations is assigned to a specific product; hence, a number of slots equal to the maximum inventory level for the product must be provided. The warehouse layout problem considered here involves the assignment of products to storage locations in the warehouse. One of the advantages of dedicated storage is data-handling efficiency due to the fixed addressing of storage items [5]. In order to minimize the total expected distance travelled, the following approach is taken [4]:

f = sum(j = 1..n) sum(k = 1..q) (Tj / Sj) * xjk * dk   (1)

The following notation is used:
q = number of storage locations
n = number of products
Sj = number of storage locations required for product j
Tj = number of trips in/out of storage for product j, that is, the throughput of product j
xjk = 1 if product j is assigned to storage location k, and 0 otherwise
dk = distance of storage location k from the input/output point

An approach is presented for determining the optimum dedicated storage; rectilinear travel is assumed. The rectilinear travel distance can be formulated as follows:

dij = |xi - xj| + |yi - yj|   (2)

The following notation is used:
dij = rectilinear distance between input/output point i and storage location j
(xi, yi), (xj, yj) = coordinates of point i and location j
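To make equations (1) and (2) concrete, the following Python sketch ranks products by their throughput-to-space ratio Tj/Sj and fills the storage locations nearest the input/output point first, then scores each product's expected-distance contribution. All data in the sketch (product names, trip counts, slot coordinates, and the I/O point) are invented for illustration and do not reproduce the paper's slab data.

```python
# Dedicated storage assignment sketch: rank products by throughput-to-space
# ratio T_j / S_j and fill the closest slots first (rectilinear distance).
# All data below (products, slots, I/O point) are invented for illustration.

io_point = (0.0, 0.0)  # warehouse input/output door

# product -> (T_j trips per period, S_j slots required)
products = {"A.1": (120, 3), "C.3": (90, 2), "K.1": (200, 4), "G.3": (40, 2)}

# candidate storage slot coordinates (x, y), one unit load each
slots = [(x, y) for x in range(5) for y in range(4)]

def rectilinear(p, q):
    """Equation (2): d = |x1 - x2| + |y1 - y2|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# sort slots from nearest to farthest from the I/O point
slots.sort(key=lambda s: rectilinear(s, io_point))

# sort products from largest to smallest T_j / S_j ratio
ranked = sorted(products.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)

assignment, cursor = {}, 0
for name, (t, s) in ranked:
    assignment[name] = slots[cursor:cursor + s]
    cursor += s

# expected-distance contribution of each product, as in equation (1):
# (T_j / S_j) * sum of distances of its slots to the I/O point
for name, (t, s) in ranked:
    cost = (t / s) * sum(rectilinear(sl, io_point) for sl in assignment[name])
    print(name, assignment[name], round(cost, 1))
```

This greedy assignment is the essence of the dedicated storage procedure used for proposal 1: the higher a product's Tj/Sj ratio, the closer to the door it is stored.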
Class-Based Storage Policy: A class-based storage policy is a policy shared between the randomized and dedicated policies. It divides goods into classes based on some criteria, and each class is assigned a block of storage locations. This policy can be called ABC zoning [1]. Each class is then assigned to a dedicated area of the warehouse, and storage within an area is random. The advantage of this policy is that fast-moving products can be stored close to the depot while the flexibility and high storage space utilization of random storage still apply [6].

Particle Swarm Optimization

Particle Swarm Optimization (PSO) is a stochastic optimization technique and a population-based search algorithm first proposed by Eberhart and Kennedy (1995). It is inspired by the social behavior of organisms such as bird flocking or fish schooling. A brief and complete survey of PSO mechanisms, techniques, and applications is provided by Kennedy and Eberhart (2001) [7]. Each particle has a velocity, a memory, and informants. The positions and velocities of the first population of particles are generated randomly. To decide how to move, a particle needs four pieces of information: its current position, its velocity, its own best performance, and the best performance of its neighbors. The method is based on three parameters, w, c1, and c2, which allow a particle respectively to follow its own path, retrace its steps, or follow the best neighbor [8].

In PSO, each particle in the social structure keeps in mind its best position and uses this as a factor affecting its velocity. A particle gains speed toward its individual best position depending on how far it is from that point; it shows the same behavior with respect to the global best position. In other words, while scanning the search surface, a particle is affected by the global best position and adjusts its own velocity: when it is far from the global best position, there is a larger change in its speed and direction [1]. Each particle's velocity changes according to the formula:

vtij = wt-1 * vt-1ij + c1 * r1 * (pt-1ij - xt-1ij) + c2 * r2 * (gt-1j - xt-1ij)   (3)

The following notation is used:
vtij : velocity of particle i, area j at the tth iteration
wt-1 : inertia weight at iteration t-1
xtij : position value of particle i, area j at the tth iteration
xt-1ij : position value of particle i, area j at the (t-1)th iteration
pt-1ij : personal best of particle i, area j at the (t-1)th iteration
gt-1j : global best, area j at the (t-1)th iteration
c1, c2 : cognitive and social parameters
r1, r2 : random uniform [0,1]

The inertia value in the equation changes on each iteration, decreasing from a chosen initial value to a minimum value according to the inertia function. The objective is to make the generated velocities converge by diminishing them in later iterations, so that more consistent results are obtained. The inertia function is:

wt = α * wt-1   (4)

The following notation is used:
wt : inertia weight at the tth iteration
wt-1 : inertia weight at the (t-1)th iteration
α : decrement factor

The positions of the particles then change by their velocities, as shown in Eq. (5):

xtij = xt-1ij + vtij   (5)
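Equations (3)-(5) translate directly into code. The sketch below is a minimal Python version of the PSO loop using the parameters stated later in the paper (swarm size 50, maximum generation 1000, c1 = c2 = 2, initial inertia weight 0.9); the toy fitness function, the decrement factor value, and the search dimensionality are placeholders, since the paper does not specify them.

```python
import random

# Minimal PSO loop implementing equations (3)-(5).
# The fitness function here is a placeholder; in the paper it is the total
# material handling distance of a candidate slab layout.
def fitness(x):
    return sum(v * v for v in x)  # toy objective: minimise sum of squares

n_particles, n_dims = 50, 20           # paper: swarm size 50; 20 slab types
c1 = c2 = 2.0                          # cognitive and social parameters
w, alpha = 0.9, 0.99                   # initial inertia and assumed decrement

x = [[random.uniform(-1, 1) for _ in range(n_dims)] for _ in range(n_particles)]
v = [[0.0] * n_dims for _ in range(n_particles)]
pbest = [row[:] for row in x]
gbest = min(x, key=fitness)[:]

for t in range(1000):                  # paper: maximum generation 1000
    for i in range(n_particles):
        for j in range(n_dims):
            r1, r2 = random.random(), random.random()
            # Eq. (3): velocity update from inertia, personal and global bests
            v[i][j] = (w * v[i][j]
                       + c1 * r1 * (pbest[i][j] - x[i][j])
                       + c2 * r2 * (gbest[j] - x[i][j]))
            # Eq. (5): position update
            x[i][j] += v[i][j]
        if fitness(x[i]) < fitness(pbest[i]):
            pbest[i] = x[i][:]
            if fitness(x[i]) < fitness(gbest):
                gbest = x[i][:]
    w *= alpha                         # Eq. (4): decreasing inertia weight

print(fitness(gbest))
```

For the layout problem itself, each particle's continuous position vector would be decoded into an ordering of the 20 slab types (for example, by ranking its components), with fitness equal to the total material handling distance of the resulting layout; the paper does not state its exact decoding scheme.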
Research method

In this section, the stages of problem solving are described in a flowchart. The proposal 2 layout of outer warehouse 04 uses the particle swarm optimization method; the problem-solving path for the particle swarm optimization method is shown in Figure 2. The PSO method for designing the outer warehouse 04 layout is implemented in MATLAB. The parameters used are a population (swarm size) of 50 particles, a maximum generation of 1000, cognitive (c1) and social (c2) parameters equal to 2, and an inertia weight (wt) of 0.9.

Results

The following are the results obtained for proposals 1 and 2 of the outer warehouse 04 layout using the dedicated storage and particle swarm optimization methods.

Dedicated Storage

Layout placement using the dedicated storage method is based on the comparison between throughput and space requirement for every type of slab. The outer warehouse 04 layout is designed by ranking the ratios of throughput to space requirement from the largest to the smallest and placing the largest closest to the door of outer warehouse 04. This yields the layout of proposal 1 for outer warehouse 04 using the dedicated storage method.

Particle Swarm Optimization

The design of the second proposed warehouse layout uses the particle swarm optimization method, which requires several steps, such as preparing the total distance input matrix for each slab and determining the parameters of the particle swarm optimization algorithm. The results obtained from MATLAB are shown in Figure 3: the best sequence is 2-19-11-12-4-14-1-15-7-10-20-6-3-16-13-17-9-8-18, with the types K.3-F1.1-F1.3-K.1-H.3-J.3-C.1-D1.1-M1.3-G.1-D1.3-J.1-A.1-H2.3-C.3-G1.3-G.3-M1.1-A.3-D.3. The objective fitness value for each generation is shown in Figure 4.

Total Distance Travelled

The distance travelled is calculated with the rectilinear method: distance is measured along paths of mutually perpendicular (orthogonal) lines. The distance of each area trip to the input point was calculated for the existing layout.

Material Handling Cost

The total cost of material handling was calculated for the existing condition.

Space Efficiency

Based on the storage space efficiency calculation, the percentage of storage space efficiency of the existing outer warehouse 04 layout is 46.704%. The area of each slab area was also calculated for the proposed layout 1 of outer warehouse 04.

Analysis and discussion

The total distance obtained in layout proposal 1, using the dedicated storage method, is 18,992.036 m, with a material handling cost of IDR 340,062,757; the percentage of storage space efficiency obtained with the dedicated storage method is 46.685%. Placement of all slab types using the particle swarm optimization method is based on searching for the best solution in terms of material handling distance with the parameters determined above. The sequence generated in the best solution (global best) serves as the order of placement of the slabs in the layout. The first slab in the sequence is placed at the location with the smallest distance, closest to the door of outer warehouse 04. The total distance obtained in the layout using the particle swarm optimization method is 18,955.780 m, with a material handling cost of IDR 334,217,964 and a storage space efficiency of 48.189%. Both proposals improve on the existing layout. Thus, the best layout in this research is the proposed layout 2, using the particle swarm optimization method, because it produces the lowest material handling cost.

Conclusion

The best layout in this research is the layout of proposal 2, using the particle swarm optimization method, because it produces the lowest material handling cost, IDR 334,217,964.
Working to improve the management of sarcoma patients across Europe: a policy checklist

The Sarcoma Policy Checklist was created by a multidisciplinary expert group to provide policymakers with priority areas to improve care for sarcoma patients. This paper draws on this research by looking more closely at how France, Germany, Italy, Spain, Sweden and the United Kingdom are addressing each of these priority areas. It aims to highlight key gaps in research, policy and practice, as well as ongoing initiatives that may impact the future care of sarcoma patients in different European countries. A pragmatic review of the published and web-based literature was undertaken. Telephone interviews were conducted in each country with clinical and patient experts to substantiate findings. Research findings were discussed within the expert group and developed into five core policy recommendations. The five identified priority areas were: the development of designated and accredited centres of reference; more professional training; multidisciplinary care; greater incentives for research and innovation; and more rapid access to effective treatments. Most of the countries studied have ongoing initiatives addressing many of these priorities; however, many are in early stages of development, or require additional funding and resources. Gaps in access to quality care are particularly concerning in many of Europe's lower-resourced countries. Equitable access to information, clinical trials, innovative treatments and quality specialist care should be available to all sarcoma patients. Achieving this across Europe will require close collaboration between all stakeholders at both the national and European level.

Background

What are sarcomas?

Sarcomas are a family of rare cancers with a combined incidence of six people per 100,000 population and 28,000 new cases each year in Europe [1]. They develop in the connective tissues and bones, and can occur anywhere in the body, at any age [1]. There are more than 50 subtypes of soft tissue sarcoma (STS) alone, each with unique clinical, prognostic and treatment characteristics [1,2]. Because of this heterogeneity, sarcoma epitomises many of the challenges encountered with rare cancers. Patients often lack access to appropriate information about their condition, specialised care centres, appropriate courses of care, and ongoing clinical trials [3]. The heterogeneity of sarcoma also complicates research efforts: recruiting sufficient numbers of patients to large clinical trials is often impossible, epidemiological data are scarce and the evidence base to guide clinical practice is inadequate [1,4,5].

Sarcoma patients report some of the poorest experiences of any cancer type [3]. Lack of professional experience in diagnosing or treating sarcoma is a critical issue, and may lead to delays and errors in diagnosis and treatment. In one study, 27% of all patients with sarcoma had been told by their general practitioner that their symptoms were not serious, or had been initially treated for something other than sarcoma [3]. Referral pathways are often not clearly defined. Access to appropriate care also varies considerably both between and within countries. Some treatments are not reimbursed, leaving patients to pay for treatments out-of-pocket. Many patients may have to travel great distances to receive appropriate care [4].
The policy landscape

In the past few years, many stakeholders have raised awareness of the need to improve the patient experience for patients with rare cancers [4,5]. Sarcoma has been a central focus of these efforts, with publications by the European CanCer Organisation (ECCO) outlining a list of essential requirements for quality care in STS in adults and bone sarcoma [6]; a paper on patient pathways by Sarcoma PAtients EuroNet (SPAEN) defining patient-driven recommendations for sarcoma care [7]; and the creation of a European Reference Network (ERN) on rare adult cancers (EURACAN) including a domain for adult sarcomas [8].

The sarcoma policy checklist

Building on these initiatives, the Sarcoma Policy Checklist [9] was created and launched at the European Parliament by a multi-stakeholder group in February 2017. The Checklist aims to provide policymakers with priority areas where the greatest needs exist to improve care for sarcoma patients. This paper draws on research done to build the Sarcoma Policy Checklist, and looks more closely at how six Western European countries (France, Germany, Italy, Spain, Sweden and the United Kingdom (UK)) are addressing each of the recommendations. Drawing from the experience in these countries, the paper highlights key gaps in research, policy and practice, as well as important developments that may impact future policy and practice across Europe, recognising that challenges will differ from one country to another, particularly in lesser-resourced countries in Europe.

A review of the published and web-based literature on sarcoma and rare cancers was undertaken, using standard search terms, which were translated into French, German, Italian and Spanish to enable local language searches in relevant countries. Local language searches in Swedish were deemed unnecessary given that most of the literature is available in English. Telephone interviews were conducted in each country with local clinical and patient experts who are members of the expert group of the Sarcoma Policy Checklist. These helped fill gaps in information and obtain an up-to-date representation of the current situation in each country.

Recommendations of the sarcoma policy checklist

Research findings were discussed at several stages within the expert group, to ensure consensus in recommendations. The expert group agreed on five key areas where policymakers should focus efforts to improve care for sarcoma patients (the Sarcoma Policy Checklist) and issued recommendations for each of these areas. The Checklist was launched at the European Parliament in February 2017. The recommendations are presented in Table 1.

Results

Research findings are summarised below based on data from France, Germany, Italy, Spain, Sweden and the UK. This section aims to outline how well each country is doing in meeting the recommendations of the Sarcoma Policy Checklist, focusing on some of the most salient issues discussed during expert interviews and identified in our review of the literature.

Specialist sarcoma care

Centres of reference for the specialised management of sarcomas exist in all six countries, but they are not always formally designated by explicit quality criteria, nor formally recognised by national bodies. The establishment of EURACAN has required countries to nominate centres of reference to participate based on set criteria, formalising the designation of centres for the first time in many countries.
In Spain, for example, the Ministry of Health recently endorsed five national sarcoma reference centres to take part in EURACAN [10]. In Germany, two centres, Essen and Mannheim, cover the sarcoma domain within EURACAN. This situation may vary considerably between countries, however. France has an advanced network that connects all clinical and pathological reference centres for sarcoma. There are two clinical networks, the French Clinical Reference Network for soft tissue and visceral sarcomas (NetSarc) and the French Reference Network for bone sarcoma and rare bone tumours (ResOs), for the medical oncology part [11,12], as well as a Sarcoma Pathological Reference Network (RRePS), which enables a second expert pathological review for all STS cases to confirm diagnosis [12,13]. However, a particular issue in France is that the accreditation of reference centres is based on a centre's expertise in oncology, not surgery. Therefore, the quality of sarcoma surgery varies considerably among centres. This is not necessarily the case in other countries; for example, quality standards in the UK are based primarily on a centre's surgical expertise.

Even if reference centres are initially designated based on their adherence to specific criteria, their performance over time, in terms of meeting quality standards for surgery, radiotherapy, chemotherapy and other aspects of care, still needs to be regularly monitored [4]. Sweden has particularly sophisticated monitoring of the quality of sarcoma care through its cancer registry for extremity and trunk wall sarcoma [14]. In some countries, such as Italy, there are ongoing efforts by professional societies to clarify quality standards for specific aspects of care (e.g. surgery). In Germany, a formal certification process for reference centres for STS is currently being developed by the German Cancer Society (DKG).

Professional training

In all six countries studied, there is no formal training on rare cancers (including sarcoma) within the general medical curriculum. Training on rare cancers is also not part of the formal training of oncologists in most countries, although there are ongoing efforts to change this, for example in France and Italy. Specialist training programmes have particularly focused on surgery at the European level. For example, there is the European School of Soft Tissue Sarcoma led by the European Society of Surgical Oncology (ESSO) [15] and the eSurge programme led by hospitals in both France and Italy [16].

Other efforts target non-specialists, to encourage them to refer patients quickly to appropriate specialists and reference centres. For example, the 'On the ball' public awareness campaign led by Sarcoma UK (a patient-led charity) aims to raise awareness among general practitioners (GPs) of the 'red flag' symptoms of sarcoma, and encourages GPs to refer any suspected sarcoma case directly to specialist centres quickly [17]. Many patient organisations have called for the establishment of clear referral pathways to improve access to expert diagnosis and treatment for sarcoma [7]. Simple referral guidelines, like those for STS in Sweden (which recommend referral to reference centres before initial surgery is performed), have been shown to improve referral rates, reduce costs associated with local recurrence and result in better surgical results and overall patient outcomes [18,19]. Of patients with deep sarcoma in Sweden, 80% are referred to a regional reference centre before biopsy [20].
Clear referral pathways may help improve the accuracy of diagnosis in sarcoma, which remains an area for improvement. Data from Germany, for example, suggest that the error rate of primary diagnosis is over 60% among non-specialised pathology departments [21], while in France over 45% of first histological diagnoses were modified after a second reading in the French reference networks, possibly resulting in an alternative treatment course [22]. Similar results were found in other regions in France and in Italy [23].

Most national guidelines, as well as the recent ECCO and SPAEN recommendations [6,7], recognise that the organisation of sarcoma care in multidisciplinary teams (MDTs) is key to providing high-quality care. This is also a criterion for reference centres to be considered for inclusion into EURACAN. The networking of centres within both national networks (e.g. NetSarc) and EURACAN may provide the necessary basis for multidisciplinary care practices to develop. For example, in the UK, a National Ewing Sarcoma Multidisciplinary Team (NEMDT) advisory group meets regularly and brings together experts from around the UK to discuss patient treatment plans and best practice, and to find ways to optimise the patient pathway to improve survival rates for Ewing sarcoma patients [24]. However, the implementation of an MDT approach remains uneven between individual centres. Many centres do not have sufficient resources to implement a systematic MDT approach for sarcoma care, and appropriately trained personnel representing each required specialty are not always available. Moreover, the composition of a specialist sarcoma MDT is often not clearly defined. It is also critical for MDTs to include primary and community-based providers as well as professionals in hospitals or centres of reference, to ensure high quality of diagnosis and care across the entire care pathway [6].

Incentives for research and innovation

Across all countries, there remains a need for more basic and translational research on sarcoma, and funding to achieve this. Also, clearer regulatory guidance is needed to encourage the establishment of public-private partnerships to drive research in sarcomas; for example, on the appropriate interaction between academia and industry in these collaborations. There are, however, several important ongoing research initiatives on sarcoma, and individual countries have led focused research efforts in different areas. In 2013, France had 142 started or ongoing translational studies on rare cancers, 30% of them in the sarcoma networks (NetSarc, RRePS and ResOs), and 70 translational studies completed, 49% of them in the sarcoma networks [13]. In the UK, Sarcoma UK published a survey of sarcoma patients in 2015, which has provided important insights into patient experiences [3]. The charity has also made a strategic commitment to fund £3 million in scientific research by 2020 [25]. In Spain, research is taking place on very rare and ultra-rare sarcomas to determine their burden and improve treatment pathways.

As mentioned previously, recruiting sufficient numbers of patients into sarcoma clinical trials remains an ongoing challenge due to the small number of patients with each specific type of sarcoma [3-5]. Real-world data pooled from multiple centres, as well as established registries at national and international level, are therefore critical to gather sufficient evidence of how treatments work in practice, and to guide quality improvement efforts.
In Sweden, the National Sarcoma Quality Registry (INCA), which collects sarcoma patient data from all regions, provides an interesting opportunity for real-world data analyses. Discussions are also underway to try to link Swedish sarcoma patient data with data from other Nordic countries, as they all follow a similar data collection template. The development of EURACAN is also likely to play an important role in encouraging the collection of comparable real-world data across different centres. Efforts to develop a standardised data set for the collection of hospitalisation data are already in place.

Access to treatment and care

Access to appropriate treatment and care is a critical concern for patients with sarcoma, and rare cancers more generally. Patient groups are leading efforts in many countries to try to reduce existing disparities in access to treatments [7]. If treatments are not reimbursed, patients and their families may have to pay for them out-of-pocket, causing a considerable financial burden.

Differences in evidentiary requirements between regulatory authorities (for license approval) and Health Technology Assessment (HTA) or reimbursement bodies (for funding) remain a critical issue for patients' access to treatment. For example, the European Medicines Agency is increasingly allowing flexibility in drug regulatory pathways for orphan drugs and treatments that address clear unmet needs in given patient populations, such as sarcoma. Smaller trials and adaptive trial designs should be given due consideration for rare cancers, as well as accelerated review, conditional marketing authorisation and adaptive licensing [26-28]. This flexibility, however, is not necessarily matched by reimbursement and HTA agencies in most countries [27]. In theory, special regulatory pathways applicable to orphan drugs apply to sarcoma treatments. However, countries do not have consistent approaches to evaluating orphan drugs, creating uncertainty about the level of evidence needed to obtain regulatory approval. This situation often leads to long delays, or even denial of access for patients [4,27]. Decentralisation of reimbursement and funding decisions to the regional or local level also contributes to significant inequalities in access to new treatments for patients in many countries. Early access or compassionate use programmes often exist; however, they have not necessarily been applied to sarcoma, with some notable exceptions such as the UK.

Another concern often expressed by sarcoma patients and their representatives is their lack of involvement in reimbursement and funding decisions; for example, in helping to determine what constitutes 'meaningful benefit' from a patient perspective in funding and reimbursement decisions. Finally, awareness of and participation in sarcoma clinical trials remain inadequate, especially considering that clinical trials are often the only way patients may access potentially innovative treatments. Evidence suggests that sarcoma patients are often not aware of centres of excellence nor of ongoing clinical trials, and physicians may not be aware of existing clinical trials either [3,5]. For example, the National Sarcoma Survey (2015) in the UK found that the majority of patients (67%) were not asked whether they wanted to take part in a clinical trial and, if they were, uptake was low (22% participation) [3].

Discussion

It is important to mention that this paper is based on findings from six relatively wealthy European countries.
The situation may be very different in some of Europe's lower-income countries, where lack of available expertise and low levels of resources often result in limited access to quality care for many patients. More extensive research is required to understand the situation in lower-income countries in Europe.

In the six countries studied, there are numerous promising developments in sarcoma; however, many gaps remain. Efforts to improve access to quality sarcoma care and subsequent patient outcomes are still needed, as inconsistent availability of specialist expertise and appropriate referral patterns often result in misdiagnosis and inappropriate treatment for many patients. Strengthening cross-border healthcare initiatives may help, but ideally, every country should have at least one national reference centre for rare cancers with links to more established reference centres in another country. European organisations may also play an important role in building expertise by actively facilitating mentoring, exchange and support programmes to transfer knowledge and practice between countries [7].

Even with one centre of reference in each country and cross-border links between centres, lack of access to quality care may persist within a country. Access will often depend on the existence of suitable transport links. For example, access to a centralised sarcoma reference centre from every part of the country may be achievable in a country with radial transport links, but for countries with a varied shape and distant regions, regional centres for routine treatment would be preferable. That being said, rarer forms of sarcoma would still need to be treated centrally to retain access to true expertise. This may apply, for example, to retroperitoneal sarcomas or complex amputations in the pelvic or shoulder regions. A number of solutions may be envisaged to help improve equity in access to high-quality specialist care across a country, including harmonising quality criteria for sarcoma reference centres and producing a national treatment strategy and clinical trials portal. Where a centre may lack expertise, technology may also help address knowledge gaps, such as through e-consultations. The growth of IT systems providing decision support software may also allow for more evidence-based decisions even in smaller or less-resourced MDTs.

The certification process of 'specialist' sarcoma centres remains a critical issue. While the ideal is for centres to have formal designation with explicit quality criteria and regular quality reviews, this is not usually the case, as mentioned in the paper. The absence of explicit quality criteria is not limited to the field of rare cancers. Self-certification with peer review would be possible if governments regulated specialised treatment centres, but they generally do not. The role of the ERNs and guidelines in achieving greater harmonisation of quality criteria across designated sarcoma specialist centres remains to be seen.

As mentioned previously, the existence of an MDT approach is one of the criteria for centres to become part of the ERN for sarcoma. However, it is important to recognise that consistent implementation of MDTs remains a challenge, as sometimes even specialist sarcoma centres are focused on one aspect of treatment (e.g. surgery) and may lack other specialists to contribute to a comprehensive MDT.
Virtual MDTs linking different specialists across centres, again with the help of technology and IT, may offer a potential solution and are being explored in a number of countries, as well as in cross-border care networks. However, it is important to note that having a full MDT at a centralised specialist sarcoma centre is the preference, and technology should act as a tool to facilitate communication between specialists, not to replace specialists.

Inequities in access result in delays or denial of innovative treatments for patients, effectively limiting their treatment options. Better alignment between evidentiary requirements for regulatory and HTA/access pathways may help reduce delays in access to new treatments for patients. Low participation in clinical trials remains an important issue for sarcoma patients, often due to low patient awareness of ongoing studies. More patient involvement in clinical trial design is needed to help guide research efforts. Direct patient input into HTA and reimbursement decisions may help ensure value frameworks are closely aligned with patient priorities. Greater investment in basic, epidemiological, clinical, outcomes and translational research is still needed in sarcoma. Real-world data, aggregated from multiple centres and high-quality registries, will be vital to provide evidence of how effective different interventions are in practice, and to guide quality improvement and research efforts as a result. Yet hurdles such as lack of data standardisation need to be overcome before the potential of real-world data can be realised.

Looking beyond national solutions, several pan-European efforts deserve mention and are outlined in Table 2. It is also important to recognise the role of European initiatives aimed at rare cancers more generally. For example, the European Commission-funded RARECARE project, which later evolved into RARECARENet [29], has provided important epidemiological insights into rare cancers [1,30]. EURACAN, mentioned previously, may significantly improve the quality of sarcoma diagnosis, care, research and access to clinical trials [8,31], as well as facilitating cross-border care [32]. However, adequate funding for reference centres still needs to be secured [33]. The Joint Action on Rare Cancers (JARC), of the third EU Health Programme 2014-2020, is also important. It is a collaboration between 18 member states that aims to integrate the needs of rare cancers into national cancer plans by advancing quality of care and research on rare cancers [34,35]. The JARC provides direct support to EURACAN with implementation, in terms of operational solutions and professional guidance in quality of care, epidemiology, research and innovation, and education [35,36]. Finally, Rare Cancers Europe (RCE) is leading policy campaigns and actions on many of the key priority areas identified in the Sarcoma Policy Checklist, including: improving patients' involvement in clinical trial design and participation in clinical trials; standardising, capturing and merging big data for research purposes; improving access to rare cancer therapies; and improving education on rare cancers [5,37]. Although many of the above initiatives are still in the conceptual or early phases of implementation, they may have a marked impact on the landscape for sarcoma patients in years to come.

Conclusions

As demonstrated in this paper, there have been many promising initiatives aiming to improve the care of sarcoma patients in recent years.
Yet gaps remain, particularly in Europe's lesser-resourced countries, and we must ensure all sarcoma patients have access to appropriate information, treatment and care. Patients and their representatives should be included in the planning and evaluation of sarcoma care. A multidisciplinary approach, delivered through designated specialist centres, should remain at the heart of care for every sarcoma patient.
Is Extracorporeal Membrane Oxygenation the Standard Care for Acute Respiratory Distress Syndrome: A Systematic Review and Meta-Analysis

Background: Acute respiratory distress syndrome (ARDS) is a type of acute respiratory failure characterised by severe respiratory distress and refractory hypoxaemia. Patients with ARDS have a prolonged hospital stay and a high mortality rate. Over long-term follow-up, ARDS is found to be associated with a high incidence of long-term complications and decreased quality of life. Venovenous extracorporeal membrane oxygenation (vv-ECMO) has been widely used for the treatment of refractory ARDS. However, it is not the standard treatment recommended by ARDS guidelines.

Aim: The aim of this study was to compare the effects of ECMO (vv-ECMO) and conventional mechanical ventilation (CMV) on clinical outcomes in patients with ARDS.

Method: We searched the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library, Medline, EMBASE, Web of Science, and PubMed databases up to November 2019. We selected appropriate studies according to our inclusion and exclusion criteria, and extracted and analysed the data using RevMan software to evaluate the effectiveness of ECMO systematically.

Results: A total of 18 articles and 2,399 patients were included in this meta-analysis: 898 patients in the ECMO group and 1,501 patients in the CMV group. Treatment with ECMO may be associated with reduced 1-year mortality (95% confidence interval [CI], 0.27-0.83; p=0.009) and 60-day mortality (95% CI, 0.37-0.86; p=0.008), but increased intensive care unit (ICU) mortality (95% CI, 1.26-2.36; p=0.0007) in patients with ARDS. Extracorporeal membrane oxygenation may not be related to 30-day mortality or complications such as nosocomial pneumonia, haemorrhagic stroke, or continuous renal replacement therapy in patients with ARDS. However, some results showed heterogeneity, such as bleeding complications and in-hospital mortality. Subgroup analysis showed that ECMO treatment might increase ICU mortality (p=0.002) and nosocomial pneumonia complications (p=0.03) in patients with H1N1 ARDS.

Conclusions: Compared with CMV, ECMO contributed to lower 60-day and 1-year mortality, but increased ICU mortality, in patients with ARDS. However, H1N1 ARDS was independently associated with higher ICU mortality and nosocomial pneumonia. The results were not affected by removing retrospective controlled studies or articles published >20 years ago in the sensitivity analysis. This meta-analysis demonstrates the effectiveness of ECMO and its importance in the standard treatment of patients with ARDS.

Introduction

Acute Respiratory Distress Syndrome

First described in 1967 [1], acute respiratory distress syndrome (ARDS) is a unique type of hypoxaemic respiratory failure characterised by the acute onset of hypoxaemia and diffuse alveolar damage caused by non-cardiogenic pulmonary oedema. Without timely intervention, ARDS can evolve into multi-organ failure; therefore, it should not be underestimated. Acute respiratory distress syndrome is a multifactorial lung injury. At present, there is no clear understanding of its epidemiology and outcome. Several studies have indicated that the most common risk factors for ARDS include pneumonia and non-pulmonary sepsis [2-4]. Other susceptibility factors include smoking, alcohol, drugs, heavy blood transfusions, obesity, and genetic factors.
A prospective, multicentre study found that the morbidity and mortality of ARDS increased with age, and that the in-hospital mortality rate was 41.1% [5]. In the USA, there are an estimated 190,000 cases, 74,000 deaths, and 3.6 million hospital days annually. Another large observational study, the Large observational study to UNderstand the Global impact of Severe Acute respiratory FailurE (LUNG SAFE) [6], included 29,144 patients from 459 intensive care units (ICUs) in 50 countries, 3,022 (10.4%) of whom met the ARDS criteria over 4 weeks. Mortality increased with the severity of ARDS: for patients with mild, moderate, and severe ARDS, hospital mortality rates were 34.9%, 40.3%, and 46.1%, respectively. A small number of patients with ARDS die from respiratory failure, while most die from their primary illness or secondary complications, such as sepsis and multiple organ dysfunction syndrome. Muscle weakness after ICU discharge is a frequent complication of ARDS and usually recovers within 12 months [7]. Serious physical impairment and the decreased quality of life associated with muscle weakness can last for >24 months.

Extracorporeal Membrane Oxygenation for Acute Respiratory Distress Syndrome

At present, there are several treatment options for ARDS. Lung protective ventilation with low tidal volume, limited plateau pressure, and prone positioning are strongly recommended treatment options for ARDS, as per the 2018 guidelines [8]. High-frequency oscillatory ventilation has no advantages over conventional mechanical ventilation (CMV), and may result in higher mortality [9]. For many years, extracorporeal membrane oxygenation (ECMO) remained only a weak recommendation for ARDS owing to its significant complications and the lack of high-quality clinical research data. Extracorporeal membrane oxygenation can improve oxygenation and remove carbon dioxide, and thus reduce ventilator support (low tidal volume, low airway pressure, etc.), resting the lungs and maintaining a protective open-lung ventilation strategy in order to buy time for treatment of the original disease [10]. In recent years, with continuous progress in technology, ECMO has progressively achieved better clinical results in ARDS. The Conventional ventilation or ECMO for Severe Adult Respiratory Failure (CESAR) study [11], a UK-based multicentre trial, recommended that patients with serious but recoverable ARDS be sent to hospitals with ECMO availability. Extracorporeal membrane oxygenation not only increased the survival rate, but also the quality-adjusted life-years without disability in ARDS. However, the ECMO to Rescue Lung Injury in Severe ARDS (EOLIA) study [12], a recent international randomised controlled trial (RCT), found that 60-day mortality in the ECMO group was not significantly lower than in the CMV group. This meta-analysis combined previously published high-quality clinical research to evaluate whether ECMO should be the standard care in ARDS.

Study Selection Criteria

The inclusion criteria were based on the PICOS acronym (participant, intervention, comparison, outcomes of interest, and study design). Included patients with ARDS were identified according to the ARDS criteria [1,13-15] that were in effect when the articles were published.
Meaningful outcomes for patients with ARDS treated with venovenous (vv)-ECMO included mortality and the associated incidence of complications: 30-day mortality, 60-day mortality, 1-year mortality, ICU mortality, in-hospital mortality, nosocomial pneumonia, haemorrhagic stroke, bleeding, and the need for continuous renal replacement therapy (CRRT). The exclusion criteria were clear: patients without ARDS, those <18 years of age, pregnancy, treatment with venoarterial ECMO, none of the abovementioned outcomes, animal studies, and non-controlled studies. The scope of this screening was large, and the process of article inclusion and exclusion is shown in Figure 1.

Data Collection

Two independent investigators were responsible for extracting articles and related data based on the inclusion/exclusion criteria. Disagreements were resolved by consultation with the corresponding author (G.Z.). In addition, we tried to contact the original authors by email for incomplete data but received no response.

Quality Assessment and Data Analysis

The risk of bias of the screened RCTs was evaluated with RevMan version 5.3 (Cochrane Collaboration, Copenhagen, Denmark). The quality of non-RCTs was assessed with the Newcastle-Ottawa Scale. Data processing for the meta-analysis was done with RevMan 5.3.

Results

A total of 2,570 articles that described the effects of ECMO in ARDS were retrieved. After screening, four RCTs and 14 retrospective controlled studies (RCSs) were included in the meta-analysis; however, two of the RCTs were published >20 years ago. The study group included 898 patients with ARDS treated with ECMO and 1,501 patients with ARDS treated with CMV (control group).

Thirty-Day Mortality

Three articles (80 patients) reported 30-day mortality. As can be seen in Figure 2, ECMO may not be related to 30-day mortality. Next, the therapeutic effect of ECMO in ARDS caused by H1N1 (H1N1 ARDS) in the pneumonia subgroup was analysed. Subgroup analyses found that ECMO treatment might worsen ICU mortality in the H1N1 ARDS subgroup (p=0.002; I²=80%). However, the I² suggested significant heterogeneity among the studies. Sensitivity analyses were carried out, and the results showed that, after removing the study by Pham et al. [26], the I² of the ECMO study group with H1N1 ARDS was 0; the I² of the whole ECMO study group was 0 after the removal of the studies by Munoz et al. [30] and Pham et al. [26]. Moreover, the results did not change, suggesting they were reliable.

In-Hospital Mortality

Figure 5 shows that the effect of ECMO in ARDS might not be associated with in-hospital mortality (OR, 1.06; 95% CI, 0.81-1.38 [z=0.42; p=0.67; χ²=49.90; p for heterogeneity <0.00001; I²=84%]). Subgroup analysis showed that ECMO was not associated with in-hospital mortality in the H1N1 ARDS subgroup (p=0.90; I²=91%). The I² value suggested significant heterogeneity among the studies, and we did not find a suitable way to resolve it; therefore, this result might not be reliable.

Bleeding Complications

As shown in Figure 8, bleeding complications were more frequent in the ECMO group, although this result showed significant heterogeneity.

Haemorrhagic Stroke

As shown in Figure 9, ECMO may not be associated with haemorrhagic stroke.

Continuous Renal Replacement Therapy

Extracorporeal membrane oxygenation was also not associated with the incidence of CRRT. In the analysis of disease severity scores, heterogeneity decreased after removing one study [17] for APACHE II, and to 25% for SOFA after removing the studies by Mi et al. [12] and Noah et al. [31]. The p-value remained <0.05, indicating that the APACHE II and SOFA scores were associated with ECMO treatment.
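For readers unfamiliar with the statistics quoted above, the following Python sketch shows how a fixed-effect pooled odds ratio with its 95% CI, together with Cochran's Q and the I² heterogeneity statistic, can be computed from per-study 2x2 tables using the inverse-variance method. The event counts are invented for illustration; the actual analysis was performed in RevMan 5.3, whose Mantel-Haenszel pooling may give slightly different values.

```python
import math

# Hypothetical 2x2 tables per study: (deaths_ecmo, n_ecmo, deaths_cmv, n_cmv).
# The counts below are illustrative only, not the trial data.
studies = [(12, 40, 18, 45), (25, 60, 30, 70), (9, 30, 10, 28)]

def log_or_and_var(a, n1, c, n2):
    """Per-study log odds ratio and its variance (Woolf method)."""
    b, d = n1 - a, n2 - c
    return math.log((a * d) / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

logs, weights = [], []
for a, n1, c, n2 in studies:
    lo, var = log_or_and_var(a, n1, c, n2)
    logs.append(lo)
    weights.append(1 / var)  # inverse-variance weight

# Fixed-effect pooled log OR and its 95% confidence interval.
pooled = sum(w * lo for w, lo in zip(weights, logs)) / sum(weights)
se = math.sqrt(1 / sum(weights))
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

# Cochran's Q and the I^2 heterogeneity statistic reported throughout the paper.
q = sum(w * (lo - pooled) ** 2 for w, lo in zip(weights, logs))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled OR = {math.exp(pooled):.2f}, "
      f"95% CI {ci[0]:.2f}-{ci[1]:.2f}, I^2 = {i_squared:.0f}%")
```

An I² near 0 indicates the study effects are consistent, while values such as the 84% reported for in-hospital mortality signal substantial heterogeneity, which is why that result was judged unreliable.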
H1N1 Acute Respiratory Distress Syndrome In this meta-analysis, we found that ECMO treatment might increase the ICU mortality and the incidence of nosocomial pneumonia in patients with H1N1-ARDS. However, there has been no previous high-quality conclusion about ICU mortality. Some possibilities may account for this finding. Firstly, previous studies compared patients with H1N1 ARDS and non-H1N1 ARDS and found that patients with H1N1 ARDS had more rapidly extensive viral pneumonia with severe lung function impairment, higher body mass indexes (BMIs), and higher ICU resource consumption, required ECMO support more often, and needed longer ECMO support times and longer ICU stays [26,32,33], which may be why patients with H1N1 ARDS have a higher ICU mortality rate (in the post-pandemic H1N1 infection period). Secondly, a recent study showed that ECMO withdrawal failure was the sole factor associated with ICU mortality [34]. As a result, based on the meta-analysis, a higher incidence of nosocomial pneumonia in the ECMO group may lead to ECMO withdrawal failure, which then leads to higher ICU mortality. More bleeding complications in the ECMO group may also be a culprit. The present study found that patients in the ECMO group had higher SOFA and APACHE II scores when all studies with complete data were combined in the meta-analysis. This may also have occurred in the six included studies [21,22,[25][26][27]30] reporting ICU mortality, with bleeding complications leading to sicker patients in the ECMO group, followed, as a consequence, by higher ICU mortality. In addition, some differences in management deserve attention, such as time to the initiation of ECMO, the application of steroids, and sample size. However, the included studies did not provide complete data for these factors, so it is regrettable that they could not be analysed with specific data in order to draw conclusions. Lastly, studies have found that hyperlactataemia before ECMO and a higher dynamic driving pressure in the first 3 days of ECMO were independent risk factors for increased ICU mortality [35,36]. Nevertheless, haematological disease, early acute kidney injury, corticosteroid therapy, and early haemodynamic failure might all be associated with the higher mortality rate in H1N1 ARDS [37,38]. Therefore, the factors influencing ICU mortality are numerous, and more rigorous studies are needed to confirm the relationship between ECMO support and the ICU mortality of patients with ARDS. Sensitivity Analysis The meta-analysis included four RCTs and 14 RCSs. The studies were mostly retrospective and had been published over a long period of time, during which ECMO technology and knowledge regarding the safety of mechanical ventilation changed greatly. Furthermore, patients with ARDS receiving ECMO, whether as part of an RCT or RCS, were prone to having more serious conditions and higher disease severity scores (SOFA and APACHE II) (Figures 11 and 12). As can be seen in Figures 2-12, the results of the analysis did not change even when RCSs were excluded in the sensitivity analysis. Of the four RCTs, two were published >20 years ago. Subgroup analysis showed that the results did not change after the removal of these two RCTs. Therefore, our conclusion is reliable.
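The sensitivity analyses described here (re-pooling after excluding individual studies or all RCSs) follow a leave-one-out scheme. A hedged sketch, reusing pool() and the placeholder studies list from the previous block (the labels are illustrative, not the cited studies):

```python
def leave_one_out(tables, labels):
    # Drop each study in turn and re-pool the remainder, mirroring the
    # "I2 fell to 0 after removing the study by Pham et al." checks above.
    for i, label in enumerate(labels):
        subset = tables[:i] + tables[i + 1:]
        or_, ci, z, p, q, i2 = pool(subset)
        print(f"without {label}: OR={or_:.2f}, I2={i2:.0f}%")

leave_one_out(studies, ["study A", "study B", "study C"])
```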
Factors Influencing Extracorporeal Membrane Oxygenation Combining the results of this paper and past studies [39], it can be concluded that ECMO plays an important role in the treatment of ARDS and will certainly be included as a standard treatment in future guidelines. However, the factors influencing ECMO treatment have not been clearly stated. Ultrasound is a convenient and commonly used means of monitoring the disease course, which is critical for lung assessment and for the early identification of complications in the ICU. Daily lung ultrasound assessment is recommended during ECMO treatment in patients with ARDS [40]. The lactate clearance within 72 hours of the initiation of ECMO may contribute to risk stratification and prediction of mortality in patients with ARDS [41]. However, some studies do not support this conclusion [42,43]. The ECMOnet score [44] has long been used to predict the efficacy of ECMO, and it also serves as a tool to evaluate the indications for, and timing of, ECMO in ARDS. Research has found that, in general, obese (BMI >30 kg/m²) patients with refractory ARDS are more likely to need ECMO treatment [45,46], but there is no evidence of a relationship between obesity and higher mortality. Right ventricular hypertrophy is a side effect of ECMO support, which may be attributed to increased afterload and higher BMI. Right ventricular hypertrophy also has a negative impact on ICU mortality [45]. Limitations Firstly, many important variables influenced the results of the study, such as the duration of ECMO, the CMV settings, the number of mechanical ventilation days, prone positioning before ECMO, different populations, and time to initiation of ECMO. With regard to the 14 observational studies included, this review cannot overcome the limitations of the primary studies. Most of the studies included in this meta-analysis did not report data on these indicators in detail. Secondly, even though higher disease severity scores (SOFA and APACHE II) were associated with ECMO treatment, each study outcome included different articles, and more detailed data are needed to see whether outcomes are affected by disease severity scores. Conclusions The meta-analysis showed that ECMO was associated with reduced 60-day and 1-year mortality, but increased ICU mortality, compared with CMV in patients with ARDS. Extracorporeal membrane oxygenation may have different effects on different types of ARDS, such as H1N1 ARDS. In the subgroup analysis, ECMO treatment increased ICU mortality and the incidence of nosocomial pneumonia in patients with H1N1 ARDS. Extracorporeal membrane oxygenation can be used as a standard step in the management of ARDS. It should be used immediately when high-risk criteria are satisfied, rather than as a late-stage rescue therapy in end-stage ARDS or multiorgan failure. However, the appropriate time at which to use ECMO, the best applicable population, the clinical characteristics of patients, evaluation of efficacy, the best way in which to reduce the complications of ECMO, and the ARDS pathogen type with the best treatment effect are all problems that need to be solved. Therefore, it is hoped that there will be more high-quality research to address these issues. Funding Sources We report no relevant funding sources associated with this manuscript. Conflicts of Interest There are no conflicts of interest to disclose. Author Contributions J.W.
was involved in the study design, data curation, formal analysis, investigation, methodology, project administration, resources, software, validation, visualisation, writing (original draft and final content), and review and editing. Y.W. was involved in data curation, formal analysis, investigation, methodology, resources, validation, visualisation, and writing (original draft). X.K.X. and T.W. carried out investigation and validation. G.Z. was involved in the study conceptualisation, project administration, software, supervision, visualisation, writing, review, and editing. All authors read and approved the manuscript.
Controlled RISC loading efficiency of miR168 defined by miRNA duplex structure adjusts ARGONAUTE1 homeostasis Abstract Micro RNAs (miRNAs) are processed from precursor RNA molecules with precisely defined secondary stem-loop structures. ARGONAUTE1 (AGO1) is the main executor component of the miRNA pathway, and its expression is controlled via the auto-regulatory feedback loop activity of miR168 in plants. Previously we have shown that AGO1 loading of miR168 is strongly restricted, leading to abundant cytoplasmic accumulation of AGO-unbound miR168. Here, we report that the intrinsic RNA secondary structure of the MIR168a precursor not only defines the processing of miR168 but also precisely adjusts AGO1 loading efficiency, determining the biologically active subset of the miR168 pool. Our results show that modification of the miRNA duplex structure of the MIR168a precursor fragment, or expression from artificial precursors, can alter the finely adjusted loading efficiency of miR168. In the dcl1-9 mutant, where production of most miRNAs except miR168 is severely reduced, this mechanism ensures the elimination of unloaded AGO1 proteins via enhanced AGO1 loading of miR168. Based on these data, we propose a new competitive loading mechanism model for miR168 action: the miR168 surplus functions as a molecular buffer for controlled AGO1 loading, continuously adjusting the amount of AGO1 protein in accordance with the changing size of the cellular miRNA pool. INTRODUCTION RNA interference (RNAi) or RNA silencing is a widespread gene regulatory mechanism, playing roles in diverse biological processes in most eukaryotic organisms (1,2). The hallmark molecules of RNAi are the 21-24 nt long small RNAs (sRNAs). A subclass of sRNAs are the micro RNAs (miRNAs) (3). They predominantly control endogenous gene expression (4,5) to coordinate developmental processes and stress responses (6,7). During the biogenesis of miRNAs, a genome-encoded RNA, the primary miRNA (pri-miRNA), with a specific stem-loop secondary RNA structure, is subjected to subsequent cleavages producing 21-24 nucleotide (nt) long miRNA duplexes (3,8). The stem-loops of plant pri-miRNAs exhibit extreme variability in length and structure compared to their animal counterparts (9,10). In plants, miRNA precursors (pre-miRNAs) are typically excised initially from pri-miRNA transcripts by the action of DICER-LIKE1 (DCL1), with a cleavage event near the base of the stem. The next DCL1 cleavage occurs at a 21-24 nucleotide distance from the end of the stem, resulting in a miRNA duplex (3,11). In specific cases, loop-to-base or bidirectional maturation by DCL1 can also take place (11)(12)(13). The generated miRNA intermediate duplexes consist of a guide (miRNA) and a passenger (miRNA*) strand with a 2-nt 3′ overhang and a 5′ phosphate at each strand. Plant pri-miRNAs produce miRNAs from either the 5′ or the 3′ arm of their precursors. Precise and efficient processing depends on pri-miRNA structural signals, such as the terminal loop structure, a 14-15 bp paired stem proximal to the miRNA duplex, and base-pairing at a defined region of the stem distal to the duplex (3,14). The accurate and efficient processing of pri-miRNAs requires DCL1 co-factors, like the G-patch domain protein TOUGH (TGH), the zinc finger protein SERRATE (SE), and the dsRNA-binding domain protein HYPONASTIC LEAVES1/DOUBLE-STRANDED RNA-BINDING PROTEIN 1 (HYL1/DRB1) (5,15). The miRNA/miRNA* duplexes are methylated by the sRNA methyltransferase HUA Enhancer 1 (HEN1) at their 3′ ends, which aids their protection against exonucleases (16).
The methylated miRNA/miRNA* duplexes, according to the previous model, were thought to be exported to the cytoplasm by HASTY (HST), the protein homologous to animal Exportin 5 (EXPO5) (17). Subsequently, miRNA/miRNA* duplexes are incorporated into the RNA-induced Silencing Complex (RISC), the effector of RNA silencing. During RISC assembly, the guide miRNA strand is loaded onto an ARGONAUTE protein, the central component of RISC, while the miRNA* is ejected and degraded (1). AGO1 is the major effector protein for miRNAs among the ten AGO proteins encoded by Arabidopsis, which are specialized for various RNAi pathways but often display functional redundancies (18). Recently, it has been shown that AGO1-containing RISC is mainly assembled in the nucleus and exported to the cytosol as a nucleoprotein complex by the EXPO1 transporter (19). The miRNA-programmed AGO-RISC identifies its RNA targets via complementary base pairing and mediates their repression through cleavage (slicing) and/or translational inhibition (1). Diverse intrinsic structural features of the miRNA/miRNA* duplexes determine their AGO loading specificities (20). The guide strand is selected according to the thermodynamic stability of the miRNA duplex ends (21). The 5′ nucleotide predominantly determines the sorting preference of miRNAs into specific AGO proteins (22,23). AGO1 exhibits a strong preference for sRNAs having uridine at their 5′ end (5′ U); consequently, the majority of miRNAs fulfil this requirement, ensuring their sorting into AGO1. Structural characteristics of the miR165/166 and miR390 miRNA duplexes target them specifically into AGO10 and AGO7, respectively (24,25). The AGO1 and AGO10 sorting properties of miR168 are also determined by metastable structural configurations of its precursor (26). Transient expression studies revealed that distinct secondary structural motifs of the miRNA/miRNA* duplexes control AGO1 and/or AGO2 sorting (27). Sorting of various miRNAs into different AGO-RISCs according to structural and/or sequence characteristics therefore represents an important regulatory checkpoint of the RNAi pathway. However, recently another checkpoint of miRNA AGO loading was revealed by the identification of a cytoplasmic, protein-unbound pool of miRNAs (28). This observation indicated that a highly controlled post-production regulatory mechanism is able to adjust the loading efficiency of particular miRNAs into the same AGO-RISC. This control mechanism can determine the biologically active portion of the produced miRNAs in the given cellular environment. Moreover, it was also suggested that RISC-loading efficiencies of distinct canonical 5′ U miRNAs are predominantly controlled by their diverse precursor RNAs. AGO1 protein is an absolutely essential trans factor of the miRNA pathway in Arabidopsis. Null ago1 mutations are lethal, and hypomorphic ago1 mutants show severe developmental aberrancies (29). Moreover, over-accumulation of AGO1 also induces disturbances in miRNA levels and extremely drastic phenotypic/developmental alterations (30). Besides, AGO1 is also the central effector in siRNA-induced post-transcriptional gene silencing and antiviral immunity pathways, and the target of virus RNAi suppressors (31,32).
The amount of functional AGO1 protein is precisely regulated through transcriptional and post-transcriptional feedback loop mechanisms: (i) miR168-programmed AGO1-RISC controls the expression of AGO1 mRNA (33), (ii) AGO1 cleavage leads to production of secondary small interfering (si)RNAs that further repress AGO1 protein expression (34), (iii) AGO1 and MIR168 gene expressions are regulated co-transcriptionally (30), (iv) the existence of two MIR168 genes producing primary transcripts (pri-MIR168a and pri-MIR168b) with both distinct and overlapping functions gives an additional layer to the control of AGO1 (35), (v) miR168 is stabilized by AGO1 binding but insensitive to DCL1 levels (30). This complex multilayer regulation of AGO1 ensures robust and precise functioning of the RNA silencing pathway itself. Intriguingly, transgenic over-expression of miR168 induces limited phenotypic alterations such as delayed flowering and leaf serration (30). Recently, it has been shown that miR168 accumulates dominantly in a protein-unbound form in the cytoplasm, suggesting a regulatory step at AGO1 loading (28). Why miR168 is expressed and maturated in such great quantities, far exceeding its competence to be loaded into AGO1, and whether the unbound miR168 has any biological roles, remain entirely elusive. Given the importance of miRNA pathway self-regulation through the AGO1 feedback, here we investigated in more depth the effect of miR168/miR168* duplex structure on AGO1-loading efficiency. First, we have shown that intrinsic structural features precisely calibrate the loading efficiency of miR168/miR168* duplexes. Structural alterations of the miR168/miR168* duplex resulted in changes of loading efficiency, in both a positive and a negative manner, causing changes in AGO1 protein amount and phenotypic alterations. Moreover, by altering the size of the competing cellular miRNA pool, we observed the establishment of a new AGO1 protein steady state level. In general, our data reveal a new RNAi regulatory action where not only the production rate but also the finely tuned competitive loading rate into the RISC determines the biological activity of a miRNA in the given sRNA environment. Our findings suggest that the produced miR168/miR168* surplus, coupled with structurally determined loading efficiency, serves as a pivotal self-calibrator tool of the miRNA pathway. Plant material and growth conditions Arabidopsis thaliana seeds were surface sterilized and, after incubation for three days at 4˚C, transgenic, mutant and wild type Columbia plants were germinated at 21˚C on MS agar medium supplied with 1% sucrose, with or without 50 µg/ml kanamycin, respectively. Seedlings were transferred into Jiffy peat blocks after seven days, and spent three weeks there before being planted in pots filled with soil. Plants were grown under 8 h light/16 h dark cycles at 21˚C until planting, and were then moved to a light room under 16 h light/8 h dark at 21˚C. Nicotiana benthamiana plants were grown under the light room conditions described above. Plasmid constructs All constructs were built using the pGreen binary vector system (pGreen0029) and the 35S cassette according to the instructions of the manufacturer (http://www.pgreen.ac.uk). cDNA was produced with the RevertAid First Strand cDNA Synthesis Kit (Thermo Fisher Scientific) from A. thaliana RNA in all cases. Constructs of MIR168a, MIR168b, MIR156a and MIR171a contained the region 10 bp upstream and downstream of the miRNA stem-loop structure.
For AMIRs, we used the modified hvu-MIR171 stem-loop described previously (36). PCR mutagenesis was applied to alter the respective nucleotides in the passenger strand of the MIR168 stem-loop, and to change the duplex part of hvu-MIR171 to miR168a and its respective passenger strand. For the AGO1-sensor, a 558-bp-long cDNA fragment containing the miR168 target site was amplified and fused in-frame to the 5′ part of GFP. Primers used to create the constructs are presented in Additional file 2: Supplementary Table S1. All constructs were introduced into the Agrobacterium tumefaciens AGL1 strain by electroporation (360 Ω, 25 µF, 2.5 kV; Bio-Rad) in the presence of the pSoup helper plasmid. Transient assay Young leaves of six-week-old N. benthamiana were infiltrated with the respective mixture of Agrobacterium tumefaciens (AGL1) suspensions at an optical density at 600 nm (OD600) of 1.0, containing sensor, miRNA-producing and p14 constructs as described previously (37). P14, as a suppressor of the siRNA pathway, was included in every experiment at uniform concentration (38), so it did not interfere with observation of relative signal reduction. Presence of p14 was checked by northern blot (Additional file 1: Supplementary Figure S1C) according to the protocol described previously (38,39). To reduce the differences in miRNA production ability of the different constructs, normalized amounts were applied and mixtures were supplemented with A. tumefaciens (AGL1) containing the empty pGreen0029 vector. For a detailed compilation of the infiltration mixtures, see Additional file 2: Supplementary Table S2. Samples were taken on the third day post-infiltration; four discs of 1 cm diameter were pooled from patches of separate leaves for individual constructs. Samples were collected in parallel from both sides of the same leaves. Every miRNA-producing/sensor construct combination was tested on four to five plants, and each experiment was repeated at least three times. Transgenic line production Arabidopsis thaliana Columbia ecotype plants were transformed with the appropriate miR168-producing construct according to the standard floral dip protocol (40). Transformant T0 plants were selected on MS plates supplemented with 50 µg/ml kanamycin. Four weeks after planting, their miR168 expression profile in young rosette leaves was analysed, and plants exhibiting the appropriate miR168 over-expression level were self-pollinated and used to generate transgenic lines. Homozygous lines were produced by self-pollination during two further generations. Selection was based on kanamycin resistance and a consistent level of miR168 expression. To demonstrate the effect of over-expression, lines with comparable expression levels were selected within one panel. To ensure the comparability of the transgenic lines used, different categories of over-expression levels were investigated: 10-35 times for MIR168a and MIR168-4bp, 2-6 times for MIR168a and MIR168-3mm, and 8-19 times for MIR168a and AMIR1-2 lines. Gel-filtration assay Gel-filtration-based size separation of crude extracts using a Superdex-200 column was performed as described previously (41,42) with minor modifications. The optimized separation buffer contained 50 mM Tris-HCl (pH 7.5), 10 mM NaCl, 5 mM MgCl2 and 4 mM DTT. The 48 gel-filtration fractions were divided into two sets: RNA was extracted by the phenol-chloroform method from the odd fractions, while the even fractions were used for protein purification using acetone precipitation.
Crude extracts were prepared from 0.3 g of plant material collected from leaves of N. benthamiana plants three days post-infiltration (Figure 5D and Additional file 1: Supplementary Figure S4A, B and C) and from young rosette leaves of 6-week-old Arabidopsis thaliana plants (Figure 1D). For a given panel, all samples were collected in parallel and the gel-filtration runs were carried out subsequently with the same parameters. For combined protein and RNA analysis, samples (Additional file 1: Supplementary Figure S2B) were collected, homogenized in an ice-cold mortar in 355 µl of extraction buffer (0.1 M glycine-NaOH, pH 9.0, 100 mM NaCl, 10 mM EDTA, 2% sodium dodecyl sulfate and 1% sodium lauroylsarcosine) and divided into two aliquots. To one part (60 µl) an equal amount of 2× Laemmli buffer was added; this was boiled for 5 min and then centrifuged for 5 min. The remaining part was supplemented with 355 µl of extraction buffer and was used for RNA extraction with the standard phenol-chloroform method. This method ensures the comparability of protein and RNA samples within one panel. For the analysis of gel-filtration blots, images were acquired with ChemiDoc equipment in Colorimetric mode. Volume intensities of the four most prominent RISC-loaded and unbound fractions were measured and summed. Loading efficiency (LE) was calculated as the RISC-loaded volume intensity divided by the total sum of the RISC-loaded plus unbound volume intensities, and is represented as a percentage. Within one panel, miR168 expression of all the different constructs was detected with the same exposure time, and images of different miRNAs were produced by subsequent probing of the same membrane after washing. Between the two hybridizations, membranes were checked for activity with phosphorimager screens (Amersham). Northern blotting To detect p14, 6 µg of the same RNA as was used to detect miR168 was separated on formaldehyde agarose gels, blotted onto Hybond NX membrane (GE Healthcare) by capillary transfer, UV crosslinked and subjected to hybridization with a radiolabeled PCR product of p14 at 65˚C as described previously (39). The probe was labelled using the DecaLabel DNA Labelling Kit (Thermo Fisher Scientific) according to the manufacturer's instructions. The image was created with a phosphorimager screen (Amersham). Western blotting 5 µl of extract of infiltrated N. benthamiana leaves, or 20 µl of Arabidopsis samples, were separated on 10% or 8% sodium dodecylsulphate-polyacrylamide gels, blotted overnight to PVDF Transfer Membrane (Hybond-P; GE Healthcare, Freiburg, Germany) using wet tank transfer and subjected to western blot analysis. Membranes were blocked using 5% non-fat dry milk in phosphate-buffered saline (PBS) containing 0.05% Tween 20 (PBST) for 60 min. Blots were cut in two, and the respective parts were incubated with anti-BiP (Agrisera, AS09 481) for 1 h or with anti-AGO1 (Agrisera, AS09 527) for 2.5 h at a dilution of 1:7500 in 1% non-fat dried milk in 1× PBST. The AGO1-sensor was also detected in Supplementary Figure S1A with anti-EGFP (Agrisera, AS132700) at a dilution of 1:7500. After washing in PBST, blots were incubated with a secondary goat anti-rabbit IgG HRP-conjugated antibody (Agrisera, AS09 602) for 1 h at a dilution of 1:10 000 in 1× PBST with agitation. Blots were developed with High Clarity Western ECL (Bio-Rad), and exposure was made using ChemiDoc (Bio-Rad) equipment in signal accumulation mode.
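The loading-efficiency (LE) definition given in the gel-filtration analysis above is simple arithmetic over the densitometry readings; a minimal Python sketch with made-up intensities (not measured values):

```python
def loading_efficiency(loaded, unbound):
    # LE (%) = summed volume intensity of the most prominent RISC-loaded
    # fractions / (RISC-loaded + unbound), as defined above.
    return 100.0 * sum(loaded) / (sum(loaded) + sum(unbound))

# Hypothetical volume intensities of the four strongest HMW (RISC-loaded)
# and the four strongest protein-unbound fractions:
hmw_fractions = [120, 95, 80, 60]
unbound_fractions = [540, 480, 390, 310]
print(f"LE = {loading_efficiency(hmw_fractions, unbound_fractions):.1f}%")  # ~17%
```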
Immuno-precipitation For crude extracts, 0.4 g of seedlings (Figures 2E, 3E, 4E and Additional file 1: Supplementary Figure S5A, B and C) or the same amount of small rosette leaves (Figure 5F and Additional file 1: Supplementary Figure S5D) were homogenized in four volumes of lysis buffer (10 mM Tris-HCl pH 7.6; 1 mM EDTA; 150 mM NaCl; 10% glycerol; 0.5% Nonidet P-40; 5 mM NaF; 1 mM dithiothreitol; 0.5 mM Na3VO4; 1 mM phenylmethylsulfonyl fluoride), and centrifuged three times at 4˚C in fresh tubes to get rid of cellular debris. 100 µl aliquots of the extracts were used to purify RNA by the phenol-chloroform method and protein by adding an equal volume of Laemmli buffer (2×). RNA was dissolved in 20 µl of nuclease-free water. 1 ml of extract was pre-incubated with 3 µl of anti-AGO1 HRP-conjugated antibody (Agrisera) for 2 h at 4˚C with agitation, then applied to the Dynabeads Kit according to the manufacturer's instructions. The immuno-precipitated fraction was eluted in 20 µl, from which 10 µl was used to purify RNA. Immuno-precipitated RNA samples were dissolved in 20 µl of nuclease-free water. To the remaining 10 µl an equal volume of Laemmli buffer was added, and this was used as the protein sample. Input RNA samples for northern blot were diluted 40 times before loading, and the same volume of input and immuno-precipitated samples was applied to the gel. Northern and western blots were carried out as described above. Volume intensities of the miR168 blots were measured with ImageLab 5 (Additional file 1: Supplementary Figure S5D). High-throughput sequencing (HTS) To create cDNA libraries for sequencing, high-quality RNA samples were purified by the phenol-chloroform method from 0.3 g of bulked seedlings of the respective homozygous lines. 20 µg of the samples were loaded onto separate polyacrylamide gels, the 21-22 nt enriched small RNA fraction was isolated, and libraries were prepared only from this fraction using the TruSeq Small RNA Library Preparation Kit (Illumina, San Diego, CA, USA) and the modified protocol described earlier (46). Sequencing was carried out on a HiScanSQ by UD-Genomed (Debrecen, Hungary) with 50 bp, single-end chemistry (8 samples/sequencing lane). QIAGEN CLC Genomics Workbench 20 was used for sequence analysis. First, raw sequences were subjected to quality control, adapters and stop sequences were trimmed, and reads within the 15-30 nucleotide size range were used for further analysis. Trimmed read numbers of the different libraries varied between 1.5 and 7.7 million, and were used as the basis of reads-per-million (RPM) calculations. The size-selected reads were mapped to the wild type ath-pri-MIR168a, ath-pri-MIR168b and the respective modified precursor sequences using the Map Reads to Reference tool. The created alignments were extracted to new sequence lists, and with the help of the Microarray and Small RNA Analysis tool of CLC Genomics Workbench, reads of individual sequences were counted and exported to a single Excel file. The selected complementary reads were used for further analysis. For analysis of AGO1-derived siRNAs, the whole cDNA sequence of ath-AGO1 mRNA was used as reference.
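The size selection and RPM normalisation were performed in CLC Genomics Workbench; conceptually the steps reduce to the sketch below. The read list and library size are invented, and the reference string is the canonical ath-miR168a sequence as annotated in miRBase (an assumption on our part, not stated in the methods):

```python
def rpm(count, library_size):
    # Reads per million: a raw count normalised by the total number of
    # trimmed reads in the library (1.5-7.7 million per library above).
    return count * 1_000_000 / library_size

MIR168 = "UCGCUUGGUGCAGGUCGGGAA"                  # canonical ath-miR168a (assumed)
reads = [MIR168, "ACGU", MIR168]                  # toy trimmed reads
kept = [r for r in reads if 15 <= len(r) <= 30]   # 15-30 nt size selection
miR168_rpm = rpm(kept.count(MIR168), library_size=2_500_000)
print(f"miR168 = {miR168_rpm:.2f} RPM")
```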
Over-expression of miR168 from ath-MIR168 precursor fragments induces limited AGO1 down-regulation First, we investigated the AGO1-controlling efficiency of miR168 over-produced from the wild type A. thaliana MIR168a precursor fragment containing the hairpin structure and 10 base pairs up- and downstream. Due to the limited sensitivity of the AGO1 antibody for detecting endogenous AGO1 in Agrobacterium-infiltrated N. benthamiana leaves, we built a sensor construct expressing an AGO1-GFP fusion protein (AGO1-sensor; Figure 1A). The AGO1-sensor was expressed transiently in Nicotiana benthamiana leaves in the presence or absence of the 35S::pri-MIR168a (MIR168a) precursor binary construct (Figure 1A). To eliminate siRNA-mediated, transgene-induced RNAi, the viral p14 silencing suppressor construct was added to the Agrobacterium infiltration mix. The expression of p14 mRNA was checked by northern blot analyses in the experiments (Additional file 2: Supplementary Table S2, Additional file 1: Supplementary Figure S1C) (38,47). The robust miR168 over-expression, however, resulted in only a moderate decrease in the GFP signal of the AGO1-sensor compared to control infiltration with the empty vector (Figure 1B, Additional file 1: Supplementary Figure S1A). The AGO1-sensor also showed moderate down-regulation. To further investigate the regulatory efficiency of miR168 on AGO1 accumulation, we produced transgenic A. thaliana (Columbia) plants over-expressing both the wild type ath-MIR168a and ath-MIR168b precursor fragments (MIR168a and MIR168b). In accordance with previous work (30), transgenic over-expression of MIR168a caused only minor changes in phenotype. The overall look and fertility of the transgenic plants resembled the wild type. We observed only minor developmental alterations, like serrated rosette leaves and delayed flowering, in the transgenic lines (Figure 1C, Additional file 1: Supplementary Figure S2A). The severity of the phenotypes correlated with miR168 accumulation levels. In MIR168a over-expressing transgenic lines we observed only a moderate down-regulation of the AGO1 protein level compared to wild type plants (Figure 1C, Additional file 1: Supplementary Figure S2B). Similarly to previous results (30), we detected slightly decreased accumulation of miR159 in parallel with the reduced AGO1 level (Figure 1C). Over-expression of MIR168b induced similar phenotypic alterations and was also associated with moderate AGO1 protein down-regulation in young leaves (Additional file 1: Supplementary Figure S2A and B). Since MIR168a and MIR168b transgenic plants exhibited similar phenotypes and AGO1 down-regulation properties, only MIR168a lines were used in the following experiments. The limited impact of miR168 over-expression on AGO1 protein levels, detected in both the transient and the stable transformant systems, correlated very well and suggested that miR168 inefficiently programs AGO1-RISC complexes. To investigate the loading efficiency of miR168 into AGO1-RISC, we employed a size-separation gel-filtration assay (42). Crude extracts of young leaves were loaded onto the size-separating column and the collected fractions were analysed for their miRNA and AGO1 protein content. In line with our previous data, miR159 was present predominantly in high molecular weight (HMW) AGO1-RISC-containing complexes. In contrast, miR168 accumulated mainly in protein-unbound form in the same sample, and only a minority of miR168 was loaded into HMW AGO1-RISC (28,42) (Figure 1D). Similarly, as described previously, the elevated miR168 level resulted in only a moderate increase in HMW AGO1-RISC loading of miR168 (Figure 1D). This observation indicates that the massive excess of miR168 matured from the over-produced wild type MIR168a precursor fragment is not able to incorporate into AGO1-RISC efficiently. This restricted loading efficiency of miR168 resulted in only a moderately reduced AGO1 protein level.
The majority of the produced miR168 accumulated in fractions representing protein-unbound miRNAs. This phenomenon is characteristic of miR168, since transient or transgenic over-expression of miR159 and miR171 results in almost total or very efficient loading into HMW AGO1-RISC, respectively (28). In MIR168a over-expressing plants, miR159 preserved its well-loading feature (Figure 1D). This observation confirms that it is not the unavailability of free, unloaded AGO1 proteins that limits the incorporation of miR168 into AGO1-RISC. Altogether, the observation that MIR168a precursor fragment-mediated miR168 over-accumulation is not associated with drastically enhanced AGO1-RISC loading implies that the AGO1 loading efficiency of miR168 is strictly regulated. Modification of miR168/miR168* duplex structure of MIR168a precursor fragment can further reduce AGO1 loading efficiency According to the gel-filtration experiments, miRNAs processed from structurally different precursors can be incorporated into AGO1-RISC to varying extents (28) (Figure 1D). In silico analyses of the miR168/miR168* duplex structures encoded by the genomes of various plant species revealed a dominantly conserved nucleotide mismatch at the fourth nucleotide of the duplex. Such structural features may affect AGO1-loading efficiency. To test whether this mismatch has any impact on AGO loading, we created a construct in which base pairing at the fourth position was introduced by modification of the miR168* strand only (MIR168-4bp precursor fragment construct, Figure 2A, Additional file 1: Supplementary Figure S2C). Transient over-expression of MIR168-4bp by agroinfiltration resulted in a higher GFP signal coupled with elevated miR168 accumulation (Additional file 1: Supplementary Figure S1A and B). This observation indicates that, despite the high production rate, the biological activity of MIR168-4bp-derived miR168 species was reduced. To corroborate these observations, we generated multiple independent MIR168-4bp stable transformants. MIR168-4bp and wild type MIR168a over-expressing lines having comparable miR168 levels were selected and their phenotype and AGO1 content were analysed. The selected MIR168-4bp lines showed less delay in flowering and higher AGO1 protein levels than the corresponding MIR168a lines, suggesting reduced control of the AGO1 level (Figure 2C, Additional file 1: Supplementary Figure S3B-E). Although the severity of the delayed flowering phenotype and AGO1 down-regulation correlated with the miR168 level, even extreme over-expression of MIR168-4bp-derived miR168 could not provoke a greater effect than that observed for the MIR168a control line (Additional file 1: Supplementary Figure S3B-E). In gel-filtration experiments, MIR168-4bp over-expressing plants showed reduced HMW-RISC loading efficiency of miR168 relative to MIR168a over-expressing plants (Figure 2D, Additional file 1: Supplementary Figure S4A). This observation was further confirmed by AGO1 immuno-precipitation experiments detecting relatively decreased accumulation of miR168 in MIR168-4bp versus MIR168a transgenic plants (Figure 2E, Additional file 1: Supplementary Figure S5A). Altogether, these data suggest that establishment of miR168/miR168* duplex base-pairing at the fourth nucleotide does not inhibit the production of miR168 but decreases its AGO1-RISC loading capacity, leading to a new AGO1 protein steady state level compared to wild type precursor fragment (MIR168a) over-expression.
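The duplex mismatch positions discussed in this and the next section (the conserved mismatch at position 4, the 4th/15th mismatches of ath-miR168a, and the 4th/9th/12th mismatches of hvu-miR171) can be read off the duplex alignment mechanically. The sketch below assumes an idealised, bulge-free duplex with canonical 2-nt 3′ overhangs and treats G:U wobbles as mismatches; the guide is the canonical ath-miR168a sequence (an assumption), while the star strand is a synthetic stand-in constructed so that the mismatches fall at positions 4 and 15 — it is not the real miR168* sequence:

```python
WC = {"A": "U", "U": "A", "G": "C", "C": "G"}

def mismatch_positions(guide, star):
    # In an idealised duplex with 2-nt 3' overhangs, guide position i
    # (0-based) pairs antiparallel with star position len(star)-3-i.
    # Only Watson-Crick pairs are accepted; bulges and G:U wobbles
    # are outside the scope of this sketch.
    assert len(guide) == len(star)
    mismatches = []
    for i in range(len(guide) - 2):       # last 2 nt = guide 3' overhang
        if star[len(star) - 3 - i] != WC[guide[i]]:
            mismatches.append(i + 1)      # report 1-based guide positions
    return mismatches

guide = "UCGCUUGGUGCAGGUCGGGAA"           # canonical ath-miR168a (assumed)
star  = "CCCGCCCUGCACCAAACGAUU"           # synthetic illustrative star strand
print(mismatch_positions(guide, star))    # -> [4, 15], as in ath-MIR168a
```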
AGO1 loading efficiency can be enhanced by modifying the miR168/miR168* duplex structure of MIR168a precursor fragment In contrast to miR168, Arabidopsis miR171 exhibits a higher AGO1-RISC loading efficiency (28). Remarkably, in barley, hvu-miR168 is also inefficiently loaded, while hvu-miR171 is efficiently loaded into HMW AGO1-RISC (Additional file 1: Supplementary Figure S6A). These observations suggest that the loading properties of miRNAs are precisely set and conserved through evolution. In our previous work we successfully used a partial hvu-MIR171 precursor to produce efficient artificial miRNAs (amiRs) (36). Next, we attempted to increase the loading efficiency of miR168 into AGO1-RISC by remodelling the secondary structure of the miR168/miR168* duplex to mimic that of the hvu-miR171/miR171* duplex. In hvu-MIR171, the miRNA duplex contains three mismatches at the 4th, 9th and 12th positions, and the mature miRNA originates from the 3′ arm of the stem-loop structure. In contrast, the ath-MIR168a miRNA duplex has two mismatches at the 4th and 15th positions, and the mature miR168 originates from the 5′ arm of the precursor (Additional file 1: Supplementary Figure S2C). Mutations were introduced into the passenger strand of the ath-MIR168a precursor fragment to create the three mismatches at the appropriate positions, producing the MIR168-3mm precursor fragment construct (Figure 3A, Additional file 1: Supplementary Figure S2C). The transient test of MIR168-3mm revealed that, despite the profoundly reduced production rate of miR168, it exhibits an increased capacity to down-regulate the AGO1-sensor compared to the MIR168a construct (Figure 3B; Additional file 1: Supplementary Figure S1). In accordance with this, leaf patches agro-infiltrated with MIR168-3mm showed remarkable GFP signal reduction and a low level of AGO1-sensor accumulation compared to MIR168a-infiltrated leaves (Figure 3B, Additional file 1: Supplementary Figure S1A and B). To test whether the enhanced activity of MIR168-3mm-originated miR168 species is caused by their enhanced RISC-loading efficiency, gel-filtration experiments were performed. Indeed, we found that MIR168-3mm-derived miR168 incorporates more efficiently into HMW-RISC, bringing about the enhanced down-regulation of AGO1 protein (Figure 3D, Additional file 1: Supplementary Figure S4B). Immuno-precipitation of AGO1 from the MIR168-3mm transgenic line also confirmed the tendency towards enhanced loading efficiency of miR168 compared to the MIR168a line (Figure 3E, Additional file 1: Supplementary Figure S5B). These data imply that intrinsic structural features of the wild type miR168/miR168* duplex restrictively regulate AGO1 loading of miR168. MiR168 produced from hvu-MIR171 stem-loop-based artificial constructs exhibits increased AGO1 down-regulation capacity The MIR168-3mm construct exhibits hvu-MIR171-specific features in the miR168/miR168* duplex region, but the backbone is derived from the MIR168a precursor fragment. We wanted to further investigate the role of structural features defining miR168 AGO1-loading capacity in another experimental system. For this we built different artificial miR168 precursor (AMIR) constructs based on the modified version of the barley hvu-MIR171 precursor fragment (36). Two variants of artificial AMIR constructs were created to express miR168 from the hvu-MIR171 backbone.
To retain the hvu-MIR171 stem-loop structure, we changed the orientation of the miR168 guide strand and modified the star strand in order to keep the distribution of the three mismatches within the duplex at the same positions as in the hvu-miR171 duplex (Figure 4A and Additional file 1: Supplementary Figure S2C). AMIR-1 and AMIR-2 differ only in the identity of the mismatched nucleotides in the duplex at the 4th and 9th positions. These miR168-producing AMIR168 constructs (AMIR-1, -2) were expressed transiently in N. benthamiana leaves by agro-infiltration in the presence of the AGO1-sensor. Both constructs induced a greater reduction of the GFP signal and the AGO1-sensor level compared to MIR168a (Figure 4B and Additional file 1: Supplementary Figure S1A and B). Moreover, as the small RNA northern blot results indicated, the increased AGO1 down-regulation in these cases was associated with a lower amount of miR168 over-expression compared to MIR168a (Figure 4B and Additional file 1: Supplementary Figure S1A and B). Gel-filtration experiments on crude extracts originating from AMIR-1 and AMIR-2 agro-infiltrated leaf patches confirmed the elevated HMW-RISC loading ability of miR168 compared to transient over-expression of MIR168a (Additional file 1: Supplementary Figure S6B). Complete loading of miR159 showed the existence of functional AGO1-RISCs in the HMW fractions of leaves expressing the AMIR constructs. These membranes were also used to detect miR168* strands using probes specific for each AMIR construct. Signals detected in the low molecular weight, protein-unbound fractions suggest that AMIR-originated miR168 species exist at least partly in duplex form (Additional file 1: Supplementary Figure S6B), confirming previous results (28). To further investigate how the altered stem-loop structures affect miR168 HMW-RISC loading efficiency, AMIR-1 and -2 stable transgenic lines were generated. Over-expressing lines producing similar amounts of miR168 were selected for further studies. AMIR-1 and AMIR-2 displayed more pronounced phenotypic alterations (including delayed flowering time and reduced rosette diameter) compared to the MIR168a line (Figure 4C, Additional file 1: Supplementary Figure S2A and B). The severity of the phenotypes of the analysed AMIR-1 and AMIR-2 lines correlated with the level of over-produced miR168 and, in the case of plants expressing extremely high levels of miR168, resembled that of the hypomorphic ago1-25 and ago1-27 mutants (Additional file 1: Supplementary Figure S2A). Although the amount of miR168 over-expression was moderately lower, the AGO1 protein content of the AMIR lines was severely reduced compared to the MIR168a lines (Figure 4C). Next, size separation of protein complexes by gel-filtration was performed on seedlings of selected AMIR-1 and AMIR-2 transgenic lines and a control MIR168a line. The AMIR-1 and AMIR-2 lines displayed enhanced miR168 accumulation in the AGO-RISC-containing fractions in spite of the reduced accumulation of AGO1 protein (Figure 4D, Additional file 1: Supplementary Figure S4C). Immuno-precipitation experiments on seedlings of the AMIR-1 and AMIR-2 over-expressing lines confirmed that miR168 incorporation into AGO1-RISC is increased in both cases (Figure 4E, Additional file 1: Supplementary Figure S5C). These data indicate that production of miR168 from alternative stem-loop structures can increase the AGO1-RISC loading efficiency, leading to its enhanced biological activity.
High-throughput sequencing analysis of transgenic plants identifies canonical miR168 species Over-production of miR168 from modified precursor fragments can be associated with misprocessing of the miR168/miR168* duplex structure, resulting in the differential accumulation of canonical and potentially non-canonical miR168 species. The non-canonical miR168 species may be loaded into different AGOs at different rates to alternatively program the RISC effectors, leading to distorted auto-regulation of RNAi. To get a comprehensive understanding of how the miR168 species are maturated in our stable transgenic lines, we performed high-throughput sequencing (HTS) analyses of the sRNA pools of the selected transgenic lines. Samples originating from bulked seedlings of the representative lines were used in two replicate experiments. Following the quality check and trimming of adapter and stop oligo sequences (Additional file 1: Supplementary Figure S7A), reads between 15 and 26 nt were initially analysed for their size distribution (Additional file 1: Supplementary Figure S8A). In the investigated samples, the 21 nt reads exhibited the highest abundance, but reads between 15 and 24 nt long were also represented in high proportion. In contrast to the total read size distribution, reads mapped to the miR168-producing precursors revealed a high dominance of 21 nt long small RNAs. 81-98% of the sequences were found to be between 20 and 22 nt, with the lowest proportion observed in the case of MIR168a over-expressing plants (Additional file 1: Supplementary Figure S8B). Investigation of the reads per million (RPM) data revealed, in accordance with the small RNA northern blot data, that all the investigated precursor fragments over-produced miR168 compared to the wild type plant (Figure 5A; Additional file 1: Supplementary Figure S5E). In addition to the dominant over-production of 5′ U miR168, we also observed the over-production of 5′ C species and the presence of minor quantities of 5′ G and 5′ A miR168 species in the transgenic plants (Figure 5A). The differential accumulation of various miR168 species in the transgenic lines raises the possibility that misprocessing of miR168 is an important factor influencing AGO1 loading efficiency. To investigate this possibility, we calculated the relative accumulation of miR168 species with different 5′ ends (Figure 5B). Detailed investigation of iso-miR168 species revealed that the majority of 5′ U reads represent the canonical miR168 species, except in AMIR-2, where a one-nt-truncated version of miR168 accumulated to the highest level (Additional file 1: Supplementary Figure S7B). The 5′ C iso-miRNA content of the MIR168-4bp and MIR168-3mm over-expressors was similar to that of wild type and MIR168a precursor fragment over-expressing transgenic plants, exhibiting a one-nt truncation at the 5′ end. In contrast, in AMIR-2 and especially AMIR-1 over-expressors, the dominant version of the 5′ C iso-miRNAs was one nt longer at the 5′ end (Additional file 1: Supplementary Figure S7B). Since AMIR-1 and AMIR-2 exhibit very similar features, it is unlikely that the observed differences in the accumulation of iso-miR168 species play an important role in enhanced AGO loading. 5′ G and 5′ A miR168 iso-miRNAs were strongly under-represented in the samples, at less than 4% and 0.6% of total miR168 reads, respectively. Due to the small quantity of these sRNAs, it is unlikely that they contribute considerably to the investigated phenomenon.
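The 5′-end classification behind Figure 5A and B amounts to tallying mapped reads by their first nucleotide and by identity with the canonical sequence; a toy sketch (reads invented, canonical sequence assumed as above):

```python
from collections import Counter

CANONICAL = "UCGCUUGGUGCAGGUCGGGAA"       # assumed canonical ath-miR168a
# Toy mapped reads: two canonical, one 5'-truncated and one 5'-extended
# iso-miR -- both of which happen to begin with C, as discussed above.
mapped = [CANONICAL, CANONICAL, CANONICAL[1:], "C" + CANONICAL]

by_first_nt = Counter(read[0] for read in mapped)
for nt in "UCGA":
    print(f"5' {nt}: {100.0 * by_first_nt.get(nt, 0) / len(mapped):.0f}%")
print(f"canonical: {100.0 * mapped.count(CANONICAL) / len(mapped):.0f}%")
```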
The MIR168a and AMIR-1 lines revealed a very similar distribution of 5′ U and 5′ C miR168 species, with even slightly less 5′ U miR168 detected in the AMIR-1 line compared to MIR168a (Figure 5B). This observation suggests that the generated 5′ C miR168 species do not interfere with or take part in differential AGO1 loading to a great extent, since at comparable expression levels AMIR-1-derived miR168 is more efficient in AGO1 loading than that of MIR168a (Figure 4E, Additional file 1: Supplementary Figure S5C). The AMIR-2 line, which has slightly more 5′ U but less 5′ C miR168 relative to the MIR168a line, behaved very similarly to AMIR-1 (Figure 4E, Additional file 1: Supplementary Figure S5C). This finding also indicates that the ratio of 5′ U and 5′ C iso-miRNAs may not have a profound effect on the AGO1 loading efficiency. In the case of MIR168-3mm, we detected an elevated 5′ U miR168 ratio relative to 5′ C miR168, very similar to that detected in the AMIR-2 line (Figure 5B). However, MIR168-3mm was associated with more efficient AGO1 loading at a less than 20% miR168 over-expression level compared to the AMIR-1, AMIR-2 and MIR168a lines (Figure 5A). MIR168-4bp-mediated over-production of miR168, exhibiting very inefficient AGO loading, was associated with a higher 5′ C miR168 ratio (Figure 5B). [Figure 5E and F legend: miR168 and AGO1 in wild type A. thaliana Columbia-0 (col) and dcl1-9 plants. U6, BiP and Ponceau staining were used as loading controls of the RNA and protein blots, respectively. (F) miR168 accumulation in AGO1 immuno-precipitated samples of col and dcl1-9 plants. Cytoplasmic contamination of the IP samples at the RNA and protein level was checked with U6 and BiP, respectively. Fold change (FC) was calculated as the ratio of the miR168 volume intensity and the AGO1 signal intensity of the IP samples, and was reported relative to wild type Columbia (col).] However, due to the high over-expression level in MIR168-4bp, the absolute content of 5′ U miR168 was comparable to MIR168a and AMIR-2, and higher than in MIR168-3mm and AMIR-1 (Figure 5A). This finding further supports the assumption that the altered 5′ C miR168 content is not the major component determining the loading efficiency changes in the investigated transgenic lines. Furthermore, by comparative analysis of the miR168 species in previously published AGO1 immuno-precipitation data (22) (Additional file 1: Supplementary Figure S8C) with the 5′ C miR168 ratio in the input total RNA of our Columbia sample, we found that in wild type Arabidopsis the 5′ C miR168 ratios in input total RNA and AGO1-precipitated samples are very similar (7-8%). This observation suggests that most of the produced 5′ C miR168 species are biologically active and follow the same AGO loading rules as 5′ U miR168. To further investigate the potential alternative function of 5′ C miR168, we tested the accumulation of AGO5, an AGO protein preferentially associated with 5′ C miRNAs. We found, in accordance with published data (48), that AGO5 is not expressed at a detectable level in the investigated tissue type under our conditions (seedlings) (Additional file 1: Supplementary Figure S8D). These data indicate that the activity of this AGO protein does not interfere with our results. Moreover, we found that AGO1 mRNA-associated siRNAs do not exhibit higher-level accumulation in our HTS data or northern blot analysis in plants expressing the manipulated precursor constructs, indicating that the activity of this pathway has not been altered in the transgenic lines used (Additional file 1: Supplementary Figure S5E and F).
Altogether, these data suggest that mainly the structural features of the miRNA duplex govern the loading efficiency changes of miR168 in our study. However, we cannot exclude the possibility that, to a lesser extent, the altered production of miR168 species can also contribute to the phenomenon in some cases. Competing miRNAs can affect the loading of miR168 into AGO1-RISC Our findings suggest that the loading efficiency of miR168 into AGO1-RISC is tuned down by structural elements located within the miRNA duplex. Importantly, the low loading properties of miR168 seem to be conserved (Additional file 1: Supplementary Figure S6A and B; (42)), suggesting its biological relevance. We hypothesized that the high excess of unbound miR168 may act as a balancer, continuously adjusting the required physiological level of AGO1 in response to external or endogenous stimuli. Modification of the secondary structure of the duplex can shift this adjusted loading balance, leading to miR168 over- or under-loading into AGO1-RISC. The general existence of a high excess of the miR168 pool unbound to AGO1 suggests that the calibrated loading efficiency of miR168 into AGO1-RISC represents a flexible, adaptive regulatory system. To test this hypothesis, we first investigated whether modulating the small RNA content of the cellular environment can competitively alter the loading of miR168 into AGO1-RISC. Previously we demonstrated that miR171 is well-loaded, while miR156 exhibits very inefficient AGO1 loading ability, since it accumulates predominantly in protein-unbound form in A. thaliana (28). In transient assays, MIR156 and MIR171 precursor fragments were massively over-expressed in the presence of the AGO1-sensor to find out whether the endogenous miR168 can be competitively sequestered from AGO1 loading. Robust over-expression of miR171 resulted in a higher GFP signal and a slightly but consistently increased AGO1-sensor level compared to the miR156 co-infiltrated control (Figure 5C, Additional file 1: Supplementary Figure S4D). These findings suggest that miR171 efficiently outcompetes miR168 for AGO1 loading, which subsequently results in de-repression of the AGO1-sensor. Size-separation gel-filtration assays confirmed that the over-expression of well-loaded miR171 reduced the loading efficiency of endogenous miR168 into HMW AGO1-RISC compared to the low-loaded miR156 infiltration control (Figure 5D, Additional file 1: Supplementary Figure S4D). These observations indicate that AGO1 loading of miR168 can respond to alterations in the small RNA pool of the given cellular environment in a competitive manner. The competitive loading model of miR168 predicts that a decrease in the amount of AGO1 protein should be achieved when the competitiveness of miR168 increases relative to the cellular miRNA pool. For this we took advantage of the dcl1-9 mutant, in which the production of most miRNAs is severely impaired (49). Intriguingly, the miR168 level was reported to be relatively unaffected in this mutant plant (30). To avoid the potential deteriorating effects of the presence of unloaded AGO1 proteins in these mutants, the AGO1 protein level is expected to be adjusted to the available miRNA pool by the action of the miR168/AGO1 regulatory loop. We speculated that in the dcl1-9 background, the competitiveness of miR168 would be elevated due to the stoichiometric imbalance, i.e. the relatively high level of miR168 in comparison to the competitor miRNA pool. We confirmed that, in contrast to miR159, the miR168 content was only slightly affected in the dcl1-9 mutant, while the AGO1 protein level was remarkably down-regulated (Figure 5E).
Since the low level of AGO1 made it technically difficult to use gel-filtration assays, immuno-precipitation experiments were carried out. Supporting our hypothesis, the relative miR168 content in AGO1 immuno-precipitates of dcl1-9 mutants increased (Figure 5F, Additional file 1: Supplementary Figure S5D). These data indicate that in the dcl1-9 mutant, miR168 loading into AGO1 is enhanced. This new miR168 loading kinetics in turn generates a lower AGO1 protein level, adjusted to the suppressed miRNA content of the cells. DISCUSSION Plant miRNAs play fundamental roles in plant growth and development, as well as in adaptation to biotic and abiotic stresses and other physiological processes, via controlling the expression of target transcription factors and stress-response-linked proteins (6). These multi-layered regulatory roles render the miRNA pathway one of the most pliable and versatile controlling mechanisms. According to these attributes, the miRNA pathway is fine-tuned in an environment-responsive manner involving transcriptional control of the miRNA-encoding genes, tissue-specific expression of biogenesis co-factors, post-translational modifications, and control of miRNA stability and processing (50). The secondary structure of the pre-miRNA, encompassing the miRNA/miRNA* duplex region, plays a pivotal role in determining the efficiency of miRNA biogenesis (14,15,51,52). Moreover, structural motifs of the precursors can also contribute to the specification of AGO proteins, determining the sorting of miRNAs into the proper executor complexes (20,26). The central executor component of the miRNA pathway, AGO1, is feedback-regulated by the conserved, AGO1-competent miR168 family (48). The importance of the miR168-driven auto-regulatory loop was demonstrated by the over-expression of a miR168-resistant version of AGO1 mRNA, inducing various developmental defects and eventually the death of the plants (33). These data revealed that unbalanced over-accumulation of AGO1 protein poses a severe danger to the proper functioning of the plant. Previously, the utilization of the size-separation gel-filtration method on plant crude extracts revealed an extraordinary property of miR168. It was shown that only a small subset of mature miR168 was present in HMW-RISC, while the majority of mature miR168 accumulated in the low molecular weight fractions in protein-unbound form (42). Notably, the accumulation of unbound miR168 seems to be conserved; however, the significance of the free miR168 pool remains elusive. Previously, we demonstrated that the transient or transgenic over-expression of various miRNA precursors can lead to complete (miR159), efficient (miR171) or limited (miR168) HMW-RISC loading in the same cellular environment (28). These miRNAs are matured from precursors having different secondary structures, raising the possibility that structural motifs of miRNA precursors can affect the AGO1 loading rate. To study the biological function of this low-calibrated loading efficiency of miR168 into AGO1-RISC, the wild type ath-MIR168a and ath-MIR168b precursor fragments were over-expressed in both transient and stable transgenic systems. Drastic enhancement of miR168/AGO1-RISC loading and AGO1 down-regulation could not be achieved in the transgenic lines. The majority of the over-expressed miR168 was sorted into the free pool, and only a smaller subset was loaded into the RISC. This finding suggested that the restricted loading equilibrium of miR168 into AGO1 has an important biological role.
We hypothesized that structural motifs of the MIR168a precursor can be responsible for the restricted loading of miR168 into AGO1-RISC. To test this hypothesis, a series of alternative, miR168-producing precursor fragments was created by modifying the wild type MIR168a precursor miRNA duplex region or by expressing miR168 from heterologous constructs containing the hvu-MIR171 precursor backbone. The secondary structures of the miRNA/miRNA* duplex regions of these constructs were manipulated by introducing modifications into the star strand only, leaving the guide strand intact. In the transformation experiments, several transgenic lines were generated with various expression levels of miR168 (Additional file 1: Supplementary Figure S2A, B and S3). The distribution of the various construct lines amongst the over-expression rate categories was different, raising the possibility that structural modifications in the duplex may also affect the efficiency of biogenesis (Additional file 1: Supplementary Figure S3A). Since our work focuses on the AGO loading efficiency of miR168 over-produced from modified precursor fragments, transgenic lines displaying similar expression properties were selected to allow precise comparison in the experiments. Accordingly, high, intermediate and low MIR168a precursor fragment over-expressing lines were used for the analyses of MIR168-4bp, AMIR168-1-2 and MIR168-3mm, respectively. The majority of these modifications resulted in enhanced down-regulation of the AGO1-sensor in transient assays or of the AGO1 level in transgenic plants. Gel-filtration and immuno-precipitation experiments indicated that the increased AGO1 down-regulation is a consequence of enhanced AGO1-RISC loading of miR168. Both AMIR-1 and AMIR-2 have a similar effect on AGO1 down-regulation. This observation suggests that the identity of the mismatched nucleotides within a duplex does not have a strong influence on AGO1 loading; rather, the structure of the duplex is important. Intriguingly, in the case of MIR168-4bp, the introduced modification resulted in even less efficient miR168/AGO1 loading, showing that the loading balance can be altered in both directions. The over-expressed miR168 species were produced mainly at the correct size, and no elevated secondary siRNA production was detected from AGO1 mRNA in the experiments. However, we observed an altered ratio of 5′ U- and 5′ C-ended miR168 during the over-expression of the various modified miR168 precursor fragments. Alternative maturation of miR168 may affect loading and have downstream effects on the AGO1 feedback loop. The similar AGO loading properties of 5′ C and 5′ U miR168 species in the wild type plant (col in Figure 5B; Additional file 1: Supplementary Figure S8C) and the association of various AGO1 loading efficiencies with similar 5′ U/C miR168 ratios render this possibility less likely. Still, we cannot absolutely exclude a role for the altered production of miR168 isoforms in differential AGO sorting or loading, as for example in floral tissue, where AGO5 is dominantly expressed (33). Moreover, we also demonstrated that loading of miR168 into AGO1-RISC is an environmentally responsive regulatory process. The drastic suppression of the overall miRNA level in the dcl1-9 biogenesis mutant triggered adjustment of the AGO1 content to a lower level through the enhanced loading of miR168 into AGO1. Furthermore, miR168 can be outcompeted from RISC by massive over-expression of an efficiently AGO1-competent miRNA, suggesting that miRNAs compete for RISC loading.
Combining previous findings with our new results, we propose a refined model for miR168-mediated regulation of AGO1 homeostasis (Figure 6). According to this 'competitive' model, the efficiently processed MIR168a precursor produces a high excess of miR168, but only a small subset of this is loaded into AGO1-RISC. The unincorporated miR168 species accumulate as miR168/miR168* duplexes in the cytoplasm. The balance of AGO1-RISC-loaded and unbound miR168 is determined by structural features of the precursor encompassing the miR168/miR168* duplex region. Other miRNA precursors, such as MIR159a or MIR171a, possess structural features enabling their more efficient AGO1 loading (28). The amount of AGO1-RISC-associated miR168 is determined by competition of miR168 with the AGO1-competent small RNA pool of the cell for the limiting free AGO1 proteins. The low-calibrated AGO1-RISC loading of miR168 could fine-tune the proper physiological level of AGO1 relative to the given small RNA population of the cell (Figure 6A). The fundamental requirement of this regulatory mechanism is the continuous presence of an excess of biologically active miR168 competent to be incorporated into AGO1-RISC. This requirement seems to be guaranteed by uncoupling the biogenesis of miR168 from that of the canonical miRNAs, since miR168 production is insensitive to many mutations affecting the miRNA pathway, such as dcl1-9 (30) (Figure 5E).

Figure 6. (A) Unloaded AGO1 proteins (gray rectangles) are continuously translated from the mRNA. MiRNA precursors are also expressed in a tissue-specific manner (schematically represented by black hairpin structures), including the MIR168a precursor (red), which are subjected to subsequent cleavages to produce miRNA duplexes (short paired lines) determining the composition of the miRNA pool. The mature miRNA strand of the miRNA duplexes can associate with AGO1 proteins with different efficiencies (lines with arrowheads), generating the miRNA-loaded AGO1 pool (grey rectangles (AGO1) with lines (miRNAs)). MiRNA duplexes not able to load into AGO1 can accumulate in protein-unbound forms (paired lines on the right side of the precursor structures). The AGO1 loading of miR168 is finely calibrated by structural features of the precursor RNA, allowing only a subset of the miR168 pool to be loaded into AGO1 (grey rectangles with red line), while the majority of miR168 accumulates in duplex form, unbound to protein (red paired lines on the right side of the precursor structures). Due to this highly sensitive autoregulatory loop, the defined miR168/AGO1 complexes negatively regulate AGO1 mRNA (red dashed lines), determining the proper physiological AGO1 threshold (schematically represented by the column of miRNA-loaded AGO1 proteins (gray rectangles with lines)). (B) Regulatory action of the miR168/AGO1 autoregulatory loop in RNAi-defective mutants. In the dcl1-9 mutant, the production of endogenous miRNAs is strongly inhibited (represented by the absence of black paired lines) except for the biogenesis of miR168. Because of the lack of AGO1-competent miRNAs, there is a danger of over-accumulation of unloaded AGO1 proteins, leading to interference with proper cell functions. However, in the absence of efficiently AGO1-competent miRNA species, the extensively produced miR168 is able to load into AGO1 proteins with higher efficiency, imposing strong control on AGO1 mRNA. Due to this regulatory mechanism, a new, controlled AGO1 equilibrium is formed, in balance with the reduced miRNA content of the cell.
The insensitivity of miR168 production to disturbances in the miRNA pathway can ensure continuous miR168-driven control of the AGO1 level, eliminating the risk of over-loading AGO1 with unwanted sRNA species. Indeed, we found that in the dcl1-9 mutant, where the level of the endogenous miRNA pool is severely lowered, less AGO1 protein is present due to enhanced incorporation of miR168 into AGO1 (Figure 5F). This finding suggests that the suppressed miRNA pool results in less loaded AGO1. The availability of free AGO1 proteins enables more efficient loading of the miR168 excess. The enhanced loading of miR168 in turn reinforces the feedback regulatory loop, bringing the AGO1 protein level down to a new equilibrium (Figure 6B). The observation that MIR168 and AGO1 expression is co-regulated (30) is in line with this model, since an increase in AGO1 protein level requires an increase in miR168 level to maintain the proper balance between AGO1-loaded and unbound miR168 species.

Previously, we showed that various virus infections of different host plants induce drastic miR168 induction in the infected leaves, which is usually associated with strong down-regulation of the AGO1 level (42,53). The efficiency of the virus-mediated AGO1 control can be explained by the observation that virus infection also resulted in drastic AGO1 mRNA induction. The AGO1 mRNA excess can continuously produce AGO1 proteins, enabling the loading of miR168 to a higher extent and the establishment of a strong feedback loop.

The striking variability in RNA stem-loop shape and size of plant miRNA precursors (13,54), compared to their stereotypic animal counterparts (55), suggests that structural features can play a dominant role in the biogenesis and action of plant miRNAs. The calibrated loading action of miR168 governed by RNA structural motifs is potentially applicable to other miRNAs. This hypothesis is supported by the observation that many miRNAs exhibit AGO-loaded as well as AGO-unbound forms (28). It is not clear whether the miRNA duplexes accumulated in the cytoplasm in protein-unbound form represent a biologically active pool of AGO1-loading-competent molecules or whether they are superfluous by-products of miRNA biogenesis. Recently it has been shown that miRNA loading of AGO1 predominantly takes place in the nucleus (19), suggesting that the loading rate of miR168 is calibrated in the nucleus. However, it cannot be excluded that the cytoplasmic miRNA duplexes represent a biologically active reservoir.

In summary, we used an artificial system built for comparative analyses of the AGO1 loading efficiency of miR168 derived from differently altered precursor fragments. This comparative analysis, carried out in transient and stable transgenic lines, helped us to provide a biologically relevant explanation for the existence of the recently discovered AGO-unbound miR168 species, and to refine the model of RNA silencing autoregulation. According to this, (i) the excessive processing of ready-to-load miR168 species, (ii) the structural properties embedded in the miR168/miR168* duplex structure, and (iii) the amount of competing cellular miRNAs define the AGO1-RISC loading efficiency of miR168. This competition-based mechanism precisely and dynamically adjusts the amount of AGO1 protein according to cellular needs.
The presence of an unbound cytoplasmic pool was demonstrated for many miRNAs (28), indicating that the regulatory action of competition-based loading efficiency may be valid for other miRNAs as well. Since we investigated solely the role of precursor fragment stem-loop structures, further experiments will be necessary to investigate this phenomenon in the complex context of miRNA biogenesis. Moreover, it will also be important to finely map the requirements of the precursor stem-loop structure for governing AGO1 loading by extensive structure-function analyses. In the future, it will also be important to identify and characterize protein cofactors playing a role in the RNA structure-based communication between miRNA biogenesis and action.

DATA AVAILABILITY

HTS data were deposited in the SRA database under the BioProject accession number PRJNA640279.
An Adolescent with Transient Hyperthyroxinemia after Blunt Trauma to Head and Neck

Background
Thyroid storm is a well-known complication of surgical procedures in the lower neck, but is rare after a blunt neck trauma. The cases described previously have mainly focussed on adults with pre-existent thyroid disease. In this case report, we describe the disease course of a previously healthy adolescent who had asymptomatic hyperthyroxinemia after a blunt trauma of the jaw and neck.

Case Presentation
A 17-year-old girl presented at our emergency department after she fell on her head while roller blading. On physical examination, among other injuries, she had a swelling in the lower neck, which appeared to involve the thyroid gland. Subsequent laboratory analysis was indicative of primary hyperthyroxinemia, with a free T4 of 59 pmol/L (reference range: 12–22) and a TSH of 0.46 mU/L (reference range: 0.5–4.3), but the patient had no symptoms fitting with this. Four weeks after the initial presentation, the patient reported only complaints regarding tenderness in the jaw and neck region. She was no longer hyperthyroid on biochemical evaluation (with a free T4 level of 15.6 pmol/L and a TSH level of 0.33 mU/L), and antibodies against thyroid peroxidase or TSH receptor were not present.

Conclusions
This case might indicate that hyperthyroxinemia following a neck trauma may go unnoticed if hyperthyroid symptoms are mild or absent and thyroid function tests are not performed.

Introduction
Thyroid storm is a well-known complication of surgical manipulation of the thyroid gland, for example, during parathyroidectomy or laryngectomy [1][2][3][4][5]. However, thyroid storm induced by thyroid injury after blunt trauma to the neck is a rare finding [6,7]. This clinical entity has been described following attempted suicide by hanging [8,9], a severe neck trauma [6,10], and thyroid ultrasonography [11,12]. A great majority of the patients described so far were adults, of whom the majority experienced severe hyperthyroid symptomatology and were known to have pre-existent thyroid disease [6,7,13]. Thyroid storm is a potentially life-threatening complication of thyroid injury that starts abruptly and is characterized by four main symptom clusters, including fever, supraventricular arrhythmias or tachycardia, and gastrointestinal and central nervous system symptomatology. Early recognition and prompt treatment seem necessary [6]. In this case report, we describe the disease course of a previously healthy adolescent who had asymptomatic hyperthyroxinemia after a blunt trauma to the head and neck.

Case Presentation
A previously healthy, 17-year-old girl presented at the emergency department after she fell on her head while roller blading. She complained about pain in her jaw and neck. On physical examination, the patient was an alert girl with stable vital parameters, including a heart rate of 70 bpm, a blood pressure of 125/85 mmHg, a peripheral oxygen saturation of 99% on room air, and a temperature of 37.6°C. Injuries noted were a fractured mandible and a painless, mobile swelling in the lower neck at the level of the thyroid region. There was no audible stridor or other signs of upper airway obstruction. Computed tomography (CT) scanning of the head and neck was performed, showing an enlarged and inhomogeneous thyroid gland and a fractured mandible (see Figure 1).
Subsequently, contrast-enhanced CT scanning and ultrasonography of the neck showed enlargement and asymmetry of the thyroid gland, with more hypodense areas in the right lobe, suspicious of contusion or laceration, though the possibility of autoimmune thyroiditis could not be excluded (see Figure 2). After the imaging results, thyroid function tests were performed, demonstrating a free T4 level of 59.0 pmol/L (reference range: 12-22) and a TSH level of 0.46 mU/L (reference range: 0.5-4.3). The patient did not experience symptoms of hyperthyroidism, in particular no jitteriness, no palpitations, no excessive sweating, and no previous weight loss. She had no history of a neck swelling. The family history was unremarkable, with no family members reporting a history of thyroid or autoimmune disease. The day after admission, the laboratory tests were repeated, showing a decline in the free T4 level (see Table 1). The patient still did not experience symptoms of hyperthyroidism, and 24 hours later, she was discharged. At an outpatient visit 4 weeks after presentation, the patient reported being nearly recovered, except for tenderness in her jaw and neck region. On physical examination, her vital signs were stable and her thyroid gland was no longer enlarged. Thyroid function tests showed a free T4 level of 15.6 pmol/L, a TSH level that was still slightly suppressed at 0.33 mU/L, and absence of antibodies against thyroid peroxidase or TSH receptor (see Table 1). The transient hyperthyroxinemia could therefore most likely be attributed to thyroid injury.

Discussion
We have presented a patient with transient, asymptomatic hyperthyroxinemia after a blunt trauma of the head and neck. The pathomechanism of trauma-induced hyperthyroxinemia is thought to involve rupture of acini and liberation of thyroid hormones into the bloodstream, which may result in a potentially life-threatening condition called thyroid storm [6]. Thyroid storm is a rare finding among trauma patients that has previously been described only in those with a severe trauma to the neck, such as after hanging and strangulation [8,9]. There are only a few reports describing thyroid injury and thyroid storm after a blunt neck trauma [6,7,10]. Most of the cases described before were adult females, with a median age of 44 (range: 8-89) years, and approximately half of them were known to have preexisting thyroid disease [7]. Newer theories suggest that thyroid storm represents a form of allostatic failure in situations where severe illness would normally result in euthyroid sick syndrome, but where the patient is unable to downregulate T3 concentrations due to thyrotoxicosis [14]. However, in our patient, T3 concentrations were not measured. Another complication of thyroid injury is a rapidly expanding hematoma that may cause airway compression requiring hemithyroidectomy. It has been described that nearly half of the patients with thyroid injury required hemithyroidectomy due to a rapidly expanding hematoma of the thyroid [6,15]. The onset of symptoms of an expanding hematoma, such as a painful pre- and paratracheal swelling and dyspnoea, may be delayed in approximately 50% of the patients [7,13], necessitating in-hospital monitoring for more than 24 hours. Thyroid function tests of the patient were indicative of primary hyperthyroidism.
During the follow-up, her free T4 level recovered, while her TSH level remained slightly suppressed, which may be attributed to a delayed pituitary response, as evidenced by Jostel's TSH index (7.16, 4.40, and 0.99 at days 1, 2, and 28, respectively; see the sketch following this report). Given the lack of antibodies directed against the thyroid gland and the absence of evidence for a pre-existing thyroid disease, it is plausible that the hyperthyroxinemia in the patient can be explained by thyroid trauma. This case is special and interesting in that it concerned a 17-year-old adolescent who did not experience symptoms suggestive of hyperthyroxinemia. In addition, the patient had no personal or family history of thyroid disease. Despite the absence of hyperthyroid symptoms, thyroid function tests showed hyperthyroxinemia. The lack of hyperthyroid symptoms in our patient could be explained by the fact that she was young and previously healthy. For the publication of this case report, we obtained written informed consent from our subject. With this case report, we add that hyperthyroxinemia following a neck trauma may go unnoticed if hyperthyroid symptoms are mild or absent and thyroid function tests are not performed. Therefore, hyperthyroxinemia may occur more frequently after a blunt trauma of the neck than previously expected.

Data Availability
The data used to support the findings of this study are available upon request to either m.romijn1@amsterdamumc.nl or m.finken@amsterdamumc.nl.

Ethical Approval
For this case report, ethical approval was not required.

Consent
Written informed consent was obtained from the patient for publication of the clinical details and images.

Conflicts of Interest
The authors declare no conflicts of interest.
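Jostel's TSH index referenced above is conventionally computed as TSHI = ln(TSH [mU/L]) + 0.1345 × fT4 [pmol/L]. The following minimal Python sketch (function name illustrative) reproduces the day 1 and day 28 values reported in this case; the day 2 value cannot be recomputed here because the underlying laboratory values are given only in Table 1.

```python
import math

def tsh_index(tsh_mu_per_l: float, free_t4_pmol_per_l: float) -> float:
    """Jostel's TSH index: ln(TSH) + 0.1345 * fT4 (TSH in mU/L, fT4 in pmol/L)."""
    return math.log(tsh_mu_per_l) + 0.1345 * free_t4_pmol_per_l

# Values reported in the case (day 1 and the 4-week follow-up):
print(round(tsh_index(0.46, 59.0), 2))  # 7.16 -- day 1
print(round(tsh_index(0.33, 15.6), 2))  # 0.99 -- day 28
```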
Participatory Patterns in an International Air Quality Monitoring Initiative

The issue of sustainability is at the top of the political and societal agenda, being considered of extreme importance and urgency. Human individual action impacts the environment both locally (e.g., local air/water quality, noise disturbance) and globally (e.g., climate change, resource use). Urban environments represent a crucial example, with an increasing realization that the most effective way of producing a change is involving the citizens themselves in monitoring campaigns (a citizen science bottom-up approach). This is made possible by developing novel technologies and IT infrastructures enabling large citizen participation. Here, in the wider framework of one of the first such projects, we show results from an international competition where citizens were involved in mobile air pollution monitoring using low cost sensing devices, combined with a web-based game to monitor perceived levels of pollution. Measures of the shift in perceptions over the course of the campaign are provided, together with insights into the participatory patterns emerging from this study. Interesting effects related to inertia and to direct involvement in measurement activities, rather than indirect information exposure, are also highlighted, indicating that direct involvement can enhance learning and environmental awareness. In the future, this could result in better adoption of policies towards decreasing pollution.

Introduction
Air pollution has an important effect on our health, with an increasing number of studies showing higher risk of respiratory and cardiovascular diseases for people exposed to higher pollution levels [1,2]. In this context, keeping air pollution at bay has been a major priority for policy makers in the past decades, and much effort has been put into monitoring and controlling air pollution. Large scale monitoring networks routinely monitor target pollutants and allow temporal trends in air pollution to be tracked. Significant effort has also been made to make this information accessible to the wider public. However, several papers indicate that official monitoring networks do not have sufficient spatial coverage to provide detailed information on the personal exposure of people, as for some pollutants this may vary substantially among microenvironments [3,4]; i.e., in urban, traffic-prone areas spatial variability is very high [5][6][7]. Several pollution sources have been addressed with success. However, persistent problems remain in urban areas, where traffic and domestic heating are important sources [8]. Next to technical solutions (e.g., electric mobility), people's personal perceptions, behavior and choices play a major role in addressing these issues and facilitating change in a bottom-up manner. This includes wide adoption of the technologies developed for monitoring, which is necessary to obtain relevant results.

Participatory sensing, involving citizens in environmental monitoring, can have multiple potential benefits. Firstly, it can increase coverage of monitored areas, both in time and space, due to the ability to distribute the monitoring activities to multiple individuals [9]. Secondly, the act of monitoring pollution by citizens could facilitate learning and increase their awareness of environmental issues [10].
A recent report on environmental citizen science concludes that few studies on public participation in science and environmental education have rigorously assessed changes in attitudes towards science and the environment, or in environmental behaviors. There appear to be relatively few examples of participatory citizen science having a tangible impact on decision making, although the potential is often noted [11]. One element that fosters large scale participation in participatory monitoring campaigns is the availability of low-cost wearable sensing devices. These will give intrinsically lower quality data, since low cost implies decreased sensor accuracy, so at the moment the tradeoff is between the additional social benefits stemming from large participation and data quality [12]. As technology advances, low cost sensors will become increasingly accurate, so that the tradeoff will disappear and the additional social effects will be obtained with no cost to data quality.

Several efforts have been made to develop low-cost wearable sensing devices, integrating low-cost gas sensors, GPS and mobile phones. The CommonSense project [13] built handheld devices containing CO, NOx and ozone sensors. Another example, which was quite successful in raising funds through crowdfunding, is the Air Quality Egg [14], designed for static measurements and containing NO2 and CO sensors. However, many of these projects focus mainly on the electronics and systems integration, power issues, wireless data transfer, data storage and visualization, and pay little attention to the limitations and quality issues of the gas sensors adopted. Very few tests or validation results have been published in publicly available reports or in the peer reviewed literature. Examples are Hasenfratz et al. [9] and Mead et al. [15]. Hasenfratz et al. [9] introduce GasMobile, a platform measuring ozone concentration, which is connected to a smartphone by USB. They take into account important issues such as sensor quality, calibration, and the effect of mobility on sensor readings. Mead et al. [15] developed sensor boxes with electrochemical sensors, which entailed changes in the sensor technology itself and in the electronics, as well as complex data analysis. The CitiSense [16] project is currently building an infrastructure for citizen engagement in environmental monitoring.

Another issue is the collection of a representative data set using mobile air quality sensing technologies. To be representative and useful for personal or community decision making, mobile measurements have to be repeated regularly, data have to be aggregated over relevant time frames and locations, and carefully interpreted using data handling and expert knowledge to filter out inaccuracies [6,17]. The supplementary material (S1 File) discusses the challenges involved in using low-cost sensors for air quality monitoring and describes the approach used by our project to address quality issues.

An important issue concerns the technological versus social aspects of such projects. Most of the existing projects concentrate mainly on the sensor side of participatory air quality sensing, i.e., how to build the sensing devices and map pollution. However, participant engagement, participatory patterns, learning and awareness are equally important aspects, and feed back into the quality of the data collection, as we have also shown in a parallel project concerned with noise pollution [18].
By collecting subjective data as well, monitoring campaigns can enable not only air quality data collection, but also analysis of volunteer behavior and strategies and of a possible increase in awareness.

The test case
In this paper, we discuss the behavior and perceptions of citizens involved in monitoring during a large scale international test case: the AirProbe International Challenge (APIC) [19]. This was organized simultaneously in four cities: Antwerp (Belgium), Kassel (Germany), London (UK) and Turin (Italy). In this test case, a web-based game, air quality sensing devices and a competition-based incentive scheme were combined to collect both objective air quality data and data on perceived air quality, and to analyze participation patterns and (changes in) perception and behavior of the participants. The test case was organized as a competition between the cities, to enhance participation. For the first time to our knowledge, an end-to-end scientific platform for participatory air pollution sensing, developed as part of the EveryAware project [20], was used. This platform is described briefly in the Methods section, with more details included in S1 File. The quality and representativeness of the collected air quality data are also discussed in S1 File.

During this test case, volunteer participants were asked to get involved in two activity types. The first one consisted in using a sensing device (Sensor Box) to measure air pollution (black carbon (BC) concentrations) in their daily life, generating what we call objective data. The second activity was playing a web game (AirProbe), where volunteers were asked to estimate the pollution level in their cities by placing flags (so-called AirPins) on a map and tagging them with estimated black carbon (BC) concentrations on a scale from 0 to 10 μg/m³, resulting in subjective data on air pollution (perception). Volunteers involved in the measuring activities were encouraged to play the game and to bring in other players as well (create a team). The two data types allow for an analysis of user behavior and perception throughout the challenge.

To enable this, the test case was composed of three phases. In phase 1, only the online game was available, so we could obtain an initial map of the perceived air pollution. In phase 2, the measurements started in a predefined area in each of the cities (corresponding also to the web game area), with the web game running in parallel. Phase 3 introduced a change in the game, so that players could acquire limited information about the real pollution in their cities, in the form of sensor box measurements averaged over small areas (so-called AirSquares). At the same time, measurements were continued, this time without a restriction on the area to be mapped. Incentives in the form of prizes were given at the end of each phase to the best teams/players (please see Methods and S1 File for more details).

The data collected during the test case are used here to analyze participation patterns, in terms of activity and coverage, and any changes in perception. Our results indicate that better coverage is obtained when volunteers are assigned a specific mapping area, compared to when they are asked to select the time and location of their measurements. Additionally, when allowed to measure freely, they seem to be attracted to places with higher pollution levels.
Furthermore, while at the beginning of the challenge the general perception was that pollution was higher than in reality, perceptions changed over time, indicating increased knowledge of real pollution levels. The amount of data collected in the test case, together with the first insights we obtained from it, suggests that bottom-up participatory sensing approaches are effective in attracting participants with high levels of activity and also in enhancing citizen awareness of real pollution levels.

Results
Volunteer involvement and activity levels are among the most important elements in participatory monitoring campaigns, since they can determine the success of the campaign. Large activity is required for acquiring meaningful data, both objective, for analysis of the environment itself, and subjective, for analysis of social behavior. The test case presented here successfully involved 39 teams of volunteers in 4 European locations, gathering 6,615,409 valid geolocalized data points during the challenge (the measuring device collects one data point per second). An additional 3,326,956 data points were uploaded to our servers in the same period, but were missing complete GPS information and were not included in the analysis. Some of these measurements contained labels (tags), with 742 geo-localized tags overall, coming mostly from one location of the challenge (London). Additional information on the perception of pollution has been extracted from the online game. The platform had 288 users in total over six weeks, 97 of whom played the game at least ten times. Their activity resulted in 70,758 AirPins at the end of the test case, which we will use to assess perceived pollution levels.

Fig 1 shows general participation patterns, both for the measuring activity and for the web game. Further details about participation, for each of the four locations of the test case, can be found in S1 File. The daily numbers of measurements show larger activity during the week compared to weekends, with almost twice the activity on the peak days (Wednesday/Friday). This indicates that the volunteers were strongly interested in monitoring their exposure in relation to the routine activities of the week, which probably include commuting and access to highly polluted environments. It might also mean that it was easier for participants to monitor as part of their weekly routine, whereas at the weekend monitoring would require more effort, as it would not comprise part of their commute, for example, or might have impacted other leisure activities that they wanted to carry out. Daily patterns (hourly measurements) indicate a peak in activity in the afternoon, around 5 pm, again probably due to afternoon commuting. However, measurements were performed at all hours of the day, indicating the presence of very dedicated volunteers. In fact, the total number of measurements per team indicates several teams with very high activity levels, with the most active team reaching almost 1 million points (equivalent to over 270 hours of measurements). However, team activity was very heterogeneous, with some teams collecting much less data than the others. This heterogeneity was found within the same city (e.g., the highly active teams are spread over three of the four cities), indicating that differences in activity were in general based on personal predisposition and not location. However, some of the heterogeneity between the cities can also be explained by differences in instructions, emphasis and incentives.
The web game activity follows similarly heterogeneous patterns. Fig 1 also shows the distribution of the number of AirPins used by game players to declare perceived pollution levels. Some of them got very involved in this activity, with over 2000 AirPins used, while many players had very low activity (they started the game but did not continue). The distributions appear to follow a power law, also typical of other social activity patterns [21,22]. It is important to mention that managing hundreds of AirPins required a large amount of time to be spent in the game, indicating the high levels of involvement that the players reached.

Besides activity in terms of number of measurements, another important aspect is coverage, both in space and time. As we have seen before, measurements were performed at all hours of the day and on all days of the week. However, usually not all areas are covered equally. Here we show general information about the overall coverage achieved (with more details for each location included in S1 File). In order to compute the coverage, the area of each of the four participating cities was divided into 10 by 10 meter squares (tiles). Phase 2 mapping areas were selected to be around 2 km², so the tiling resulted in about 20,000 such squares per location (80,000 in total). However, when computing coverage we selected larger areas that cover most of the surface of the 4 cities and encompass most measurements. Thus, the resulting number of squares considered was 14,150,070. A square was considered covered if at least one measurement was performed within it.

Fig 2 shows how the number of squares covered grows as users perform more measurements, both overall and for each phase individually. The volunteers had different tasks in the two measuring phases (phases 2 and 3 of the test case). In phase 2, they had to concentrate on covering as much as possible of a specific area, while in phase 3 they could explore any area they wanted. The total number of squares covered at the end of the challenge was over 243,000, i.e. over 24 km², which is three times more than the mapping areas. Compared to the total surface of the cities considered, coverage is only 1.7%, but this depends a lot on the fact that some of the locations are very large, while the number of teams was comparable across the cities. Fig 2 indicates that space coverage grows steadily with the number of measurements, meaning that users continue to explore new areas over the course of the challenge. However, while at the beginning of the challenge the growth is fast, it decreases in time. This indicates less exploration as the challenge evolves, due to the fact that volunteers measure at the same locations multiple times. When looking at individual phases, it appears that during phase 2 space coverage was much better than in phase 3. This does indeed mean that volunteers displayed a better exploratory behavior at the beginning and when asked to cover a specific area of the city, compared to when they were asked to map any place they wished. In the latter case, they went for their daily routes, which were not so extensive, and did not explore further. For both phases the growth of the space coverage follows a power law, with exponent 0.73 in phase 2 and 0.79 in phase 3. This suggests that, although in the short term space coverage in phase 2 is larger, in the long run the strategy of phase 3 might actually produce better coverage.
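A minimal sketch of the tile-coverage computation described above, assuming measurement positions have already been projected to metric x/y coordinates (all names are illustrative, not the project's actual code):

```python
from typing import Iterable, Tuple

TILE = 10.0  # tile edge length in metres

def covered_tiles(points_xy: Iterable[Tuple[float, float]]) -> set:
    """Map each (x, y) measurement to its 10 m x 10 m tile index;
    a tile counts as covered once it contains at least one measurement."""
    return {(int(x // TILE), int(y // TILE)) for x, y in points_xy}

# Example: three points, two of them falling in the same tile -> 2 covered tiles
print(len(covered_tiles([(3.2, 7.9), (8.5, 2.1), (25.0, 31.0)])))
```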
However, the restricted time frame of our challenge cannot provide further proof for this hypothesis.

Since pollution levels vary both in time and space, it is important to have multiple measurements in the same location. So, for each tile, we also look at how measurements are spread in time, i.e., time coverage. We divided the measurements into 8 categories based on the time of measurement. First we separated the working days (Monday to Friday) from the weekends (Saturday and Sunday). Each of the two groups was divided into 4 further categories, by setting time thresholds at hours 08:00, 14:00, 18:00 and 23:00. The entropy of the resulting sets was computed. For each square, we obtained the fraction f_i of measurements in each category i as the ratio between the measurements falling into that category and the overall number of measurements in that square. The entropy for that square is then S = −∑_{i=1}^{8} f_i log₂ f_i. A higher entropy indicates a better spread of measurements in time.

Fig 3 shows the distribution of the entropy for all squares covered, in a rank-entropy plot (squares are sorted in descending order by entropy and the entropy values plotted for each square). A few squares had very good time coverage. These correspond to hubs in the four cities, such as popular leisure locations (e.g. Königsstrasse in Kassel), main squares (e.g. Piazza Castello in Turin) and transportation hubs (e.g. the Barbican and Bank subway exits in London). At the other extreme there are many squares (more than half) that have been covered only in one time slot (entropy is 0). Between the two extremes, time coverage drops fast when moving through the ranked squares. The curves display jumps, and it appears that squares can be divided into sets based on time coverage. A first set (rightmost) includes those squares that have measurements only at one time of the day (entropy 0), which is followed by those covered in 2 time slots, ending with those that are covered at all times of the day (leftmost). Within each set, coverage decays differently. While for the highly covered squares the decay appears to be exponential (as plotted in the inset), it becomes slower as the coverage decreases, with curves resembling polynomial decay. When comparing the two phases, time coverage in phase 2 is much better overall than in phase 3. This indicates that volunteers not only explored more in space, but also in time, during phase 2, while in phase 3 they followed their daily schedule, which allowed for poor time coverage as well. This underlines again the importance of giving volunteers a specific mapping area in order to obtain better measurement spread.

The overall coverage results are also displayed as spatial heat maps in Fig 4 (phase 2) and Fig 5 (phase 3). These show the areas of the 4 cities (the mapping area for phase 2 and the entire city for phase 3) with the covered tiles. Bright colours correspond to higher time coverage, with bright red indicating the locations with most measurements. It is clear that the mapping area (phase 2) is much better covered than other areas (phase 3), with a few clear locations containing many measurements. These do correspond to landmarks and main roads in the 4 cities, as discussed earlier.

The measured BC levels can also provide useful insight into the aims and strategies of the volunteers during the challenge. To this end, we can examine how these change from phase 2 to phase 3.
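A minimal sketch of the time-coverage entropy just described; the assignment of the night hours (23:00-08:00) to a single wrap-around slot is an assumption, since the text only lists the four thresholds:

```python
import math
from collections import Counter
from datetime import datetime

def time_category(t: datetime) -> tuple:
    """Weekday/weekend x 4 daily slots bounded by 08:00, 14:00, 18:00 and 23:00
    (the night slot 23:00-08:00 wraps around midnight -- an assumption here)."""
    weekend = t.weekday() >= 5
    h = t.hour
    if 8 <= h < 14:
        slot = 0
    elif 14 <= h < 18:
        slot = 1
    elif 18 <= h < 23:
        slot = 2
    else:
        slot = 3  # 23:00-08:00
    return weekend, slot

def time_coverage_entropy(timestamps) -> float:
    """Shannon entropy (base 2) of a tile's measurements over the 8 categories;
    0 means all measurements fall in one slot, 3 means a perfectly even spread."""
    counts = Counter(time_category(t) for t in timestamps)
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```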
Thus, Fig 6 shows graphs of the BC levels measured in the two phases, and we can observe larger BC values in phase 3 (the distribution is shifted to the right). A Kolmogorov-Smirnov test was performed to test whether the differences are significant, and a p-value of 2.2e-16 was obtained, confirming the difference. When volunteers can freely choose where to take measurements, it appears that they primarily target more polluted areas. When the mapping area is restricted, they tend to have a more systematic approach and cover lower pollution levels as well. One may argue that pollution levels may change naturally from one day to another, so the shift we see could be due to a higher average pollution level from phase 2 to phase 3. However, comparison with reference data seems to suggest that this is not the case (S1 File). Additional comparisons per location are also included in S1 File.

The analysis of the structure and location of the collected objective data gives some insight into volunteer behavior and interests when measuring air pollution. Subjective data, on the other hand, can provide a stronger indication of changes in perception. For this, we look at the data collected by the web game, which consist of perceived levels of pollution in the mapping area, the AirPin values. In particular, to inspect awareness improvement and the learning process, we are interested in the relation between these annotations and the 'true' pollution values available in the web game during phase 3 in the form of AirSquares. Thus we define the APD (AirPin difference) as the difference between the AirPin value (the perception of the volunteer) and the corresponding AirSquare value (the real pollution level). In other words, the APD is the amount of 'error' in the annotation, understood as its distance from the measurement.

Fig 7 shows several distributions of the APD. In the left part we have APD distributions in each phase for Turin, Kassel and London. Antwerp did not reach the critical mass of data required for this analysis (the number of web game volunteers was very small). In phase 1, when no volunteer had been exposed to real measurements, we observe three different opinion structures in the three cities, representing the initial perception of the volunteers. A systematic overestimation of pollution is present, i.e., the APD has peaks at ~4 μg/m³. This is likely to be caused by a scale misunderstanding: players, who were not accustomed to the BC concentration scale, were largely unaware of which values were to be considered reasonable and thus used the middle of the scale (i.e., 5 μg/m³) as a 'normal' value. This results in the observed overestimation, since the real average BC concentration measured lies between 1 and 2 μg/m³.

In phase 2 things began to change. Some volunteers (so-called Air Ambassadors) were given the sensor boxes to start performing measurements. The web game players consisted of these volunteers plus a set of other players recruited by them (so-called Air Guardians). No data, except for the direct feedback from the boxes, were shown to the volunteers. Even so, a change is visible in the distribution of the APD reported in the left part of Fig 7. By observing the measurements from their sensor boxes, volunteers learn that in general BC concentrations are lower than what they believed, and respond by changing the values of their AirPins or by taking the information into account when placing new ones.
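For illustration, the APD and the two-sample Kolmogorov-Smirnov comparison described above can be sketched as follows (array names are hypothetical, and scipy's standard ks_2samp stands in for whatever implementation was actually used):

```python
from scipy.stats import ks_2samp

def apd(airpin_values, airsquare_values):
    """AirPin difference: perceived minus measured BC for matched locations."""
    return [pin - square for pin, square in zip(airpin_values, airsquare_values)]

def compare_phases(bc_phase2, bc_phase3):
    """Two-sample Kolmogorov-Smirnov test on BC readings from the two phases;
    a tiny p-value indicates the two distributions differ significantly."""
    stat, p_value = ks_2samp(bc_phase2, bc_phase3)
    return stat, p_value
```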
Since this change is quite significant, we also believe that the volunteers with the sensor boxes spread the information about what they were measuring, so that all players changed their perception. This decrease in the pollution levels reported in the subjective data of phase 2 is a first strong indication of learning during this phase. The right side of Fig 7 shows APD distributions separately for AirAmbassadors (performing measurements) and AirGuardians (who had no direct exposure to measurements until phase 3). We analyzed just the Turin dataset, because in the other cities there was no clear distinction, due to Ambassadors sharing their sensor boxes. The opinion shift in phase 2 is very strong for AirAmbassadors, but some change is also visible for AirGuardians, at least for part of the AirPins. This indicates that there was interaction among players, so that not only the volunteers performing measurements, but also some of their friends, changed their perceptions.

Phase 3 brought an important change to the web game. AirSquares were made available, so players could acquire aggregated information (pointwise information would have been simply copied by the users) in the form of average pollution levels, within the respective square, measured by the sensor boxes. There is a corresponding radical change in the subjective air pollution estimation, emerging clearly in the left part of Fig 7. In all cities, there is a peak around zero in phase 3 in the APD distribution, meaning there were more players estimating the air quality correctly. This was in some way expected, since we are giving strong hints about pollution levels by means of the AirSquares, but there is something more happening. In London there is another, bigger peak, and in the other cities too the distributions show some asymmetry, indicating that people are not trusting the hints completely: if they were, the distribution would have been more similar to a delta function, i.e., narrow and symmetric.

In order to describe this phenomenon, we defined a stochastic transformation to reproduce the APD distribution of phase 3 starting from the APD distribution of phase 1. This transformation should reproduce the effect of the hints received by our volunteers on the initial distribution of their errors. Based on the empirical observations, the transformation takes into account two main effects: the possibility of complete trust in the hint, so that the opinion is reset near the hint, and the possibility of incomplete trust, so that the opinion is just shifted closer to the hint. The mathematical definition can be found in S1 File. The left part of Fig 7 shows, for each location, how the transformed phase 1 data (black squares) match the phase 3 distributions, and this has also been confirmed with statistical procedures described in Methods and in S1 File. This provides indirect evidence for the assumptions of our model on the effect of objective data (complete and incomplete trust). Also, we were able to measure the 'trust' in the hints for the three cities by fitting the model to the data. We obtained the lowest trust values in London and the highest ones in Turin (full results are reported in S1 File).

Discussion
Volunteer participation is crucial for the success of bottom-up monitoring campaigns; however, most projects concerned with air pollution monitoring concentrate only on the development of the necessary technical tools.
Here, we give a different, user-centric perspective, using the experience from the EveryAware project, through its large scale international challenge, APIC. The tools developed by the project are described in more detail in S1 File. During the challenge both objective and subjective data were collected, and they are used here to analyze participatory patterns and possible changes in behavior or perception.

Objective measurements allowed for an analysis of user interests during the challenge and of activity patterns. A large number of measurements was obtained; however, coverage varied from location to location, with higher values when monitoring areas were restricted. Both the coverage and the pollution levels measured indicated a volunteer tendency to monitor familiar areas when there was no restriction, together with a search for highly polluted spots.

Subjective data, on the other hand, allowed for an analysis of perceived pollution levels and learning mechanisms. We observed, by analyzing differences between perceived and real pollution levels, that users are able to reduce the 'errors' in their annotations by learning the true values. However, some inertia in changing the old opinion structure was also observed, since asymmetric tails and slow shifts of old peaks are present. We also looked at differences between AirAmbassadors (volunteers with sensor boxes who played the web game) and AirGuardians (web game players only). In phase 1 there is no clear distinction between them, as expected. In phase 2, Ambassadors, who begin to learn real pollution levels from the sensor boxes, start to shift their opinions, reducing the errors, while Guardians change less. Finally, in phase 3 we observe Ambassadors continuing to shift their opinions in a smooth way, with a certain inertia, while Guardians change radically, showing a prominent primary peak at zero estimation error with a secondary peak at the position of the old peak. We can argue that the personal experience of the Ambassadors produces a smoother transition (which begins in phase 2), while the in-game information produces radical changes. Still, both approaches show the inertia we described earlier, even if in different forms.

In general, we can conclude that all our evidence shows that involving volunteers in monitoring campaigns can result in large amounts of data being collected. These data show that participation can help learning, creating a more accurate perception of air quality. Thanks to our case study, it has also been possible to outline some of the mechanisms behind the resistance of subjective opinions to objective results.

Based on our experience, we can also propose a set of recommendations for future similar studies. First, the delineation of a mapping area is important; otherwise coverage is not uniform and becomes difficult to control. A second factor affecting the uniformity of measurements is the length in time of the test cases. These should be at least a few weeks long, and ideally even span a few months, since at the beginning users tend to actively look for highly polluted spots. This is also important if behaviour shifts are expected, since just a few weeks are not enough to observe behaviour change. In terms of recruiting, our experience shows that upfront talks and events are most effective in attracting volunteers.
As for enhancing awareness, we found that encouraging volunteers to recruit participants for some of the activities (the web game in our case) among their friends allowed information from the sensors to spread also to volunteers not involved in the measuring activity. So, when the number of sensors is restricted, these other activities can facilitate the spread of awareness outside the measuring group as well.

Materials and Methods
The study presented here is based on data collected by volunteers during a large scale test case (the AirProbe International Challenge, APIC) organized in four European cities (Antwerp, Kassel, London and Turin) from October 2013 to November 2013. It required volunteers to measure air quality as well as provide their opinions on air pollution, using the EveryAware platform. This consists of a sensing device (Sensor Box) measuring air pollution, a mobile application (AirProbe) allowing for data visualization and upload to servers, a set of web services and websites handling data storage and visualization, and a web game developed on the XTribe platform [23] allowing the collection of individual perceptions of pollution. In the following we provide a brief description of each of the components and of the tools used for data analysis, with further details included in S1 File.

Ethics statement
This work is part of the European project EveryAware, contract number IST-265432. The European Commission finances only those projects that comply with its ethics and privacy regulations. Citing from the regulations of the Seventh Framework Programme, Decision No 1982/2006/EC, Article 6: "All the research activities carried out under the Seventh Framework Programme shall be carried out in compliance with fundamental ethical principles." At the same time, the official rules for participation, Article 15, mention: "A proposal which contravenes fundamental ethical principles shall not be selected. Such a proposal may be excluded from the evaluation and selection procedures at any time." Hence, acceptance and funding of this work by the European Commission implies approval of the ethics statement made in the proposal. This is why no further formal ethics approval was required for this research to be performed.

All participants in our study had to participate in training for using the sensor box and install our mobile application. Before admission to the test case, all volunteers were required to sign our Terms and Conditions, which represents the user's consent to the use of the measurements made. These clearly state that the data will be used for research purposes only and that no personal information will be made public or used for other purposes. This includes sensitive information, such as location data, names and contact information, that was collected during the test case. Volunteers were recruited using a range of approaches in each city. These included a designated Facebook page, the EveryAware project website, posters, newspaper articles and either university mailing lists or those of local interest groups and environmental agencies (see the Methods section 'Case study' and S1 File for further details). There were no specific inclusion/exclusion criteria used in the process. All volunteers could leave the study at any stage; however, none chose to do so. There were 72 volunteers recruited in total, grouped into teams: 19 in Antwerp, 8 in Kassel, 35 in London and 10 in Turin. All volunteers named in the Acknowledgements section gave specific permission to be named.
Sensing device: the sensor box
The sensor box contains a sensor array of 8 commercially available gas sensors and two meteorological sensors (temperature and humidity). The gas sensor array consists of low-cost continuous sensors for CO, NOx, O3 and VOCs, which are important pollutants in urban outdoor environments. These pollutants are either directly emitted by vehicles or other combustion processes, or formed from precursors emitted in vehicle exhaust. The main criteria for sensor selection were the specific requirements posed by the mobile use of the sensor box for air quality monitoring, as well as hardware compatibility with the box. The gas sensors were examined by a range of performance tests under laboratory and outdoor conditions. These tests showed that none of the individual sensors can be used on its own. The observed selectivity, stability and response times of the different sensors introduced the need for a multivariate calibration procedure for the sensor boxes. Performance tests and calibration are described in more detail in S1 File.

The sensor box electronic system has been designed with the purpose of being a low-cost, open and scalable platform. It is composed of two main boards (Fig 8). The first is a general purpose board that includes basic storage (micro SD card), positioning (GPS) and communication (Bluetooth) capabilities, while the second is a sensor shield able to host all gas sensors. The design is based on Arduino components and is completely open source, so that anyone can reproduce and modify the hardware, or even use the original hardware and develop different software to run on it.

The AirProbe mobile application
AirProbe is an Android application designed to connect to the sensor box via Bluetooth, acquire sensor readings and transmit them to the EveryAware servers as soon as a working connection to the Internet becomes available. In addition, the application allows users to visualize the data they collect. Specifically, they can see their tracks on a map, calculate an estimated black carbon exposure and follow the sensor output in real time plots. While collecting data, users can make free annotations (tags) that will be attached to the recordings and sent to the servers.

Web platform
The case study web platform [24] is designed for collecting, storing, retrieving, analysing and visualizing large amounts of data from different data sources. It provides endpoints for applications like the AirProbe mobile application to upload data to. These data are then processed and cleaned, with several statistics and visualizations available on a public as well as a personal level. This facilitates further analysis and a deeper understanding of the data by the user. A collection of statistics pages provides overall information about the data, such as graphs showing the currently active sensor boxes, the overall black carbon average per day, or the overall number of collected measurements per day. Also, information on separate sessions corresponding to different tracks (defined both by the Sensor Box and by the user) is available. This allows users to compare routes and locations. A world map gives a visual overview of the collected data. This includes cluster and grid views as well as a heatmap representation of the collected data on a personal as well as a global level, providing visual information about areas with good measurement coverage and their average pollution levels.
Users also have the possibility of downloading their own data, in case they want to compile any further personal statistics. During the APIC challenge, the platform was specifically tuned to the needs of the game. Even though the platform supports several statistics and visualizations of the data, most of this functionality was disabled during the second stage of the challenge, in order to keep opinions on air quality during the web game as unbiased as possible. The goal was for the AirAmbassadors and their sensor boxes to be the sole source of information regarding real measurements, in order to limit information flow and facilitate a more controlled environment for the experiment. All visualisations were back online in the third phase of the challenge. The web platform also provided a ranking page to keep the AirAmbassadors motivated throughout the challenge. Points were issued for space and time coverage during each collection phase. The ranking page showed which city and which team was ranked first, globally as well as per city. In addition, the AirAmbassadors and their teams were able to access several statistics about their measurement behavior and the data collection process, including a coverage heatmap, the number of covered squares and their points.

The web game
The AirProbe web game is a simplified map management game. Players are called to fulfil their role of Air Guardians by annotating the map with so-called AirPins: geo-localized flags tagged with an estimated or perceived pollution level (black carbon concentration in μg/m³, on a scale from 0 to 10). The game area of each city is divided into tiles. At the beginning of the game, users are asked to create a profile (by choosing an avatar and a name) and to choose a city and a team. The volunteer then starts from a given tile of the map of the chosen city. Users can interact by placing (or editing or removing) AirPins or by expanding their territory, i.e., buying more tiles. Each day, the AirPins placed generate a revenue based on the precision of the annotation (precision depends on what other users think of the same area). In order to collect the revenue generated every day by each AirPin, the user has to access the game daily, otherwise the revenue is lost. The collected revenue is added to the user's balance, allowing them to buy more AirPins and more tiles. In this way, players can build their air pollution perception map. At the beginning of phase 3, a new feature was made available in the web game: the AirSquare map. This consisted of an alternative map on which players could buy AirSquares, i.e., information about measured pollution levels aggregated over a small area. This data spreading stimulated the learning process described earlier.

Case study
In order to set up the APIC study, volunteers were recruited in each of the four cities; they comprised two types of participants: Air Ambassadors, who were tasked with collecting air quality measurements with the sensor box, playing the online game, and recruiting Air Guardians; and Air Guardians, whose central focus was to play the online game and who were linked to a team of Air Ambassadors. Volunteers were recruited using a range of approaches in each city. These included a designated Facebook page, the EveryAware project website, posters, newspaper articles and either university mailing lists or those of local interest groups and environmental agencies (see S1 File for further details).
Incentives were offered during the initial call to participate in the study, with the aim of encouraging participation and maintaining engagement. Prizes were given to the team of Air Ambassadors with the best temporal/spatial air quality measurement coverage and to the most active Air Guardians in each city over the different phases. Various strategies were incorporated into the online game to encourage ongoing play, and the prizes related to the number of days played and the total revenue gained for each day of play. The rewards offered varied slightly across the four cities and are detailed in S1 File.

Data analysis

To model the evolution between the phases of the APD distribution represented in the left part of Fig 7 (Phase 1 trans.), we implemented a simple modeling approach that rearranges the opinions depending on their distances from the hint, which is defined in S1 File. The transformation introduces 4 parameters, quantifying inertia effects in the opinion shift. To check the quality of our model and to determine the values of the introduced parameters, we applied a Kolmogorov-Smirnov test to the phase 3 dataset and to the transformed phase 1 dataset. Since the model is stochastic, we performed several runs and found a convincing p-value of 20%, which means that the hypothesis is consistent with the observations. More details are provided in S1 File.

Supporting Information

S1 File. Platform description and further data analysis. Details for the different platform components and data features can be found in this file. (PDF)
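A minimal sketch of the model check just described, using SciPy's two-sample Kolmogorov-Smirnov test; the arrays are synthetic stand-ins for the phase 3 data and the transformed phase 1 data.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-ins for the real data: opinions from phase 3, and phase 1 opinions
# after one application of the stochastic opinion-shift transformation.
phase3 = rng.normal(loc=5.0, scale=1.5, size=400)
phase1_transformed = rng.normal(loc=5.1, scale=1.5, size=400)

# Large p-value: the two samples are statistically indistinguishable, i.e.
# the transformed distribution is consistent with the observed one.
stat, p_value = ks_2samp(phase1_transformed, phase3)
print(f"KS statistic = {stat:.3f}, p = {p_value:.2f}")

# Because the transformation is stochastic, the test is repeated over many
# runs; a typical p-value around 0.20 matches the result reported above.
```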
In Vitro impairment of whole blood coagulation and platelet function by hypertonic saline hydroxyethyl starch

Background: Hypertonic saline hydroxyethyl starch (HH) has been recommended for first-line treatment of hemorrhagic shock. Its effects on coagulation are unclear. We studied the in vitro effects of HH dilution on whole blood coagulation and platelet function. Furthermore, 7.2% hypertonic saline, 6% hydroxyethyl starch (as ingredients of HH), and 0.9% saline solution (as control) were tested in comparable dilutions to estimate component-specific effects of HH on coagulation.

Methods: The study was designed as an experimental non-randomized comparative in vitro study. Following institutional review board approval and informed consent, blood samples were taken from 10 healthy volunteers and diluted in vitro with either HH (HyperHaes®, Fresenius Kabi, Germany), hypertonic saline (HT, 7.2% NaCl), hydroxyethyl starch (HS, HAES 6%, Fresenius Kabi, Germany) or NaCl 0.9% (ISO) in a proportion of 5%, 10%, 20% and 40%. Coagulation was studied in whole blood by rotation thrombelastometry (ROTEM) after thromboplastin activation without (ExTEM) and with inhibition of thrombocyte function by cytochalasin D (FibTEM); the latter was performed to determine fibrin polymerization alone. Values are expressed as maximal clot firmness (MCF, [mm]) and clotting time (CT, [s]). Platelet aggregation was determined by impedance aggregometry (Multiplate) after activation with thrombin receptor-activating peptide 6 (TRAP) and quantified by the area under the aggregation curve (AUC [aggregation units (AU)/min]). Scanning electron microscopy was performed to evaluate HyperHaes-induced cell shape changes of thrombocytes. Statistics: 2-way ANOVA for repeated measurements, Bonferroni post hoc test, p < 0.01.

Results: Dilution impaired whole blood coagulation and thrombocyte aggregation at all dilutions in a dose-dependent fashion. In contrast to dilution with ISO and HS, respectively, dilution with HH as well as HT almost abolished coagulation (MCF (ExTEM) from 57.3 ± 4.9 mm (native) to 1.7 ± 2.2 mm (HH 40% dilution; p < 0.0001) and to 6.6 ± 3.4 mm (HT 40% dilution; p < 0.0001)) and thrombocyte aggregation (AUC from 1067 ± 234 AU/min (native) to 14.5 ± 12.5 AU/min (HH 40% dilution; p < 0.0001) and to 20.4 ± 10.4 AU/min (HT 40% dilution; p < 0.0001)), without differences between HH and HT (MCF: p = 0.452; AUC: p = 0.449).

Conclusions: HH impairs platelet function during in vitro dilution already at 5% dilution. Impairment of whole blood coagulation is significant at 10% dilution or more. This effect can be pinpointed to the platelet-function-impairing hypertonic saline component and, to a lesser extent, to fibrin polymerization inhibition by the colloid component or dilution effects. Accordingly, repeated administration and overdosage should be avoided.

Background

Normovolemia and sufficient coagulation capacity are major goals during early resuscitation of traumatized patients with hemorrhagic shock. Nevertheless, significant morbidity and mortality are related to coagulopathy due to loss and consumption of coagulation factors as well as volume-substitution-induced hemodilution. After patient admission to the emergency care department, definite strategies have been established to improve outcome after severe hemorrhagic shock [1], including transfusion of packed red blood cell concentrates, fresh frozen plasma, cryoprecipitate and coagulation factor concentrates.
However, during the prehospital period, various crystalloids and colloids have been suggested for treatment of hemorrhagic shock. Whatever fluid is administered, there is at least a dose-dependent dilution of coagulation factors, which is associated with a further impairment of coagulation. Recently, small volume resuscitation by intravenous administration of small amounts of hypertonic saline hydroxyethyl starch has been introduced for rapid restoration of normovolemia following severe trauma. However, both hypertonic sodium chloride and hydroxyethyl starch impair coagulation and platelet function; the former by altering plasma clotting times and platelet aggregation [2], the latter by decreasing FVIII plasma concentration and by interfering with fibrin polymerization, thus decreasing clot strength [3-6]. Nevertheless, in a porcine model of hemorrhagic shock and resuscitation, the least effects on coagulation were, in general, observed following small volume resuscitation with hypertonic saline hydroxyethyl starch [7]. Since small volume resuscitation was associated with alterations in the coagulation system in this animal model as well, we evaluated these complex effects on coagulation and thrombocyte function in vitro in human whole blood and tested the hypothesis that HyperHaes impairs whole blood coagulation and platelet function.

Methods

The study was designed as an experimental non-randomized comparative in vitro study. Following institutional review board approval (study number: 2953, University Hospital Düsseldorf), this study was conducted in accordance with the Helsinki Declaration and the European Union's Convention on Human Rights and Biomedicine. The guidelines for reporting non-randomized studies [8] were utilized in the drafting of this report.

Blood samples

Ten volunteers (six male/four female; average age 33.7 years (range: 26-42 y)) of Caucasian origin participated in the study after oral and written information and written consent. All volunteers were healthy and free of medication. Blood was taken from a basilic vein using an 18-gauge IV catheter and collected in both citrated and heparinized tubes (Vacutainer, Becton Dickinson, Heidelberg, Germany).

Whole blood coagulation

Whole blood coagulation was analyzed by rotation thrombelastometry (ROTEM, TEM International, Munich, Germany) in citrated whole blood samples. The technique has been described previously elsewhere [9-11]. In brief, ROTEM analyzes viscoelastic clot characteristics over time in activated whole blood and records both the time course of clotting and the firmness of the resulting clot. The following commercially available tests were performed following the manufacturer's instructions: ExTEM (extrinsic activation by tissue factor) and FibTEM (extrinsic activation by tissue factor with addition of cytochalasin D to inhibit platelet function and display fibrin polymerization only; all tests: Pentapharm, Munich, Germany). Since maximum clot firmness (MCF) in whole blood coagulation is mainly determined by platelet function and fibrin polymerization, while clotting times (CT) are dependent on the speed of thrombin generation by clotting factors [10], the chosen parameters were CT, quantifying the time from the beginning of the reaction until the start of clot formation, and MCF, indicating clot stability at its highest degree. Since samples for thrombelastometry are recommended to be analysed within two hours, we used three ROTEM devices in parallel.
Tests were performed in a standard sequence. ROTEM devices were chosen in a random order.

Platelet function

Platelet function was determined by multiple electrode aggregometry (MEA) using a novel multiple platelet function analyzer (Multiplate, Dynabyte, Munich, Germany; heparinized whole blood samples) following TRAP activation (thrombin receptor-activating peptide 6, TRAPtest, Dynabyte, Munich, Germany). The technique has been described previously elsewhere [11]. MEA utilizes single-use test cells. These cells contain two pairs of sensor wires extending into a 50% diluted whole blood sample. Platelets are non-adhesive in the resting state, but when activated they stick to the sensor wires, enhancing the electrical impedance between the wires. These impedance changes are recorded over a period of six minutes. Tests were performed according to the manufacturer's instructions. As an indicator of platelet function, the area under the aggregation curve (AUC) was determined, indicating overall platelet activity.

Electron microscopy

Scanning electron microscopy (SEM) was performed at 1:2000 and 1:5400 magnification to evaluate the effects of HyperHaes on the cell shape of the thrombocytes, using a Jeol 35 CF SEM with documentation by Orion 6.60 software (Orion Microscopy, Belgium).

Statistical analysis

A power analysis was performed based on the results of a previously performed pilot test. Assuming an alpha error of 0.05 with a power of 0.95, we calculated a necessary sample size of 8 to show a significant effect of a 10% dilution of HH on MCF in the ExTEM test. Based on this calculation, and to ensure robust data, we chose to increase the sample size to 10. After confirming normal distribution (Shapiro-Wilk test), two-way ANOVAs with Bonferroni post-hoc testing were performed for statistical analysis. The Statistical Package for the Social Sciences (SPSS for Windows, 13.0, SPSS Inc., Chicago, IL, USA) and GraphPad Prism (Version 4.02, GraphPad Software Inc., San Diego, CA, USA) were used. Values are displayed as median ± standard deviation. Considering a confidence interval of 99%, an α-error below 0.01 was considered statistically significant.

Whole blood coagulation

Maximum clot firmness (MCF) in rotational thrombelastometry after extrinsic activation (ExTEM) showed a dose-dependent impairment in all tested groups (Figure 1A). In the control group ISO, significant differences to baseline were found at 40% dilution (p = 0.0001). In HH and HT, a significant influence on MCF was found when dilution was ≥10% (HH: p = 0.0009; HT: p = 0.0002). HS impaired MCF significantly when dilution was ≥20% (p = 0.0033). No differences were found between HH and HT (p = 0.452). HS (p < 0.0001) and ISO (p < 0.0001) showed less impairment of MCF compared to HH. Clotting times (CT) were significantly prolonged in all tested groups except the control group ISO (Figure 1B). ISO did not induce significant differences as compared to baseline (ISO 40% dilution; p = 0.128). A significant influence on CT was found in HH and HT when dilution was 40% (HH: p = 0.0003; HT: p = 0.0002). HS already prolonged CT significantly at 20% dilution (p = 0.0022). Fibrin polymerization (FibTEM) was significantly impaired in all tested groups (Figure 1C). In the control group (ISO), MCF as compared to baseline was significantly reduced when dilution was ≥20% (p = 0.0005). A significant reduction of MCF by HH was found when dilution was ≥10% (p < 0.0001).
HT significantly impaired MCF at 40% dilution (p = 0.0006). MCF was significantly reduced by HS throughout the test, beginning at 5% dilution (p = 0.0033).

Platelet function

AUC was significantly impaired in all tested groups, including ISO, in a dose-dependent fashion (Figure 1D). As compared to baseline, ISO and HS significantly decreased AUC when dilution was ≥10% (ISO: p = 0.0022; HS: p = 0.0002). AUC was significantly decreased in HH and HT at all tested dilutions, beginning at 5% dilution (HH: p = 0.0001; HT: p = 0.0014). Between HH and HT no significant differences were found (p = 0.449), while the impairment of platelet function by HH was more pronounced compared to HS (p = 0.0011) and ISO (p < 0.0001).

Electron microscopy

Dilution with HH caused deformed platelets and large aggregates of platelets (Figure 2). Since aggregate formation prohibits exact counting of the platelets within these aggregates, a quantification of morphological changes was impossible.

Discussion

HH significantly impairs whole blood coagulation and platelet function in a dose-dependent fashion in vitro by reducing platelet function as well as fibrin polymerization. The mechanism can be attributed to the hypertonic saline component and is associated with dehydration and activation of platelets, leading to accumulation of thrombocytes, as demonstrated by scanning electron microscopy. HH is suggested for first-line treatment in hemorrhagic shock. Since studies in trauma patients are always affected by inhomogeneous patient cohorts, we chose an in vitro dilution model to standardize study conditions, to estimate the effects of HyperHaes and to identify a possible coagulation-impairing component. Since our study was not designed to evaluate effects on circulatory conditions, we did not adapt the dilution volumes of the different agents to their possible hemodynamic potentials but applied them in a fixed manner relative to HH infusion alone. Furthermore, the study cannot assess or predict effects on blood loss or outcome. In vitro studies on coagulation are limited because complex hemostasis pathways cannot be simulated in a completely natural way. Interactions between primary and secondary hemostasis cannot be fully displayed in coagulation tests. Regular laboratory tests on coagulation use plasma as the matrix for analysis. We therefore decided to use rotational thrombelastometry and multiple electrode aggregometry, which assay whole blood as a more physiological matrix, to assess coagulation including platelet function. Furthermore, thrombelastometry analyzes the end product of coagulation, the clot itself, and its stability over time, which indicates the clot-building potential at the time of analysis. A dynamic time course of coagulation impairment and possible recovery from impairment cannot be displayed in our study. In vivo, osmolarity is influenced by numerous factors. In dogs, a 50% blood volume withdrawal followed by infusion of 4 ml/kg hypertonic NaCl (2400 mOsmol/l, which is comparable to HyperHaes) led to an increase of plasma osmolarity from 307 mOsmol/l to 333 mOsmol/l within 30 minutes [11]. Estimating an average plasma osmolarity of 300 mOsmol/l and an osmolarity of 2400 mOsmol/l for HyperHaes, in vitro dilution by 5% would suggest a resulting osmolarity of approximately 405 mOsmol/l, which is already markedly above physiological levels. These in vitro high-osmolarity conditions could compromise the translation of the results into clinical settings.
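The 405 mOsmol/l figure follows from simple volume-weighted mixing of the two osmolarities; this is only a back-of-envelope check of the estimate above, with f denoting the dilution fraction.

```latex
\mathrm{Osm}_{\mathrm{mix}}
  = (1-f)\,\mathrm{Osm}_{\mathrm{plasma}} + f\,\mathrm{Osm}_{\mathrm{HH}}
  = 0.95 \times 300 + 0.05 \times 2400
  = 405~\mathrm{mOsmol\,l^{-1}}, \qquad f = 0.05.
```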
Nevertheless, it remains unclear if compensation mechanisms are able to adjust osmolarity before it interferes with platelets. In a different setting of acidosis and diminished coagulation, laboratory parameters did not return to normal after compensation of the acidosis [12]. Furthermore, it is possible that repeated administration or overdosage of HH could account for a non-physiological increase in osmolarity exceeding the possibilities of compensation. Normal blood volume in adults may be estimated to be 70-80 ml/kg bodyweight. Accordingly, the recommended HH dose of 4 ml/kg bodyweight in patients with hemorrhagic shock yields a hemodilution of 1:17.5 (5.7%) to 1:20 (5%). Since this mirrors normal conditions without blood loss, we chose a 5% dilution as the lowest degree of dilution for our study. Blood loss would lead to a further reduction in circulating blood volume and thus to a relatively increased portion of infused HH per ml of blood volume, resulting in an increased test agent/blood ratio, ergo greater dilution. A blood loss of 50% of blood volume would then lead to approximately 1:10 (10%) dilution, 75% blood loss would account for a 1:5 (20%) dilution, and 40% dilution would be comparable to 87.5% blood loss. With respect to this consideration, increasing blood loss would lead to increasing relative overdosage, accounting for a possible enhancement of otherwise induced coagulation disorders. Even 5% whole blood dilution with HH significantly impaired platelet function. This effect on thrombocytes cannot be adequately detected in whole blood coagulation. However, MCF was affected in all samples with ≥10% dilution, and CT prolongation finally occurred when dilution was 40%. Maximum clot firmness in whole blood coagulation is basically determined by platelet function and fibrin polymerization, while clotting times are dependent on the speed of thrombin generation by clotting factors [13]. Thus, HH affected platelet function and fibrin polymerization more severely than the action of clotting factors. The HS portion of HH is responsible for its interference with fibrin polymerization, since we demonstrate a comparable impairment of fibrin clot firmness by HH and by HS. It is well known that HS inhibits fibrin polymerization [14-18]. Our data are consistent with these findings. This effect is most likely caused by dilution of fibrinogen [19] and decreased FXIIIa-mediated fibrin cross-linking [14,15]. However, the precise molecular mechanism still remains unclear. The mechanism of action by which HH improves blood pressure is based on the mobilization of extravasal fluids along an osmotic gradient by intravascular administration of HH [20]. We suspected this intravascular hyperosmolarity also to be one possible mechanism of interaction between the hypertonic solution and the platelets, leading to dehydrated and functionless thrombocytes. Platelets treated with and without HH were examined by electron microscopy.

Figure 2: Scanning electron microscopy of native platelets (panels A and C) and platelets from blood after 40% dilution with HyperHaes (panels B and D) at 5400-fold (panels A and B) and 2000-fold (panels C and D) magnification. Representative scans demonstrate deformed platelets, spreading activated platelets (panel B), as well as large aggregates of activated platelets (arrows in panel D). Note the small bars on the lower right side of each panel indicating lengths of 1.0 U = 1 μm (panels A and B) and 10.0 U = 10 μm (panels C and D), respectively.
In the HH-diluted samples, deformed single platelets as well as large aggregates of activated platelets can be seen (Figure 2). Such aggregates could account for a loss of platelet function and, in vivo, could lead to an obstruction of small vessels and a reduced platelet count as well. Detection of such aggregates after in vivo administration of hypertonic saline solution has not been performed to date and would be of great interest with regard to our findings. In experimental settings, controversial effects of HH on coagulation have been described. In animal models of uncontrolled hemorrhage, treatment with hypertonic saline led to an aggravation of hemorrhage [21-23]. In these studies only hypertonic saline was studied, while HS was not administered alone or in combination with hypertonic saline. In a recent study in a porcine model of uncontrolled hemorrhage after liver injury, less hemorrhage was observed after HH administration compared to the use of colloids alone [7]. However, in this study red blood cells collected by an automated cell saver were infused simultaneously with the test agent. As a consequence, the dosage of the hypertonic and hyperoncotic agent was relatively reduced by the parallel infusion of red blood cells, which could have weakened the coagulation-impairing effect of HH. Despite this, to reflect comparable hemodynamic potential, greater volumes of colloid infusions were administered, leading to a higher dilution of clotting factors in the control group. Since red blood cell concentrates or cell saver blood are available only in the hospital, the setting of this study is more comparable to an admission to the emergency room or the operating theatre than to a preclinical situation. As a consequence, conclusions on the influence of these solutions on coagulation and blood loss in a preclinical situation should be drawn with caution. Another hazard might occur when hypertonic saline is used in combination with large doses of colloids, due to additional risks of adverse effects of the colloids themselves, such as anaphylactic reactions or reduction of kidney function, which also have to be considered [24-27]. In different clinical situations of major blood loss, such as penetrating chest trauma [28], patients undergoing cardiac surgery [29,30], or vascular surgery [31-33], studies indicating beneficial effects on outcome have been published. However, meta-analyses showed, if any, only minor improvements in survival, no matter whether hypertonic saline solution is used exclusively or in combination with colloids [34-36]. Our results indicate that HH causes a dose-dependent impairment of platelet function and whole blood coagulation. However, these effects appear to be small at dilutions comparable to the dilution expected after treatment of shock when the circulating blood volume is not reduced. From a different point of view, this implies that, considering the small therapeutic index, the risk of overdosage is high, and overdosage should be strictly avoided. Whether this also applies to repeated administration, and how long a time interval would allow safe repeated administration of HH, cannot be assessed in the present study and may be addressed in future investigations. Furthermore, the recommended dosage of HH is calculated with respect to bodyweight. In clinical situations, variables such as body weight can be assessed easily. In preclinical situations it is much more difficult to assess the patient's bodyweight, which could lead to overdosage per se.
We calculated our dilution series to compare the resulting dilution effects to HH treatment at different degrees of severe blood loss. Since we found greater effects on platelets with increasing dilution due to higher drug levels, we suspect that HH treatment shows increasing negative effects on coagulation and platelet function with increasing blood loss, due to possible relative overdosage. HH is designed to help stabilize circulatory conditions in exactly these situations. This implies that the dosage in patients with higher blood loss should be calculated with care, repeated administration should be avoided, and the physician should be aware of increasing coagulopathy. Since it remains questionable whether our findings can be transferred to clinical settings, clinical studies are necessary to evaluate these issues.

Conclusions

HyperHaes, as an example of a hypertonic saline hydroxyethyl starch solution, impairs whole blood coagulation and platelet function in a dose-dependent fashion. Responsible for the impairment of platelet function is the hypertonic saline component, while the interference with fibrin polymerization is based on both colloid and dilution effects. Overdosage, and relative overdosage due to underestimated blood loss, should be avoided, and the possibility of increasing coagulopathy should be kept in mind.
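To make the dose arithmetic from the discussion concrete, here is a minimal Python sketch of the dilution estimates used above; the default blood volume of 75 ml/kg is a midpoint assumption within the stated 70-80 ml/kg range.

```python
def hh_dilution_percent(dose_ml_per_kg=4.0, blood_vol_ml_per_kg=75.0,
                        blood_loss_fraction=0.0):
    """Approximate in-blood dilution of HH after a given fractional blood loss.

    Mirrors the paper's reasoning: the fixed 4 ml/kg dose is diluted into
    whatever circulating volume remains, so blood loss raises the effective
    test-agent/blood ratio (relative overdosage).
    """
    remaining = blood_vol_ml_per_kg * (1.0 - blood_loss_fraction)
    return 100.0 * dose_ml_per_kg / remaining

for loss in (0.0, 0.5, 0.75, 0.875):
    dilution = hh_dilution_percent(blood_loss_fraction=loss)
    print(f"{loss:6.1%} blood loss -> ~{dilution:4.1f}% dilution")
# roughly 5% without blood loss, ~10% at 50%, ~20% at 75%, ~40% at 87.5% loss
```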
Differences in weight status and energy-balance related behaviors among schoolchildren in German-speaking Switzerland compared to seven countries in Europe

Background: Overweight in children and adolescents has increased significantly and is a major public health problem. To allow international comparisons, Switzerland joined the European 'ENERGY' cross-sectional survey consortium, which investigated the prevalence of overweight and obesity as well as selected dietary, physical activity and sedentary behaviors of 10-12 year old pupils across seven other countries in Europe. The aims of the present study were to compare body composition and energy-balance related behaviors of Swiss schoolchildren to those of the seven European ENERGY countries and to analyze overweight and energy-balance related behaviors of Swiss children according to socio-demographic factors.

Methods: A school-based cross-sectional study among 10-12 year old children was conducted in Switzerland and seven other European countries using a standardized protocol. Body height, weight and waist circumference were measured by trained research assistants. Energy-balance related behaviors, i.e. selected dietary, physical activity and screen-viewing behaviors, were assessed by questionnaires. Weight status and behaviors in Switzerland were compared to the seven European ENERGY countries. Within the Swiss sample, analyses stratified by gender, parental education and ethnicity were performed.

Results: Data of 546 Swiss children (mean age 11.6±0.8 y, 48% girls) were obtained and compared to the ENERGY results (N=7,148; mean age 11.5±0.8 y, 48% girls). In Switzerland, significantly fewer children were overweight (13.9%) or obese (2.3%) compared to the average across the ENERGY countries (23.7% and 4.7%, respectively); the Swiss rates were even somewhat lower than those of the ENERGY countries with the lowest prevalence. Sugar-sweetened beverage intakes and breakfast habits of Swiss children did not differ significantly from those of ENERGY. However, the mean time devoted by Swiss children to walking or cycling to school and to attending sports activities was significantly higher, and screen time significantly lower, compared to the other ENERGY countries. Within the Swiss sample, relatively large and consistent differences were observed between children of native and non-native ethnicity.

Conclusions: The prevalence of overweight and obesity among Swiss children is substantial but significantly lower compared to all other European ENERGY-Partners, probably because Swiss children were found to be more active and less sedentary compared to the rest of the European sample.

Background

The number of overweight children in Europe has increased substantially over the last decades [1]. Although a recent meta-analysis indicated that this increase might have come to an end [2,3], the prevalence of overweight children remains high and constitutes a major public health problem. Overweight and obesity in childhood and adolescence increase the likelihood of being overweight in adulthood [4] and are associated with an increased risk for various diseases, such as type II diabetes, chronic back pain or cardiovascular diseases [5]. Thus they are important determinants of avoidable burden of disease. Recent reviews suggested increased consumption of sugar-sweetened beverages, breakfast skipping, lack of physical activity, high levels of TV and computer time and short sleep duration to be associated with overweight and obesity among school-aged children [6-10].
The European ENERGY project "EuropeaN Energy balance Research to prevent excessive weight Gain among Youth" has set out to develop a comprehensive intervention aiming to promote dietary and physical activity behaviors that contribute to a healthy energy balance among school-aged children [11]. A key part of the project was a cross-European study of measured weight status and reported energy-balance related behaviors (EBRB) among 10-12 year olds living in seven different European countries [11-13]. Switzerland was invited to join the ENERGY consortium, with its own funding, after the ENERGY project had been approved. So far, the only internationally comparable data on EBRB and overweight of Swiss children originated from the HBSC study and showed conflicting results [14]: on the one hand, Switzerland ranked low with respect to the prevalence of overweight and obesity, and on the other, levels of physical activity were also reported to be low [14]. Moreover, weight and height in the HBSC study were based on self-reports of the children, and no objective assessment of physical activity was available. The ENERGY project thus offered the opportunity to compare overweight rates and EBRB of Swiss children to their European peers using a standardized protocol and including objectively assessed data [15]. The present analysis aims (1) to compare body mass index (BMI), waist circumference, percentages of overweight and obesity, and EBRB of Swiss schoolchildren to those of the seven European ENERGY countries and (2) to analyze overweight and EBRB of Swiss children according to socio-demographic factors.

Sampling and organization of the study

The school-based cross-sectional survey included anthropometric measurements, a child questionnaire and a parent questionnaire, and was carried out among 10-12 year old pupils at school. A detailed description of the rationale and organization of the ENERGY project [11] and a comprehensive description of the design and procedures of the ENERGY school-based cross-sectional survey have been published elsewhere [12]. In brief: the study was conducted between June and December 2010 in differently urbanized regions of German-speaking Switzerland. The three regions randomly selected from each of the lowest, mid and highest tertiles of degree of urbanization were Basel, St. Gallen and Bern/Solothurn. The schools in these regions were randomly selected for inclusion in the study. The ENERGY protocol aimed for a minimum sample of 1,000 schoolchildren per country and one parent/caretaker for each child. A school recruitment letter was sent to the headmaster of the sampled schools, followed by a personal telephone call. Following a school's agreement, parents received a letter explaining the purpose of the study and were asked for written consent for their child's and their own participation. The study protocol was approved by the ethics committees of the participating cantons (Basel, Bern, Aargau and St. Gallen).

Measurements

Measurements were conducted according to the standardized ENERGY protocols. The children completed questionnaires and anthropometric measurements during school time. The parent/caretaker filled in the questionnaire at home. Detailed information regarding the procedures, training of research staff and development of questionnaires is published elsewhere [12].

Anthropometric measurements

Body height, weight and waist circumference were measured by trained research assistants. The children were measured in light clothing without shoes.
Body height was measured with a SECA 225 Leicester Portable stadiometer (accuracy of 0.1 cm). Weight was measured with a calibrated electronic scale, SECA 861 (accuracy of 0.1 kg), and waist circumference with the SECA 201 measuring band (accuracy 0.1 cm). Two readings of each measurement were obtained and the mean was used for analyses. When the two readings differed by more than 1%, a third measurement was conducted. Body mass index (BMI) was calculated as BMI = weight/height² (kg/m²). Overweight status (overweight, obesity) was determined based on the International Obesity Task Force (IOTF) criteria [16]. In order to make BMI comparable across age and sex, BMI standard deviation scores (z-scores) [17] were calculated.

Questionnaire

The English versions of the ENERGY questionnaires were translated into German, then back-translated and compared to the English versions. Dietary habits, physical activity and screen-viewing behaviors were assessed by the child questionnaire. The child's sleep duration, parental education and ethnic background were reported by the parents.

Dietary habits

Intakes of soft drinks and fruit juices were each assessed with two food frequency questions (FFQ), referring to a general week and to the last 24 hrs. First, children were asked on how many days per week they drank the beverage. Subsequently, they were asked to indicate how much they drank on days they consumed the beverage by ticking the number of glasses or bottles, which were pictured in the questionnaire. Mean intake in milliliters per day was calculated from the FFQ by multiplying the number of days per week by the amount per day in ml and dividing by 7.

Breakfast habits

Breakfast habits were assessed by two questions asking the children on how many schooldays per week and on how many weekend days they normally had breakfast. The frequency score was recoded into a breakfast-skipping score (had breakfast 7 days/week; had breakfast 0-6 times/week).

Physical activity behaviors

Transport to school was assessed by two questions on how many days per week the child cycled and/or walked to school and two questions on how long the bike ride or walk to school was. Questions referred to a general week and to the last 24 hrs. Total bike/walk time per week was calculated by multiplying the number of days by the mean time of the answer category and by 2 (round trip). Total active transport to and from school was calculated by adding up the total bike and walk times. Regarding organized sports participation, questions were included on how many hours per week children participated in different sports activities. Based on the answers, the average hours of sports participation per week were calculated for each child.

Sedentary behavior

Screen time was assessed by asking questions about time spent watching TV (including video and DVD) and on computer activities, for weekdays and weekend days separately, referring to a general week and to the last 24 hrs. Mean TV, computer and total screen time per day were calculated.

Sleeping

Parents indicated how many hours the child sleeps on average per night, separately for weekdays and weekends. A mean number of hours of sleep per night was then calculated.

Parental education

Parental education was assessed as a measure of socio-economic background by asking parents to report their own level of education and that of the other parent/caregiver. For analyses, the information of the parent/caregiver with the longer education was used.
In contrast to the other ENERGY-Partners, parental education in Switzerland was dichotomized into low and high using a cut-off of 12 years of education (ENERGY-Partners: 14 years), since preschool education does not count as school education in Switzerland.

Ethnic background

Ethnic background was assessed by all ENERGY-Partners based on the language spoken at home or on the country of origin of the parents [18]. It was classified as 'non-native' if a language other than German, French or Italian was spoken at home or if one or both parents were born in a foreign country, and as 'native' if German was spoken at home or if both parents were born in Switzerland.

Questionnaire validity

Test-retest reliability was tested in Switzerland according to the ENERGY protocol [19] by administering the questionnaire to 114 schoolchildren one week after the first assessment. To assess construct validity, the agreement between questionnaire responses and a subsequent face-to-face interview (15 children) was evaluated. Both test-retest reliability and construct validity were determined by calculating the intra-class correlation coefficient (ICC). Of the 36 questionnaire items related to EBRB in the child questionnaire, 72% showed good to excellent test-retest reliability as indicated by ICCs > 0.60, whereas 22% showed moderate (ICC 0.41-0.60) and 0.5% poor reliability (≤ 0.40). Similar results were found for construct validity.

Statistical analysis

Stata 11.2 (StataCorp LP, Texas, USA) was used for all statistical analyses. Means and standard deviations for continuous variables and percentages for categorical variables were reported for anthropometric measurements and EBRB. Because of skewed distributions, medians were also provided (either in the tables or in Additional files 1-4). All skewed distributions were log-transformed for analyses. T-tests were performed to assess differences in means of anthropometrics and EBRB between the Swiss sample and the ENERGY-Partners. In addition to the mean values of the ENERGY-Partners, the range of individual country means was also provided. To assess differences in anthropometrics and EBRB within Switzerland according to gender, ethnic background and parental education, t-tests for means, Wilcoxon signed-rank tests for medians and chi-squared tests for proportions were calculated. Because of the high proportion of zeroes in the variables 'weekly minutes of walking to school' and 'weekly minutes of cycling to school', we used bootstrapping with 1,000 replications to confirm the results of the Wilcoxon test [20]. For all analyses, a p-value of 0.05 was used as the threshold for statistical significance.

Participant characteristics

Twenty-four of the 68 invited schools (35%) agreed to participate (Figure 1). Many schools declined participation, arguing that they were already busy with several other surveys. Informed consent for study participation was available for 636 (49.5%) out of 1,286 invited children. Rates ranged widely between schools (20% to 81%). Most children completed the child questionnaire (n=596) and the anthropometric measurements (n=609), and had a parent-completed questionnaire (n=577). Complete data were available for 564 children. Their mean age was 11.6±0.8 yrs, and 48% were girls. The total sample of the seven ENERGY-Partners comprised 7,757 children (mean age 11.5±0.8 yrs, 48% girls).
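As a minimal illustration of the derived variables described in the Measurements section, the Python sketch below reproduces the stated formulas (FFQ intake in ml/day, weekly active commuting time, BMI); the example values are hypothetical, not taken from the study data.

```python
def ffq_ml_per_day(days_per_week, ml_on_consumption_days):
    """Mean beverage intake (ml/day) from the two FFQ questions."""
    return days_per_week * ml_on_consumption_days / 7.0

def active_commute_min_per_week(walk_days, walk_min_one_way,
                                bike_days, bike_min_one_way):
    """Total walking + cycling school commute, assuming two trips per day."""
    return 2 * (walk_days * walk_min_one_way + bike_days * bike_min_one_way)

def bmi(weight_kg, height_m):
    """Body mass index, weight/height^2 (kg/m^2)."""
    return weight_kg / height_m ** 2

# Hypothetical child: soft drinks on 3 days/week (500 ml each time),
# walks to school 5 days/week (10 min one way), no cycling, 40 kg at 1.45 m.
print(round(ffq_ml_per_day(3, 500)))                 # ~214 ml/day
print(active_commute_min_per_week(5, 10, 0, 0))      # 100 min/week
print(round(bmi(40, 1.45), 1))                       # 19.0 kg/m^2
```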
Anthropometrics

A significantly lower prevalence of overweight (13.9%) and obese (2.3%) children was observed in Switzerland when compared to the mean of the seven European ENERGY countries (23.7% and 4.7%, respectively) (Table 1). The overweight prevalence differed greatly between the seven European ENERGY countries (ranging from 14.4% in Norway to 40.8% in Greece) but was always higher than in Switzerland. Similarly, mean BMI and mean waist circumference were lower in Swiss children. Within the Swiss sample, overweight and obesity rates (and related anthropometric indices) were significantly lower in Swiss natives as compared to non-natives, most notably when ethnicity was based on the language spoken at home (Table 2). Gender and socio-economic differences were less pronounced.

Dietary behaviors

The mean soft drink consumption of Swiss children (388 ml/day) differed significantly from that of the European ENERGY-Partners (Table 3) but was well within their range (soft drink consumption ranging from 114 ml/day in Greece to 632 ml/day in the Netherlands). The consumption of fruit juice (314 ml/day) was slightly but not significantly higher in Switzerland compared to the European average. Within Switzerland, boys, children from a lower socio-economic background and non-native children reported a significantly higher intake of soft drinks and fruit juices than girls, children from a higher socio-economic background and Swiss natives (Table 4).

Physical activity

Time spent on active commuting (walking and biking to school combined) was significantly higher in Switzerland compared to the average time spent in the European ENERGY-Partner countries (Table 3). The difference resulted mainly from the higher number of days the children walked to school and the longer duration of the walking trips. However, the total active commuting time in Switzerland was within the range of the European partners (40 min/week in Greece and 103 min/week in Norway). Swiss children also reported significantly longer engagement in sports activities (164 minutes/week) than the European children on average (149 minutes/week), although the Swiss results were again within the range of the ENERGY-Partners (ranging from 128 minutes per week in Greece to 173 minutes per week in Norway).

Figure 1: Overview of data collection and response rate in Switzerland.

Within Switzerland, boys reported significantly more minutes of engagement in sports (190 minutes/week) than girls (135 minutes/week) (Table 5). Non-native Swiss children (mainly when ethnicity was based on the country of origin of their parents) spent significantly less time commuting actively to school, mainly because of less time spent cycling. Girls tended to walk to school more often.

Sedentary behavior and sleeping

Swiss children spent significantly less time on total screen activities (107 min/day), both watching TV and on computer activities, than their European peers (Table 3). The mean time watching TV in Switzerland (79 min/day) and the time spent on computer activities (53 min/day) were well below the ranges reported by the different ENERGY-Partners (ranging from 101 min/day of TV time in Norway to 123 minutes/day in Greece, and from 73 min/day of computer activities in Spain to 94 minutes/day in Hungary). Within Switzerland, boys, children from a lower socio-economic background and non-native children spent significantly more time on screen activities than their counterparts (Table 6).
Gender differences were more pronounced for computer activities, whereas differences with respect to socio-economic background and ethnicity were observed for both TV and computer activities (Table 6). Mean sleep duration of Swiss children was significantly higher compared to that reported by the European ENERGY-Partners (Table 3). Within the Swiss sample, no difference was observed with respect to gender, socio-economic background or ethnicity (Table 6).

Discussion

The prevalence of objectively assessed overweight and obesity among Swiss children was significantly lower than across the other European countries in the ENERGY consortium. Reported physical activity and screen-viewing behaviors were more favorable, whereas dietary habits were similar. Within the Swiss sample, ethnicity was more strongly related to differences in overweight prevalence, dietary habits, active commuting and screen activities than parental education or gender. The prevalence of overweight and obesity observed in the present study is in line with more recent Swiss studies based on measured weight and height [21,22] but clearly higher than the rates of the HBSC study, which were based on self-reports of the children [14]. Studies assessing time trends of childhood overweight prevalence in Switzerland based on measured weight and height documented a strong increase during the nineties of the last century [23] and a 'leveling off' or even a decrease since the beginning of the new century [2,21-23]. This trend is in line with the results of a recent review analyzing data of 52 studies from Australia, Europe, Japan and the USA [3]. Within the Swiss sample, no statistically significant gender difference in the prevalence of overweight and obesity was found, but ethnicity appeared as a strong risk factor, as previously reported [21,23-25]. However, non-native Swiss children had a significantly lower BMI, waist circumference and proportion of overweight than the average across the ENERGY countries. In contrast to the previous report of the HBSC study [14], the present study showed that Swiss children accumulated significantly more minutes of active commuting and of participation in sports activities than the other ENERGY-Partners. These results are supported by the recently published accelerometer measurements of the ENERGY project [26], indicating that Swiss children spent significantly more minutes in moderate to vigorous physical activity than children from the other European countries. There might be several explanations for these findings. First, all schools in Switzerland are legally obligated to provide 3 physical education sessions per week, and although children only spend one third of these lessons in moderate to vigorous physical activity, these lessons significantly increase children's accelerometer-based MVPA levels during school time [27]. Secondly, there is a national sports promotion program, 'Youth and Sport (Y+S)', which offers optional physical education sessions after school as well as courses and sports camps for children and fosters children's integration into sports clubs [28]. Third, a vast proportion of Swiss children still commutes actively to school [29,30]. The results of the present study might even underestimate the time spent in active commuting, as they are based on the ENERGY algorithm for calculating total commute time, which assumes two trips per day.
Yet, children in Switzerland usually return home for lunch and therefore travel up to four times a day to or from school. Active commuting to school is popular in Switzerland, since more than 95% of Swiss children attend the public school located closest to their homes [29] and there is no free school choice. The short distances facilitate walking or cycling to school. In addition, 64% of parents reported perceiving their children's way to school as safe [30]. Safety concerns of Swiss parents were mostly related to dangers from traffic (85%) and less often to violence and harassment (<10%) [30], contrasting with reports, e.g. from the UK, where a large proportion of parents were worried about abduction or molestation [31]. Swiss children also indicated spending less time on screen activities than their peers in the seven European ENERGY-Partner countries, as also reported by the HBSC study [14]. Screen-viewing behaviors are usually assessed as an indicator of physical inactivity. Yet, recent comparisons with accelerometer-derived sedentary time clearly showed that self-reported TV and computer time did not correlate well with objective measures [32,33]. TV viewing may not be a good indicator of physical inactivity, but it has been linked to unhealthy eating behaviors, such as lower fruit and vegetable intake, higher sugar-sweetened beverage consumption, snacking and higher fast food intake [34], which in turn are related to overweight. Interestingly, the EBRB of non-native Swiss children were in most aspects comparable to the behaviors of native Swiss children (active transport, sleep duration, screen activities), indicating some adoption of local behaviors. An exception was cycling, where non-native Swiss children more closely resembled those from the southern European ENERGY-Partners (Greece, Spain).

Strengths and limitations

An obvious limitation and potential source of bias of the present study is the low participation rate. First, the relatively small sample size reduced our ability to detect statistically significant differences between sub-groups. Second, overweight/obesity and EBRB rates may be underestimated if participation in the study was dependent on overweight status or behavioral patterns. To address this question, we evaluated whether the response rate in a given school was associated with the prevalence of overweight, assuming that schools with a low participation rate would show a lower prevalence of overweight. Response rates in our samples varied between 20% and 81% and were subdivided into quartiles. The quartile with the lowest participation rate (1st quartile) yielded an overweight prevalence of 25.5%, the 2nd quartile 15.6%, the 3rd quartile 7.2%, and the 4th quartile a rate of 13.1%, thus giving no evidence of a systematic under-representation of overweight children due to selective participation. Third, the assessments of dietary habits, physical activity and sedentary behavior were self-reported and depended upon the respondents' recall and ability to give correct answers. We thus evaluated the test-retest reliability and construct validity of the respective questions and found good levels of agreement, similar to those of the ENERGY-Partners [19]. Finally, the three regions included in the ENERGY study were all from the German-speaking part of Switzerland (63.7% of the Swiss population), limiting the generalization to the whole country.
A clear strength of the present study was the use of the standardized ENERGY protocol, allowing international comparisons, and the inclusion of objectively measured weight and height data.

Conclusions

It can be concluded that the prevalence of overweight in Switzerland is substantial but lower than in other countries across Europe. Children from Switzerland engaged less frequently in the unfavorable energy-balance related behaviors associated with overweight and obesity. However, considerable differences were observed within Switzerland, with children of non-native ethnicity more likely to be overweight and obese. Socio-economic and cultural aspects need to be taken into account in planning preventive health interventions.

Additional files

Additional file 1: Dietary habits, physical activity and sedentary behaviors of the Swiss sample and the ENERGY-Partners. Medians and 25-75% percentiles of dietary habits, physical activity and sedentary behaviors of the Swiss sample and the ENERGY-Partners.

Additional file 2: Dietary habits of the Swiss sample stratified by gender, parental education and ethnicity. Medians and 25-75% percentiles of dietary habits for the total Swiss sample stratified for boys and girls, for children with low and high educated parents and for native and non-native children.

Additional file 3: Physical activity behaviors of the Swiss sample stratified by gender, parental education and ethnicity. Medians and 25-75% percentiles of physical activity behaviors for the total Swiss sample stratified for boys and girls, for children with low and high educated parents and for native and non-native children.
miR-133a-5p Inhibits Glioma Cell Proliferation by Regulating IGFBP3

Objective: This research aims to investigate the expression of miR-133a-5p in glioma tissues and its impact on glioma cell proliferation.

Methods: Fluorescence-quantitative PCR was used to detect the expression of miR-133a-5p in 25 cases of glioma and adjacent tissues. CCK-8 and colony formation analyses were used to evaluate the impact of transfection with miR-133a-5p inhibitors or mimics on glioma cell growth and colony formation. The binding sites between miR-133a-5p and IGFBP3 (insulin-like growth factor-binding protein 3) were predicted using Starbase, and the binding capacity of miR-133a-5p to the 3'UTR of the IGFBP3 gene was determined using a luciferase reporter system. Following transfection with miR-133a-5p mimics or inhibitors, IGFBP3 protein expression in glioma cells was determined by western blotting. A colony formation assay was applied to evaluate the influence of IGFBP3 overexpression on the effect of miR-133a-5p on glioma cell proliferation. For assessment of IGFBP3 expression in glioma tissues and prognosis, the TCGA database was employed.

Results: The expression of miR-133a-5p was considerably reduced in glioma tissue compared to adjacent control tissue. In addition, miR-133a-5p expression decreased with increasing glioma malignancy. Glioma cell growth and colony formation were reduced after miR-133a-5p mimics were transfected, while transfection of miR-133a-5p inhibitors had the reverse impact. The expression of IGFBP3 was regulated by miR-133a-5p through binding to its 3'UTR region. Additional analysis demonstrated that the overall survival (OS) of subjects with increased IGFBP3 expression was considerably lower compared to patients with decreased IGFBP3 expression. IGFBP3 overexpression effectively counteracted the proliferation-inhibiting effect of miR-133a-5p on glioma cells.

Conclusion: miR-133a-5p acts as a glioma tumor suppressor gene. It reduces glioma cell proliferation by modulating IGFBP3 and could be a target for glioma therapy.

Introduction

Glioma arises from the glial cells surrounding neurons and is the most prevalent tumor of the central nervous system (CNS) found in clinical settings. Among all gliomas, glioblastoma has an extremely high recurrence rate and is responsible for roughly 80% of all aggressive brain tumors. Glioblastoma is one of the malignant tumors with the poorest prognostic outcome [1,2]. Despite significant advances in surgery, radiotherapy, and chemotherapy in recent decades, the prognosis of glioma remains poor, with a mean survival time of only 14.6 months [3,4]. Various fundamental and clinical research studies have revealed that glioma is a polygenic illness and that its onset and progression are controlled by numerous genes [5,6]. Therefore, understanding the interactions among relevant factors at the level of gene regulation and looking for additional molecular candidate genes have become important for glioma therapy. MicroRNA (miRNA) is a kind of non-coding single-stranded RNA (ncRNA) that can attach to the 3'UTR domain of target genes and restrict gene expression at the post-transcriptional level, ultimately leading to mRNA breakdown and translational reduction [7,8]. Prior research has shown that miRNAs play a significant role in controlling cell differentiation, proliferation, apoptosis, and tumor growth and development [9]. miRNAs are implicated in all tumor-related activities, including the immunological response and angiogenesis.
They can enhance or suppress cancer growth by blocking the production of specific molecules in signaling networks [7,10]. For instance, miRNA-451 can modulate the NF-κB signaling cascade by activating IKKβ, reducing glioma cell proliferation both in vivo and in vitro [11]. miR-23b-5p increases the sensitivity of glioma to temozolomide treatment by negatively regulating TLR4 expression [12]. These findings suggest that miRNAs play important roles in the incidence, drug resistance, and progression of glioma. Recent research has indicated that miR-133a-5p is a tumor suppressor gene that is poorly expressed in a range of tumor tissues and plays an antitumor role by modulating downstream target genes in gastric cancer, bladder cancer, and non-small-cell lung cancer. For example, miR-133a-5p is modestly expressed in gastric cancer cell lines and tissues; analysis of its molecular mechanism has revealed that miR-133a-5p suppresses metastasis and cell growth while promoting apoptosis by targeting TCF4 [13]. Furthermore, in prostate cancer cells, miR-133a-5p reduces cellular invasiveness and proliferation by targeting the androgen receptor (AR) [14]. Other molecules can also control the regulatory effect of miR-133a-5p in tumor cells; for example, the circRNA circP4HB increases the metastasis and invasiveness of non-small-cell lung cancer (NSCLC) by sponging miR-133a-5p [15]. However, the expression and regulatory effects of miR-133a-5p in glioma tissue are unknown. This study sought to determine the expression of miR-133a-5p in glioma cells and tissues. Moreover, the impact of miR-133a-5p overexpression or inhibition on glioma cell proliferation was investigated, and the regulatory link between IGFBP3 and miR-133a-5p was validated. This study will provide a theoretical foundation for the clinical treatment of glioma.

Research Materials

Twenty-five participants who underwent glioma excision at our hospital between June 2020 and January 2021 were included in this study. The glioma and para-carcinoma specimens (~5 cm away from the tumor tissue edge) were obtained and preserved in a freezer at −80°C. There was no preoperative chemotherapy, radiotherapy, immunotherapy, targeted therapy, or other relevant treatment. All participants signed an informed consent form. All specimens were obtained and handled following the ethical standards of clinical trials.

Cell Culture and Transfection

The Shanghai Cell Bank (Chinese Academy of Sciences) provided normal astrocyte (NHAS) cells and the human glioma cell lines T98MG, U251, and U87. These cells were maintained in DMEM containing 10% FBS. The culture conditions were a CO2 concentration of 5%, an incubation temperature of 37°C, a relative humidity of 95%, and incubation in darkness. Cells were subcultured when they reached an appropriate density (~80% confluence). Cells in the logarithmic growth phase were seeded into 6-well plates (1 × 10⁶ cells per well). Transfection was performed when cell confluence reached about 60%, per the instructions of the Lipofectamine 2000 transfection kit.

Fluorescence-Quantitative PCR Detection

The TRIzol method was utilized for RNA extraction from para-carcinoma tissues, glioma tissues and the glioma cell lines. The concentration and purity of the RNA were determined using a spectrophotometer. Using a reverse transcription kit, the mRNA was reverse-transcribed into cDNA. miR-133a-5p and internal reference U6 primers were added after the reverse transcription, and the products were then amplified in an ABI fluorescence-quantitative PCR instrument.
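Relative expression in the qPCR experiments is computed with the 2^(−ΔΔCt) method referenced in the next paragraph; the sketch below uses hypothetical Ct values and is not taken from the study's data.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^(-ddCt) relative quantification: the target (miR-133a-5p) is first
    normalized to the internal reference (U6), then to the control sample."""
    d_ct_sample = ct_target - ct_ref              # normalize to U6
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: miR-133a-5p amplifies two cycles later in tumor
# tissue than in adjacent tissue at equal U6 levels -> ~4-fold lower expression.
print(relative_expression(ct_target=28.0, ct_ref=20.0,
                          ct_target_ctrl=26.0, ct_ref_ctrl=20.0))  # 0.25
```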
CCK-8 Assay. Cells were harvested and enumerated following transfection. Cells were kept in a 5% CO2 incubator at 37°C and inoculated into a 96-well plate with 100 μL (1 × 10^3 cells) per well. At four time points (24, 48, 72, and 96 h) following transfection, 10 μL of CCK-8 solution was added to each well 2 h before detection, and a microplate reader was used to quantify the absorbance. Each sample was assayed in triplicate.

Colony Formation Assay. After transfection, 0.25% trypsin was applied to digest the glioma cells in the log growth phase; the cells were dispersed into a single-cell suspension and centrifuged at 1000 rpm at room temperature for 5 min. The supernatant was discarded, and the remaining cells were enumerated after resuspension. A total of 500 cells were plated into a six-well plate, the medium was replaced every two to three days, and cell development was monitored continuously. After around 2 weeks, the medium was removed, 4% paraformaldehyde was applied for 15 min, and the cells were stained with 1% crystal violet solution. Cell pictures were collected. A cell mass with ≥50 cells was counted as a colony, and the colony formation rate was analyzed. The procedure was repeated three times.

Dual-Luciferase Reporter Gene. The Starbase online prediction software was applied to identify probable downstream miR-133a-5p target genes. It was found that miR-133a-5p had binding affinity with the 3'-UTR region of IGFBP3, and a dual-luciferase reporter gene vector was constructed accordingly. Glioma cells were harvested and plated at a density of 5 × 10^4 cells per well in a 24-well plate. The empty vector and the reporter vector were prepared with miR-133a-5p mimics and control, respectively, to form transfection mixtures. After adding the transfection mixture to the cells, they were kept in a 37°C incubator with 5% CO2. After being cultured for 12 h, the cells were kept in complete medium for a further 48 h. The cells were lysed per the Promega dual-luciferase reporter gene detection kit protocol, and luciferase activity was determined by adding a fluorescent substrate.

Western Blot Analysis. The expression of IGFBP3 protein in glioma cells transfected with miR-133a-5p inhibitors or mimics was analyzed using the western blot technique. Equal amounts of cells from each transfection group were taken, and the SDS lysis method was applied to extract total protein. After protein denaturation, SDS-PAGE electrophoresis was performed and the proteins were transferred to a membrane, which was then blocked overnight with 3% BSA. The primary antibody (1:800 dilution) was applied and kept at 4°C overnight before washing the membrane. The HRP-labelled secondary antibody was added and incubated at room temperature for 1 h before washing the membrane and performing ECL chromogenic exposure. After imaging, the absorbance values of each band on the film were quantified using a density scanner.

Statistical Data Analysis. The SPSS software (v.19.0) was used for data analysis. The statistical data are described as mean ± SD. An independent-samples t-test was used to compare measured values between two groups, and ANOVA was applied to test for differences among multiple groups. Statistical significance was defined as a P value of less than 0.05.
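As a sketch of these two tests in code (using invented absorbance values, not the study's measurements), the corresponding SciPy calls are:

```python
import numpy as np
from scipy import stats

# Invented absorbance readings for three transfection groups (placeholders).
control = np.array([1.00, 0.95, 1.05, 0.98])
mimic = np.array([0.62, 0.66, 0.58, 0.60])
inhibitor = np.array([1.35, 1.42, 1.30, 1.38])

# Two groups: independent-samples t-test.
t, p_t = stats.ttest_ind(control, mimic)
print(f"t = {t:.2f}, p = {p_t:.4f}")

# More than two groups: one-way ANOVA.
F, p_f = stats.f_oneway(control, mimic, inhibitor)
print(f"F = {F:.2f}, p = {p_f:.4f}")
```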
Results

miR-133a-5p Was Expressed at a Low Level in Glioma Tissues and Cell Lines. First, fluorescence-quantitative PCR determined the expression of miR-133a-5p in glioma and para-carcinoma tissues. The analysis indicated that glioma tissue had less miR-133a-5p than para-carcinoma tissue (Figure 1(a)). The glioma tissues were then subdivided into high-grade and low-grade glioma tissues. The findings demonstrated that miR-133a-5p expression was reduced as the malignant degree of the glioma increased (Figure 1(b)). Furthermore, compared to normal astrocyte (NHAS) cells, the expression level of miR-133a-5p was lowered in glioma cell lines, particularly in U87 cells (Figure 1(c)). These results suggested that the expression of miR-133a-5p is reduced in glioma cells and tissues.

The Effect of miR-133a-5p on Glioma Cell Proliferation. The initial experimental results suggested that the expression of miR-133a-5p was lower in U87 cells than in T98MG cells. U87 cells were transfected with miR-133a-5p and control mimics, and fluorescence-quantitative PCR detection revealed that miR-133a-5p mimic transfection substantially enhanced the miR-133a-5p expression level in cells compared to the control (Figure 2(a)). In addition, T98MG cells were transfected with the miR-133a-5p inhibitor and the control. Compared to the control, transfection of the miR-133a-5p inhibitor significantly reduced miR-133a-5p expression in the cells (Figure 2(b)). Compared to the control, miR-133a-5p mimic transfection decreased cell growth considerably in U87 cells, as revealed by the MTT assay findings (Figure 2(c)). Compared to the control, transfection with the miR-133a-5p inhibitor greatly increased cell proliferation in T98MG cells (Figure 2(d)). Furthermore, the colony formation experiment indicated that miR-133a-5p mimic transfection considerably lowered the competence of U87 cells to form colonies when compared to the control (Figure 2(e)). Transfection with the miR-133a-5p inhibitor substantially elevated the ability of T98MG cells to form colonies when compared to the control (Figure 2(f)).

Figure 2. (a, b) Fluorescence-quantitative PCR detected the miR-133a-5p expression level in U87 and T98MG cells following transfection with miR-133a-5p mimics or inhibitors. (c, d) In U87 and T98MG cells, the MTT assay was used to examine how transfection of miR-133a-5p mimics or inhibitors affected cellular proliferation. (e, f) A colony formation test was utilized to examine the impact of transfection of miR-133a-5p mimics or inhibitors on colony formation ability in U87 and T98MG cells (*P < 0.05 and **P < 0.01).

miR-133a-5p Regulates the IGFBP3 (Insulin-Like Growth Factor-Binding Protein-3) Expression in Glioma Cells. The TargetScan prediction software revealed that miR-133a-5p has a binding site in IGFBP3 (Figure 3(a)). The luciferase activity of U87 cells in the miR-133a-5p mimic and IGFBP3 Wt co-transfection group was substantially lower than that in the control and IGFBP3 Wt co-transfection group, according to the findings of the dual-luciferase reporter analysis. Compared with the control mimics and IGFBP3 Mut co-transfection group, the luciferase activity of U87 cells in the miR-133a-5p mimic and IGFBP3 Mut co-transfection group did not change significantly (Figure 3(b)). According to the western blotting assay results, IGFBP3 protein expression in U87 cells was decreased in the miR-133a-5p mimic group compared to the control group (Figure 3(c)). In comparison to the control group, IGFBP3 protein expression in T98MG cells was dramatically increased in the miR-133a-5p inhibitor group (Figure 3(d)).

IGFBP3 Was Expressed in Gliomas and Could Be a Biomarker for Patient Prognosis. To evaluate the expression of IGFBP3 in glioma, the expression of IGFBP3 in glioma and para-cancerous brain tissues was determined by fluorescence-quantitative PCR. The findings confirmed that IGFBP3 was overexpressed in glioma tissues compared to para-cancerous tissues (Figure 4(a)). Analysis of published data from the TCGA database on IGFBP3 expression in glioma revealed that IGFBP3 was strongly elevated in glioma tissues (low-grade glioma (LGG) and glioblastoma multiforme (GBM)) compared to para-cancerous tissues (Figure 4(b)). Significantly shorter overall survival (OS) and disease-free survival (DFS) were seen in patients with high IGFBP3 expression compared to those with low IGFBP3 expression (Figures 4(c) and 4(d)).
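This kind of survival comparison can be reproduced in outline with the lifelines package. The cohort below is simulated (exponential survival times, random censoring), so the group sizes and numbers are placeholders rather than TCGA data; only the workflow (Kaplan-Meier curves plus a log-rank test) mirrors the analysis described above.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Simulated cohorts split by IGFBP3 expression (placeholder data, not TCGA).
t_high = rng.exponential(20.0, 100)   # survival months, IGFBP3-high group
t_low = rng.exponential(40.0, 100)    # survival months, IGFBP3-low group
e_high = rng.random(100) < 0.8        # True = death observed, False = censored
e_low = rng.random(100) < 0.8

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="IGFBP3 high")
ax = kmf.plot_survival_function()
kmf.fit(t_low, event_observed=e_low, label="IGFBP3 low")
kmf.plot_survival_function(ax=ax)

res = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p = {res.p_value:.3g}")
```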
IGFBP3 Overexpression Can Mitigate the miR-133a-5p Inhibitory Effect on the Proliferation of Glioma Cells. The western blot analysis showed that, compared with the vector group, the miR-133a-5p mimics reduced IGFBP3 expression, confirming suppression of the IGFBP3 gene by miR-133a-5p. Following transfection with an overexpression vector (Vector-IGFBP3) that overexpressed IGFBP3, the inhibition of IGFBP3 by miR-133a-5p overexpression was dramatically reduced in the miR-133a-5p mimic group (Figure 5(a)). Compared to the vector group, the miR-133a-5p mimics effectively suppressed cell colony formation. Similarly, the inhibitory effect of miR-133a-5p overexpression on cell colony formation was relieved after transfection with the overexpression vector (Vector-IGFBP3) (Figure 5(b)).

Discussion

Glioma is the most frequent recurrent malignant brain tumor. Due to the unsatisfactory effect of surgery, radiotherapy, and chemotherapy and its poor prognosis, gliomas seriously endanger human health [16,17]. According to previous research, miRNAs contribute significantly to the transduction of intracellular signaling pathways and to the initiation and progression of gliomas by modulating the expression of target genes, and they are a special class of prospective indicators for targeted therapies [18,19]. According to this study's results, miR-133a-5p was expressed at a low level in glioma tissues, and its expression level decreased substantially as glioma malignancy progressed. Cell function experiments demonstrated that miR-133a-5p overexpression greatly decreased glioma cell proliferation and colony formation, whereas the inhibition of miR-133a-5p had the opposite effect. Molecular mechanistic studies demonstrated the binding capacity of miR-133a-5p with the 3'-UTR region of the IGFBP3 gene, influencing its expression. IGFBP3 overexpression can drastically counteract the inhibitory activity of miR-133a-5p on glioma cell growth. Therefore, miR-133a-5p could be utilized as a potential target for glioma therapy.

The latest evidence suggests that abnormal miRNA expression is closely related to the development and proliferation of tumor cells. For example, the overexpression of miR-191 can enhance cell proliferation both in vivo and in vitro by negatively regulating the expression of NDST1 in human glioblastoma tissues and cells [20]. miR-758-5p targets ZBTB20 expression, which reduces glioblastoma invasion, migration, and proliferation [21]. Here, miR-133a-5p was found to be weakly expressed in glioma tissues. Cell proliferation was inhibited considerably by miR-133a-5p overexpression, while miR-133a-5p suppression enhanced cell proliferation.
The target gene of miR-133a-5p was identified as IGFBP3 using bioinformatic prediction analyses in this study, which aimed to expand our understanding of the mechanism of miR-133a-5p in glioma cell proliferation. IGFBP3 belongs to a class of intracellular molecules with numerous regulatory functions that play various roles in many cancers. IGFBP3 is abundantly expressed in nasopharyngeal cancer tissues, which correlates with poor prognosis and tumor metastasis; overexpression of IGFBP3 can promote cell proliferation and migration [22]. In cervical cancer, IGFBP3 inhibits tumor angiogenesis by intracellular regulation of THBS1 expression [23]. In hepatocellular carcinoma cells, overexpression of IGFBP3 induces apoptosis and reduces colony formation [24]. This study demonstrated that IGFBP3 is a target gene of miR-133a-5p and that increasing miR-133a-5p can inhibit glioma cells from expressing IGFBP3 protein. In this study, an IGFBP3 overexpression vector and a miR-133a-5p mimic were co-transfected into glioma cells. It was revealed that IGFBP3 overexpression reverses the inhibitory effect of miR-133a-5p on glioma cell proliferation. This work established the role of the IGFBP3/miR-133a-5p axis in glioma cell proliferation at the cellular level in vitro. More in vivo research is required for further exploration.

In conclusion, the proliferative potential of glioma cells is inhibited by miR-133a-5p, which reduces IGFBP3 synthesis. The results of this study encourage the development of novel clinical therapies targeting miR-133a-5p.

Data Availability. The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest. The authors declare that they have no conflicts of interest.
Tunneling singularities in the open Hubbard chain

We study singularities in the I-V characteristics for sequential tunneling from resonant localized levels (e.g. a quantum dot) into a one-dimensional electron system described by a Hubbard model. Boundary conformal field theory together with the exact solution of the Hubbard model subject to boundary fields allows one to compute the exponents describing the singularity arising when the energy of the local level is tuned through the Fermi energy of the wire, as a function of electron density and magnetic field. For boundary potentials with bound states a sequence of such singularities can be observed.

Introduction

Electronic correlations together with strong quantum fluctuations are known to determine the low temperature properties of quasi one-dimensional conductors. Theoretical investigations using integrable lattice realizations of these Tomonaga-Luttinger liquids (TLL), together with numerical studies and field theory approaches such as bosonization, have provided much of the insight into the peculiar properties of such systems. Experimental evidence for TLL behaviour, on the other hand, is still rare in spite of the tremendous progress in the synthesis of quasi one-dimensional materials and the fabrication of nano-structures in which the transport of electrons is confined to a single one-dimensional channel [1-3]. One reason for this is that the theoretical work on TLLs has concentrated on low energy bulk properties of perfect, infinitely long systems which are difficult to access experimentally. Recently, studies of the response of a TLL to local perturbations have become feasible due to the construction of integrable models with open boundaries and a better understanding of quantum field theories in the presence of a boundary. Local inhomogeneities may have a profound effect on the transport properties of one-dimensional interacting electron gases, even leading to phases with vanishing transmission through a barrier [4-8]. Finite size effects in the resulting open chains have been studied to understand the possible experimental consequences of TLL properties in such systems (see e.g. [9-14]). Among the possible consequences of local perturbations are Fermi edge singularities, which may be observed in X-ray absorption amplitudes and, as will be discussed in this paper, in tunneling experiments [15-18]. In both cases the nature of the singularities is strongly affected by the properties of the TLL. In this paper we study tunneling from a resonant localized level into a TLL in the limit of low barrier conductance, as observed in the current-voltage characteristic at zero bias. As a specific representation of the latter we choose the one-dimensional Hubbard model. In this lattice model we find, similar to previous work on optical absorption processes [19], a rich spectrum of edge singularities due to the existence of bound states reflecting the separation of spin and charge in the TLL [20]. An experimental realization of the tunneling processes under investigation might be a quantum dot (providing the localized level) coupled to a quantum wire. The reservoir supplying charges to fill the state in the quantum dot is left unspecified. The energy E_i of the local level can be tuned by varying a gate voltage of the dot.
The only influence of the quantum dot on the Luttinger liquid considered below is the electrostatic interaction with its net charge (we consider the dot occupied with a single electron to be electrically neutral). This description is very similar to the model invoked to describe the X-ray edge singularity in metallic systems [21] and has previously been used to study tunneling from a resonant local level into two- and three-dimensional systems [22]. These considerations lead to the Hamiltonian (1.1), where b† (b) are canonical fermionic creation (annihilation) operators for a spin-↑ electron in the localized state and c†_{jσ} creates an electron of spin σ on site j of the one-dimensional chain. The chemical potential μ and magnetic field h = gμ_B H allow one to control the filling factor and magnetization of the quantum wire. Upon variation of the gate voltage, tunneling between the local level and the wire becomes possible if the energy E_i of the local level exceeds the Fermi energy. We restrict ourselves to the case where the barrier conductance is low. Hence, the transport is dominated by incoherent sequential tunneling processes and we can neglect Coulomb blockade effects and higher order processes such as "cotunneling" [23]. Within the "orthodox theory" [24] the current due to sequential tunneling is computed by application of the golden rule, leading to Eq. (1.2). Here |0̃⟩ = b†|0⟩ denotes the ground state of the open Hubbard chain in the N_e-particle sector with the local level occupied and hence vanishing boundary potential p. The sum in (1.2) extends over all eigenstates |n⟩ of the chain in the (N_e + 1)-particle sector in the presence of the boundary potential p. Eq. (1.2) can be rewritten as a Fourier integral (1.3). Near the threshold E_i ≈ E_th the intensity exhibits a characteristic singularity,

I(E_i) ∝ θ(E_i − E_th) |E_i − E_th|^(−α).   (1.4)

For non-interacting electrons the exponent α can be expressed in terms of the phase shift at the Fermi surface [22]. As in the case of the X-ray edge singularity, one expects several thresholds if the electrostatic potential p is strong enough to form bound states in the TLL [25,26]. In this paper we want to study this problem for tunneling into a TLL, where an additional dependence of the exponent α on the interaction parameters of the Luttinger liquid in (1.1) (i.e. electron density, magnetization and strength of the Hubbard interaction 4u) is to be expected from the results obtained for the related X-ray problem (see Refs. [19,27,28]). In the following section we summarize the relevant properties of the model (1.1) obtained from its Bethe Ansatz solution. From this solution, combined with results from boundary conformal field theory (BCFT) [28-30], we extract the spectrum of thresholds and the corresponding exponents α.

Bethe Ansatz Solution of the model

The Bethe Ansatz equations (BAE) (2.1) determine the spectrum of H with empty local state (i.e. with boundary chemical potential p) in the N_e-particle sector with magnetization M = (1/2)N_e − N_↓ [31-33], where one should identify k_{−j} ≡ −k_j and λ_{−α} ≡ −λ_α. The boundary phase shifts appearing in the BAE are given in (2.2). The energy of the eigenstate of Eq. (1.1) corresponding to a solution of the BAE is given by (2.3). In Refs. [31-33] the ground state and the low-lying excitations of this model were studied for small boundary fields. In [20] the existence of boundary states for |p| > 1 has been established. In the Bethe Ansatz solution these bound states manifest themselves as additional complex solutions for the charge and spin rapidities. In Fig. 1 the spectrum of bound states for u = 1 is shown. Using standard procedures, the BAE for the ground state and low-lying excitations in the thermodynamic limit can be rewritten as linear integral equations (2.4) for the densities ρ_c(k) and ρ_s(λ) of the real quasi-momenta k_j and spin rapidities λ_α, respectively, with kernel K given by (2.5). Here we have introduced

a_y(x) = (1/2π) y / (y²/4 + x²),

and f * g denotes the convolution ∫_{−A}^{A} dy f(x − y) g(y) with boundaries A = k^(0) in the charge and A = λ^(0) in the spin sector. These boundaries are functions of the external chemical potential μ and magnetic field h. Alternatively, in a canonical approach the values of k^(0) and λ^(0) are fixed by counting conditions in which C_c (C_s) denotes the number of complex k- (λ-) solutions present in the ground state [20]. The boundary phase shifts (2.2) and the presence of complex solutions to the BAE determine the driving terms ρ̂⁰_c and ρ̂⁰_s in (2.4). Their explicit form can be found in Refs. [20,31-33]. Denoting the solutions of (2.4) without the constant contribution 1/π to the driving term by ρ̂_c and ρ̂_s, we introduce the shift angles θ^c_p and θ^s_p. Following Woynarovich [34] one can calculate the finite size spectrum (2.8) of the model, reproducing the result of [31]. Here Le_∞ and f_∞ denote the bulk and boundary energy, N⁺_{c,s} are non-negative integers counting the number of particle-hole excitations at the Fermi points, and v_{c,s} are the Fermi velocities of the massless charge and magnetic modes. ΔN_{c,s} specify the quasi-particle content of the state; in a TLL these are holons/anti-holons in the charge sector and spinons in the magnetic sector of the theory. The dressed charge matrix Z [34-36] is defined in terms of the integral equation (2.10).
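Linear integral equations of the type (2.4) are Fredholm equations of the second kind and can be solved numerically by discretizing the convolution on [−A, A]. The sketch below is a schematic single-channel stand-in: the actual system couples ρ_c and ρ_s, and the driving term used here is only a placeholder; the kernel a_y(x) is the one defined above.

```python
import numpy as np

def a(y, x):
    """Kernel a_y(x) = (1 / 2pi) * y / (y^2/4 + x^2), as defined in the text."""
    return y / (2.0 * np.pi * (y**2 / 4.0 + x**2))

def solve_density(rho0, A, y=2.0, n=801):
    """Nystrom (trapezoidal) solution of rho(x) = rho0(x) + (a_y * rho)(x)
    on [-A, A]. A single-channel sketch of the coupled equations (2.4);
    A plays the role of the Fermi boundary k0 or lambda0."""
    x = np.linspace(-A, A, n)
    h = x[1] - x[0]
    w = np.full(n, h)
    w[0] = w[-1] = h / 2.0                  # trapezoidal quadrature weights
    K = a(y, x[:, None] - x[None, :]) * w   # discretized convolution operator
    rho = np.linalg.solve(np.eye(n) - K, rho0(x))
    return x, rho

# Placeholder driving term: constant bulk part 1/pi plus a boundary-like term.
x, rho = solve_density(lambda x: 1.0 / np.pi + a(2.0, x), A=1.5)
```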
Results from boundary conformal field theory allow one to extract the exponent α in Eq. (1.4) from the finite size spectrum (2.8) [28-30]: the Green's function of a boundary operator O with dimension x on the complex half plane decays as a power law governed by x. Conformal mapping of the half plane onto a strip of finite width L allows one to extract the scaling dimension of the boundary changing operator O from the finite size spectrum (2.8) by taking differences of the energy E⁰_A of the system's ground state |A⟩ in the N_e-particle sector without boundary potential and the energy Eⁿ_B of the lowest excitation |B, n⟩ in the (N_e + 1)-particle sector with boundary potential p and non-vanishing form factor |⟨B, n|O†|A⟩|² (see Refs. [19,20,35,36]). We now want to study the exponents at the several possible thresholds. To gain some more insight into the role of the boundary states we begin with a discussion of non-interacting fermions.

Ferromagnetic case

For sufficiently large magnetic field the electrons are polarized ferromagnetically; hence an explicit expression is available for the wave function in terms of a Slater determinant of single-particle states. For |p| < 1 these are plane waves corresponding to real wave numbers k, and we expect a single threshold. The corresponding edge exponent (3.4) is a function of p and the density of electrons n_e = N_e/L (see also Ref. [37]). These predictions can be checked by studying the finite-size behaviour of the form factors. From the conformal mapping mentioned above one expects the scaling (3.5), where |p⟩ and |p̃⟩ denote the ground state and the lowest state with empty bound state in the (N_e + 1)-particle sector with boundary potential p > 1. Note that the exponent x^r_p vanishes in the limit p → 0.
This coincides with the exact result for I(E_i). The situation changes completely for p < 0, where the ground state is always parametrized by real wave numbers k, giving a negative exponent α at the absolute threshold. Occupation of the anti-bound state corresponding to a complex k leads to a positive exponent. For the X-ray edge problem one can show that the functional dependence of J(E) is nearly unchanged by increasing the system size [38,39]. This allows one to extract quantitative information from rather small systems (L = 80 in the present case) by fitting J(E) to a power-law trial function with exponent c. The resulting exponent c can be compared to the BCFT results. Using the 600 lowest states contributing to the sum (1.2) we obtain good agreement with the CFT results (see Fig. 5) for p > 1 and electron densities n_e ≲ 0.4. For larger densities a bigger discrepancy between the numerical results and the BCFT predictions is found due to stronger finite size effects. The second threshold due to the presence of a bound state is most pronounced for p < −1. Here the jump of J(E) characteristic of a positive edge exponent occurs at the second threshold (see Fig. 3). While the BCFT result for the edge exponent at the absolute threshold is α_abs = α_r = −0.33, the fit to the numerical data gives α_abs = −0.46; this indicates that a genuine singularity is strongly affected by finite size effects. On the other hand, the numerical value for the positive exponent at the threshold corresponding to the occupied anti-bound state, α_c = 0.988, is in very good agreement with the CFT result α_c = 0.977.

Magnetic field dependence of edge exponents

For vanishing magnetic field one has λ^(0) = ∞, which allows one to solve the spin part of the integral equations by Fourier transformation. As a consequence, the dressed charge matrix Z (2.9) is a function of a single variable ξ = ξ(k^(0)) [34], which is defined by an integral equation in whose solution the digamma function appears. Furthermore, one finds θ^s_p = (1/2) θ^c_p, which yields a closed expression for the exponent at the absolute threshold. In Fig. 6 we present the regions where the exponent α_abs is positive as a function of electronic density n_e and boundary potential p, together with the density dependence of the exponent for some fixed values of p. In a finite magnetic field h the bulk state of the Hubbard model is ferromagnetic below a critical particle density n_c; this density can be calculated from the Bethe Ansatz. For non-vanishing magnetic field we will only consider electron densities above n_c. For h > 0 the exponent at the absolute threshold is given by Eq. (3.11). In Fig. 7 the magnetic field dependence of this exponent is shown for several values of the boundary potential p. Note the characteristic change of this curve near p = p₁ = u + √(u² + 1), where a low-lying excited bound state of a charge and a spinon (corresponding to a complex quasi-momentum k and a complex spin rapidity λ in the set of roots of (2.1) [20]) appears; see Fig. 1. The difference of the limiting values at p = p₁ is Δα_abs = 1. The crossover due to this behaviour is clearly seen in Fig. 7. As before, additional edge singularities arise as a consequence of bound states in the boundary potential. The corresponding exponents can be computed from the Bethe Ansatz equations as above (for details see Ref. [20]). Each of the boundary states seen in Fig. 1 gives rise to a singularity (1.4) in the I-V curve.
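The finite-size fitting procedure described above can be mimicked as follows; the data are synthetic and the power-law trial form is an assumption standing in for the trial function used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
E_th_true, c_true = 0.30, 0.45
E = np.linspace(0.32, 1.0, 60)
# Synthetic "finite-size" data with a little noise (not the paper's data):
J = 2.0 * (E - E_th_true) ** c_true * (1.0 + 0.02 * rng.standard_normal(E.size))

def trial(E, amp, E_th, c):
    """Assumed power-law trial form J(E) ~ amp * (E - E_th)^c above threshold."""
    return amp * np.clip(E - E_th, 1e-12, None) ** c

(amp, E_th, c), _ = curve_fit(trial, E, J, p0=(1.0, 0.25, 0.5))
print(f"fitted threshold E_th = {E_th:.3f}, exponent c = {c:.3f}")
```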
Summary and Conclusion

We have studied the I-V characteristics for tunneling from a resonant localized level into a one-dimensional interacting electron gas described in terms of a Hubbard model (1.1). Compared to tunneling into a higher dimensional system, one finds a rich structure of thresholds due to the presence of various bound states in the many-particle spectrum, each of them leading to a threshold in the I-V curve. A jump at the threshold is characteristic of a singularity with a positive exponent α; its height is given by the matrix element ⟨p|c†_{1,↑}|0⟩, which vanishes in the thermodynamic limit according to Eq. (3.5). This is the well-known orthogonality catastrophe [41,42].
Competing damage mechanisms in a two-phase microstructure: how microstructure and loading conditions determine the onset of fracture

This paper studies the competition of fracture initiation in the ductile soft phase and in the comparatively brittle hard phase in the microstructure of a two-phase material. A simple microstructural model is used to predict macroscopic fracture initiation. The simplicity of the model ensures highly efficient computations, enabling a comprehensive study: a large range of hard phase volume fractions and yield stress ratios, for a wide range of applied stress states. Each combination of these parameters is analyzed using a large set of (random) microstructures. It is observed that only one of the phases dominates macroscopic fracture initiation: at low stress triaxiality the soft phase is dominant, but above a critical triaxiality the hard phase takes over, resulting in a strong decrease in ductility. This transition is strongly dependent on microstructural parameters. If the hard phase volume fraction is small, fracture initiation is dominated by the soft phase even at high phase contrast. At higher hard phase volume fractions, the hard phase dominates already at low phase contrast. This simple model thereby reconciles experimental observations from the literature for specific combinations of parameters, which may have triggered contradictory statements in the past. A microscopic analysis reveals that the average phase distribution around fracture initiation sites is nearly the same for the two failure mechanisms. Along the tensile direction, regions of the hard phase are found directly next to the fracture initiation site. This 'band' of hard phase is intersected through the fracture initiation site by 'bands' of the soft phase aligned with shear. Clearly, the local mechanical incompatibility is dominant for the initiation of fracture, regardless of whether fracture initiates in the soft or in the hard phase.

Objective

This paper studies the competition between the different fracture initiation mechanisms in the microstructure of multi-phase alloys. The focus is on a class of materials that compromise between strength and ductility by employing a microstructure that consists of two or more phases/materials with distinct mechanical properties: a comparatively hard but brittle (reinforcement) phase and a comparatively soft ductile phase. Examples include metal matrix composites, e.g. silicon-carbide particles embedded in an aluminum matrix, and advanced high strength steels, such as dual-phase steel, where the martensite phase reinforces the ferrite matrix. It has been shown, experimentally and numerically, that both phases contribute to fracture, and that their relative contribution depends on the microstructural morphology, the relative amount of the phases, the contrast of mechanical properties between them, and the applied stress state. This paper studies the competition of the different microstructural damage mechanisms, and the consequences for the macroscopically observed fracture properties (i.e. strength and ductility), as a function of these parameters (hard phase volume fraction, phase contrast, and stress state). This furthermore enables the characterization of the quintessential phase distribution around fracture initiation sites. The results are compared to observations from the literature, for a specific combination of parameters.
A computational multi-scale approach is proposed which predicts macroscopic fracture initiation as a natural outcome of unit cell computations. The model is chosen quite general, relevant for a wide range of materials in which the considered micro-damage mechanisms are present. It is composed of a microstructure with two phases: a (quasi-)brittle hard phase that is embedded in a ductile soft matrix. The hard phase is thereby able to accommodate a small amount of plastic deformation before fracture, significantly less than the soft phase. To include sufficient statistical fluctuations, the adopted multi-scale approach is constructed to be computationally inexpensive by incorporating only the essential micromechanics. It uses simple indicators for both fracture initiation mechanisms: for the hard phase the Rankine model for cleavage [1-3], and for the soft phase the Johnson-Cook model for ductile damage [4]. The microstructure is represented by an ensemble of two-dimensional periodic volume elements that each consist of equi-sized square cells in which the phases are randomly distributed. The ensemble, comprising many random microstructures, is assumed macroscopically representative in an average sense. The adopted simplifications with respect to reality enable a systematic study, as (1) the computations are fast (few finite elements are needed for accurate discretization) and thus allow the comparison of many different microstructures for a large range of parameter variations, (2) variations in terms of composition (i.e. volume fraction) are well controlled, and (3) the identification of the average distribution of phases around fracture initiation sites is well defined and transparent.

State of the art

The macroscopic stiffness and yielding of the considered class of multi-phase materials as a function of the constituents is reasonably well understood and can be predicted using a variety of models, ranging from a simple rule of mixtures to involved multi-scale computations [5-10]. Most of these models however provide limited accuracy or insight when it comes to failure. The state of the art for metal matrix composites and dual-phase steel can be found in several review papers [11-13]. Early papers already recognized the relationship between strength and ductility on the macro-scale and the volume fraction and mechanical properties of the reinforcement phase on the micro-scale [14-16]. It is well known from many experimental and numerical studies that an increase in reinforcement volume fraction elevates the fracture strength but at the same time lowers the ductility [17-19]. Depending on the reinforcement volume fraction and its mechanical properties, the dominant fracture mechanism changes from ductile to brittle. Mummery and Derby [20] have observed that fracture initiation tends towards failure of the silicon-carbide reinforcement particles when the volume fraction or the size of the particles is increased. Lee et al. [21] have performed different heat treatments on dual-phase steel, and found that specimens with a high martensite hardness (tempered at a low temperature) reveal cleavage fracture, while specimens with low martensite hardness (tempered at higher temperatures) evidence ductile fracture; see also [22]. Using X-ray micro-tomography, Maire et al. [23] found that fracture initiates at or near the particle-matrix interface for a comparatively soft matrix, while it initiates by cracking of the particles for a comparatively hard matrix.
In other multi-phase materials, such as dual-phase steel, these observations are not as extensive due to the chemical and crystallographic similarities of the phases. Finally, a reduction of fracture strain is observed with increasing triaxiality, whereby the decrease is stronger than in other ductile materials, as brittle fracture is often observed at higher triaxialities. To complicate matters, inspection of the fracture surfaces of these so-called brittle fracture modes still reveals dimples [24-28]. Focusing on the role of the morphology of the microstructure, it has been observed that fracture is promoted if the reinforcement particles are clustered [29-31]. For dual-phase steel, voids are frequently observed near the harder martensite bands caused by the rolling during the production of commercial grades [32-34]. To study the effect of morphology numerically, the model should include (part of) the morphological complexity of the material. Researchers frequently use a microstructure that is directly obtained by microscopy. However, to limit the high complexity and computational cost, the mechanical degradation that results in fracture is often omitted and replaced by a highly simplified criterion whereby fracture develops along the localization of plastic strain [5,6,9,35]. To account for the effect of local degradation due to void nucleation or growth, Prahl and co-workers apply the Gurson-Tvergaard-Needleman (GTN) model for the ductile soft phase and the cohesive zone model for the hard phase to predict macroscopic stress-strain curves up to fracture, with adequate agreement with experiments [26,27]. However, the focus is thereby not put on the individual contributions of the two fracture mechanisms in relation to the microstructure. Vajragupta et al. [36] apply similar models in a microscopic analysis. They observe that fracture initiates in a narrow hard region and then propagates through the surrounding soft phase. This approach is limited by its strong dependence on the finite element discretization. The studies above do not systematically relate the microstructure to the (initiation of) fracture in a statistical sense, mostly because the studied systems are not large enough or the models too complex to include statistical variations, but also because systematically generating different morphologies is highly non-trivial. To overcome this, Kumar et al. [35] have tried to find the critical morphological feature in which the damage is high, regardless of the morphology at further distance. They thereby considered several artificial microstructures which carried the geometrical statistics of the real microstructure. More recently, De Geus et al. [37] considered a large ensemble of random microstructures to calculate the average morphology around the initiation of fracture in the soft phase. It was found that a single grain of the soft phase with neighboring regions of the hard phase on both sides along the tensile direction, interrupted by regions of the soft phase in the shear directions, correlates to high damage levels. These observations are supported by experimental studies from the literature. Avramovic-Cingara et al. [38] identified that fracture initiates at interfaces perpendicular to the tensile axis. Such an interface appears even more critical when two regions of the reinforcement phase are closely separated [22,33,35,39-41].
Most notably, Segurado and LLorca [42] considered a three-dimensional periodic volume element containing spherical reinforcement particles. It was found that damage preferentially nucleated in or in-between particles that are closely separated along the tensile axis. In a more pronounced form, it was found that a band of hard phase interrupted by the soft phase is critical for damage [32-34].

Outline

The paper is structured as follows. The micromechanical model, including the numerical implementation, is discussed in Section 2. A reference ensemble of microstructures with a certain hard phase volume fraction and phase contrast is examined for a particular stress state in Section 3. Using that ensemble, Section 4 studies the influence of different stress states. The hard phase volume fraction and the phase contrast are varied in Sections 5 and 6. All variations are combined to form a coherent mechanism map in Section 7. The spatial distribution of phases around the fracture initiation sites is quantified in Section 8, again for the reference ensemble. In Section 9 the damage model for the hard phase is replaced by a completely different, ductile criterion to assess the sensitivity of the results to the specific choice of damage model (for the reference ensemble). The paper ends with concluding remarks in Section 10.

Microstructure

To obtain a statistically meaningful representation of the microstructure, an ensemble of 256 random volume elements is used. Although the analysis is done on the ensemble as a whole, the mechanical response is computed on each of the volume elements separately, whereby periodicity eliminates boundary effects. The resulting computation is considerably more efficient compared to a single, large, volume element. Also, the computation is naturally parallelized. Each of the periodic volume elements comprises 32 × 32 square cells that represent the individual grains or particles of the material. Each individual cell is randomly assigned the properties of either the soft phase or the hard phase by comparing a random number in the range [0, 1] to a probability ϕ_hard. The resulting hard phase volume fraction is close to ϕ_hard, typically within ±5% of it. A reference case is chosen with ϕ_hard = 0.25, for which a typical volume element is shown in Figure 1(a) using red for the hard phase and blue for the soft phase; ϕ_hard is however also varied. The response is calculated using the finite element method. Given the assumed idealization, only cell-averaged quantities are considered. The finite element discretization is chosen such that the averaged quantities are independent of it. Each cell is discretized using 2 × 2 eight-node quadratic quadrilateral finite elements. Numerical integration is performed using four Gauss points per finite element. The considered tensor components and scalar quantities are volume averaged over all 16 Gauss points in the cell. It has been verified that this discretization is sufficiently accurate: the local relative error is about 1% with respect to a reference discretization of 10 × 10 finite elements per cell, in terms of the stress and (plastic) strain components.
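The random phase assignment described above amounts to a single Bernoulli draw per cell. A minimal sketch (with an arbitrary seed, and function names chosen here for illustration) is:

```python
import numpy as np

def generate_ensemble(n_real=256, n_cells=32, phi_hard=0.25, seed=0):
    """Ensemble of periodic n_cells x n_cells volume elements; True marks a
    hard cell. Each cell is hard with probability phi_hard, so the realized
    volume fraction fluctuates around phi_hard, as noted in the text."""
    rng = np.random.default_rng(seed)
    return rng.random((n_real, n_cells, n_cells)) < phi_hard

ensemble = generate_ensemble()
per_element = ensemble.mean(axis=(1, 2))      # realized hard phase fractions
print(per_element.mean(), per_element.std())  # ~0.25, with a small spread
```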
Constitutive model. Both phases are assumed isotropic elasto-plastic. At the moment of the initiation of fracture, the local deformations are large, and the response can be well in the plastic regime. A constitutive model suitable for such conditions is the model due to Simo [43]. This model uses a linear relation between the Kirchhoff stress τ and the logarithmic elastic strain (1/2) ln(b_e) (where b_e is the elastic Finger tensor), involving the conventional elasticity tensor with Young's modulus E and Poisson's ratio ν. The plasticity is modeled using J2-plasticity. In accordance with this model, an associative flow rule is used. Linear hardening is assumed, which corresponds to the yield function

Φ(τ, ε_p) = τ_eq − (τ_y0 + H ε_p) ≤ 0,

where τ_eq is the equivalent stress and ε_p is the accumulated equivalent plastic strain; the initial (Kirchhoff) yield stress τ_y0 and the hardening modulus H are material parameters. The flow rule follows from normality. The parameters of the soft phase are kept constant throughout this paper. They are chosen loosely representative of a specific class of materials, namely dual-phase steel [6,36,44-46]. The hard phase differs from the soft phase only through the plastic response. The yield stress of the hard phase is related to that of the soft phase through the phase contrast factor χ, i.e.

τ_y0^hard = χ τ_y0^soft.

The value χ = 2 is used as a reference, but the influence of χ is also studied.

Applied deformation and stress state. The periodicity of the volume element is enforced by nodal tyings along the edge of the unit cell. Only the average displacement is prescribed, while all fluctuations along the boundaries and throughout the volume element are permitted. As a reference load case, macroscopic pure shear deformation is prescribed in combination with a plane strain condition for the out-of-plane direction. This corresponds to the macroscopic logarithmic strain tensor

ε̄ = (√3/2) ε̄ (e_x e_x − e_y e_y),   (4)

where ε̄ is the macroscopic equivalent logarithmic strain. It is prescribed in small increments of 0.1% until macroscopic fracture initiation is predicted (discussed in the next section). The different considered stress states are applied as a variation of the pure shear deformation. Since (4) is volume preserving, the macroscopic hydrostatic stress τ̄_m = 0 and therefore also the macroscopic stress triaxiality η̄ = 0. The latter is defined in terms of the macroscopic Kirchhoff stress as

η̄ = τ̄_m / τ̄_eq,  with τ̄_m = (1/3) tr τ̄.   (5)

Different triaxialities η̄ are applied, each constant throughout the deformation history. For efficiency reasons this is done by superimposing a hydrostatic component on the local stress distribution τ obtained from the pure shear simulation:

τ → τ + η̄ τ̄_eq I.   (6)

Since the microstructure is elastically homogeneous, the resulting stress tensor τ is in equilibrium; however, the strain is no longer fully compatible with it, since the added hydrostatic stress would result in additional volumetric strain, which is however elastic and hence small. It has been verified that the relative error in ε is less than 1%, by comparing the approximated response to a full quasi three-dimensional computation subjected to a constant η̄ [47]. Note that in [47] the effect of the square cell shape was also found to be small when compared to hexagonal cells.
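The superposition of a hydrostatic component, Eq. (6) as reconstructed above, is a one-line operation on each local stress tensor. The sketch below verifies that the shift leaves the equivalent stress unchanged while producing the desired triaxiality; the function names and the example tensor are illustrative only.

```python
import numpy as np

def tau_eq(tau):
    """Von Mises equivalent stress of a 3x3 stress tensor."""
    dev = tau - np.trace(tau) / 3.0 * np.eye(3)
    return np.sqrt(1.5 * np.tensordot(dev, dev))

def impose_triaxiality(tau, tau_eq_macro, eta):
    """Add the uniform hydrostatic part eta * tau_eq_macro * I to a local
    stress from the pure shear simulation (where the macroscopic
    hydrostatic stress vanishes), cf. Eq. (6)."""
    return tau + eta * tau_eq_macro * np.eye(3)

tau = np.diag([100.0, -100.0, 0.0])              # a pure shear local stress
tau_new = impose_triaxiality(tau, tau_eq(tau), eta=0.5)
print(np.isclose(tau_eq(tau_new), tau_eq(tau)))  # deviatoric part unchanged
print(np.trace(tau_new) / 3.0 / tau_eq(tau_new)) # resulting triaxiality: 0.5
```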
Damage indicators. Convincing experimental evidence has revealed that in ductile materials voids and/or cavities nucleate and grow throughout all stages of deformation, followed by rapid and highly localized coalescence to global fracture ([19,27,29,33,38] and many others). This behavior is modeled using damage descriptors that identify fracture initiation in the individual cells (representing grains or particles). These individual damage events are assumed not to interact strongly up to global fracture, and are therefore not coupled to the mechanical response. Global fracture is predicted when a critical number of cells in the ensemble of microstructures have 'fractured'. The hard phase fails through cleavage fracture, for which a stress-based Rankine damage descriptor is used [1-3]. Fracture initiates when the maximum principal stress in a cell, τ_I, reaches a critical value, τ_c. The damage indicator is defined accordingly as

D = τ_I / τ_c,   (7)

so that D = 0 initially and fracture initiation is predicted for D = 1. Naturally, τ_c is a material parameter (see below). The soft phase is assumed to fail in a ductile manner, which is characterized by the Johnson-Cook model [4]. Reformulated in an incremental form, this model compares the rate of effective plastic strain in a cell, ε̇_p (which is by definition non-negative), to a critical strain ε_c as follows:

D = ∫₀ᵗ ε̇_p / ε_c(η) dt'.   (8)

The critical strain ε_c depends on the stress triaxiality η in that cell in the following way:

ε_c(η) = A exp(−B η) + ε_pc,   (9)

where the parameters A, B, and the critical plastic strain ε_pc are material parameters. The material parameters for both failure mechanisms are taken from the literature, in the same range as the parameters of the constitutive model [36]. For the homogeneous hard phase, this implies that a maximum of 5% plastic strain is allowed under uniaxial tension (see also [48]). Based on both damage indicators in Eqs. (7, 8), an indicator for fracture initiation is defined on the cell level, denoted by 𝒟: 𝒟 = 1 when fracture has initiated, which corresponds to any value D ≥ 1, whereas otherwise 𝒟 = 0.

Macroscopic fracture initiation. Macroscopic fracture initiation follows in an averaged sense from local fracture initiation as described above. Macroscopic fracture is predicted when 1% of the cells in the ensemble have 'failed', i.e. 𝒟 = 1 in 1% of the 256 × 32 × 32 cells in the ensemble. Using the ensemble of different microstructures ensures that the predicted fracture initiation is representative and not due to a statistical fluctuation.
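Both indicators are simple functionals of a cell's stress/strain history, as the sketch below illustrates. The parameter values are placeholders, and the exponential form of ε_c follows the reconstruction of Eq. (9) above.

```python
import numpy as np

def rankine_damage(tau_I, tau_c):
    """Hard phase, Eq. (7): history maximum of the first principal stress
    over the critical stress."""
    return np.maximum.accumulate(np.asarray(tau_I)) / tau_c

def johnson_cook_damage(deps_p, eta, A, B, eps_pc):
    """Soft phase, Eqs. (8)-(9): plastic strain increments weighted by the
    triaxiality-dependent critical strain, accumulated over the history."""
    eps_c = A * np.exp(-B * np.asarray(eta)) + eps_pc
    return np.cumsum(np.asarray(deps_p) / eps_c)

# Placeholder histories for a single cell (loading in 30 increments):
D_hard = rankine_damage(tau_I=np.linspace(0.0, 600.0, 30), tau_c=500.0)
D_soft = johnson_cook_damage(deps_p=np.full(30, 0.01), eta=np.full(30, 0.4),
                             A=0.2, B=1.7, eps_pc=0.1)
print(D_hard[-1] >= 1.0, D_soft[-1] >= 1.0)  # fracture initiation flags
```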
Reference ensemble and load case. The reference ensemble, with hard phase volume fraction ϕ_hard = 0.25, phase contrast χ = 2, and applied triaxiality η̄ = 0, is considered first. The ensemble averaged macroscopic equivalent stress τ̄_eq as a function of the applied macroscopic equivalent strain ε̄ is shown using a solid black line in Figure 2. For this curve, macroscopic fracture initiation is indicated using a marker, and the hardening as predicted if loading is continued after fracture initiation is shown using a dashed black line. The constitutive responses of the hard phase (red) and the soft phase (blue) are also included using dashed lines. As observed, macroscopic fracture initiation is predicted at a strain of ε_f = ε̄ = 0.11. Compared to the soft phase, the stress at the onset of macroscopic fracture is increased by 16% due to the introduction of the hard phase. This increase comes at the expense of a decrease in fracture strain of 63% compared to the homogeneous soft phase. The microscopic response is visualized in Figure 3 for one microstructure from the ensemble, at the moment of macroscopic fracture initiation. The deformed geometry clearly shows the effect of the periodic boundary conditions, where the average deformation is prescribed by (4), with extension in the horizontal direction and compression in the vertical direction. The individual cells are significantly deformed in all directions. The plastic strain, in Figure 3(a), is largest in the soft phase, in particular where soft cells are interconnected under ±45 degree angles and surrounded by cells of the hard phase. This explains the decrease of macroscopic ductility observed above: the soft phase has to accommodate more (plastic) deformation due to the phase contrast with the hard phase. The hydrostatic stress, in Figure 3(b), has extremes in both phases, but is the largest in the hard phase. In the soft phase the highest tensile values are found where a soft cell is flanked by hard cells in the horizontal direction (e.g. on the bottom right). In the hard phase this is observed where several hard cells are linked in the horizontal direction (e.g. on the top right). All predicted fracture initiation sites, in Figure 3(c), are in the soft phase. In each case the soft cell is flanked by a hard cell on one or both sides.

Results

The applied macroscopic stress triaxiality η̄ is varied in the range [−0.4, 1.5]. The resulting ensemble averaged fracture strain ε_f as a function of the applied triaxiality η̄ is shown in black in Figure 4, for the reference ensemble. Also shown are solid curves that correspond to the cases where failure is modeled only in the soft phase (i.e. fracture initiation is predicted in 1% of all soft cells in the ensemble, in blue) or only in the hard phase (in red). The dashed curves correspond to the uniform soft phase (in blue) and hard phase (in red). Note that the number of 'failed' cells varies between the different microstructures, ranging from 2 to 21 (e.g. Figure 3, wherein the number of 'failed' cells equals 6). Figure 4 shows that the fracture initiation strain ε_f is significantly reduced compared to a specimen of uniform soft phase. The overall trend is similar: the fracture initiation strain ε_f decreases for increasing triaxiality η̄, as is frequently observed for ductile materials. However, for the composite a rather rapid decrease of fracture strain is observed at a macroscopic triaxiality of 0.2 < η̄ < 0.5. At low triaxialities, η̄ < 0.2, the macroscopic fracture initiation strain of the ensemble coincides with that of the soft phase (in blue), while at high triaxialities, η̄ > 0.5, it coincides with that of hard phase failure (in red). The two failure mechanisms are thus in competition, whereby the macroscopic triaxiality η̄ plays a key role. At triaxialities η̄ < 0.2 a sufficient number of soft cells fail to trigger macroscopic fracture before a significant amount of damage is generated in the hard cells. For triaxialities η̄ > 0.5 the hard cells reach their failure criterion substantially earlier. In the range 0.2 < η̄ < 0.5 the competition is more even and both contribute. Thus, even though fracture initiation occurs sooner at higher triaxialities for both damage mechanisms, the brittle failure mechanism becomes more pronounced due to its stress dependence, which scales with the elastic modulus. The ductile failure mechanism on the other hand scales with the triaxiality through the plasticity, via the much lower hardening modulus. The trend in Figure 4 can be further analyzed by inspecting one typical microscopic response. The fracture initiation indicator 𝒟 is shown in Figure 5, again for the volume element of Figure 1. From left to right the triaxiality increases, whereby the simulation is terminated at the relevant macroscopic fracture initiation strain ε_f. For η̄ = 0 (Figure 5(a)) all fracture initiation sites are in the soft phase, always directly adjacent to the hard phase. In the transition regime, for η̄ = 0.5 in Figure 5(b), several fracture initiation sites occur in hard cells. In the brittle fracture regime, for η̄ = 1 in Figure 5(c), all fracture initiation sites are in the hard phase.
Like for the soft phase, each of these sites has hard phase to the left and/or right. The morphological characteristics of the phase distribution around the fracture initiation sites are discussed in more detail in Section 8.

Figure 5. Fracture initiation indicator 𝒟 for the volume element of Figure 1 for different values of applied triaxiality. Note that the response is shown for different stages of deformation, corresponding to the respective macroscopic fracture initiation strain ε_f for each applied triaxiality η̄ (see Figure 4).

Discussion

The results indicate that the initiation of fracture is dominated by the ductile soft phase, by the brittle hard phase, or by both. The damage mechanisms in the two phases are thus in competition, whereby the outcome is determined by the macroscopic stress triaxiality, which is consistent with the literature [24-28, 49, 50]. For example, Hoefnagels et al. [49] carefully categorized and counted the damage events in a dual-phase steel subjected to different strain paths. In the context of the results above, their most important observation is that more fracture occurs in the hard martensite phase for a microstructure subjected to bi-axial loading (with a high triaxiality) compared with the uni-axial loading case (with a low triaxiality). For multi-phase materials, few experimental studies have measured the macroscopic fracture strain as a function of stress triaxiality. A complicating factor is that different ranges of triaxialities require different sample geometries, associated with different macroscopic strain paths [51]. For sample geometries suitable for the shear regime, it has been observed that the fracture strain decreases with increasing triaxiality [28,52-55], although the number of measurements is too small to be conclusive about the existence of a critical triaxiality at which the ductility suddenly decreases. For sample geometries suitable for different strain paths, in particular for bi-axial tension, the fracture strain is observed to be higher than in pure shear [52,53,55]. This observation invites further research, as different strain paths have not been addressed in the present study, since they necessitate costly three-dimensional computations. Another question that is not addressed in the present work is whether different propagation mechanisms are triggered in different triaxiality regimes, as is known to be the case for more homogeneous materials [51,56]. A strong limitation of the present study is the two-dimensional, plane strain, character of the microstructures. Recently, similar computations have been performed on two- and three-dimensional microstructures [57]. The comparison showed that the damage distributions are similar in both cases, but the value of the damage is over-predicted using a two-dimensional model. For the present results this implies that the macroscopic fracture initiation strain, in Figure 4, is lower than in a three-dimensional microstructure. For the present model, one may also ask to what extent the results depend on the definitions of microscopic and macroscopic fracture initiation. With respect to the former, in reality the strength may vary from cell to cell. This is modeled by randomly varying the strain-to-failure from cell to cell, for both phases. The critical strain is therefore multiplied by 1 + δ, which varies randomly in space, with zero mean and a standard deviation for which the values δ = 0, 0.01, 0.05, 0.1, and 0.20 are considered. The macroscopic fracture initiation strains as a function of the stress triaxiality are shown in Figure 6.
Qualitatively, the results are unaffected by δ, and also quantitatively the difference is small (at most a factor of 50 smaller than the applied variation). The reason for this is that the local phase distribution controls the damage, as is discussed in more detail in Section 8. The definition of macroscopic fracture initiation, i.e. that 1% of the cells in the ensemble have failed, is examined next. The analysis has therefore been repeated for the values 0.2% and 5%. The results, in Figure 7, show that the same qualitative trend is recovered in each case: ductile and brittle fracture are in competition. Quantitatively, the difference is 0.01 in terms of fracture strain and approximately 0.18 in terms of the critical stress triaxiality at which the failure switches from being dominated by the soft phase to being dominated by the hard phase. Future research that incorporates mechanical degradation would render this simple criterion obsolete, by predicting macroscopic instability as a natural outcome of damage propagation on the micro-scale. It is often hypothesized that, since plasticity occurs in the hard phase, it fails through ductile fracture instead of brittle fracture [58-62]. Therefore, the analysis is also carried out with a ductile fracture initiation criterion in the hard phase, in Section 9.

Influence of volume fraction

In this section the effect of the variation of the hard phase volume fraction is considered. Ensembles with different hard phase volume fractions, ϕ_hard, are used, yet each with the same reference phase contrast χ = 2. The macroscopic fracture initiation strain ε_f as a function of macroscopic stress triaxiality η̄ is shown in Figure 8 for seven different values of ϕ_hard, each indicated using a different color. The reference ensemble of Figure 4 is shown in black. Increasing the amount of hard phase promotes both the soft phase and the hard phase failure mechanisms. This is observed as a decrease of ε_f in both regimes. The triaxiality η̄ at which the transition between the two regimes takes place decreases with increasing hard phase volume fraction. The predicted macroscopic stress-strain response in pure shear (η̄ = 0) is shown in Figure 9 for different hard phase volume fractions ϕ_hard, where fracture initiation is indicated with a marker. The increase of the hard phase volume fraction results in an increase of the macroscopic yield strength, while the macroscopic hardening is more or less constant. The macroscopic fracture initiation strain ε_f decreases with increasing hard phase volume fraction (as already observed in Figure 8), while at the same time the fracture initiation stress τ_eq^f increases. The consequence of the above observations is examined next in the context of the classical trade-off between strength and ductility. In Figure 10, the macroscopic fracture initiation stress, τ_eq^f, is plotted versus the fracture initiation strain ε_f for different macroscopic triaxialities η̄. The arrow and the increasing size of the markers indicate the increase of the hard phase volume fraction. For η̄ = 0 it is observed, as in Figure 9, that for an increasing hard phase volume fraction the strength, τ_eq^f, increases at the expense of decreasing ductility, ε_f. However, at the highest considered triaxiality of η̄ = 0.6 this trade-off breaks down. At this triaxiality, an increase of the hard phase volume fraction causes the fracture to be dominated by the hard phase, leading to an almost constant strength and decreasing ductility.
In line with Figure 8, this is a direct consequence of the competition between failure dominated by either the soft phase or the hard phase, whereby the latter 'wins' for η̄ = 0.6.

Figure 10. Predicted fracture initiation equivalent stress τ_eq^f and equivalent strain ε_f for varying hard phase volume fractions ϕ_hard (indicated with an arrow) at constant phase contrast χ = 2. The different curves correspond to different macroscopic stress triaxialities η̄. The reference ensemble (ϕ_hard = 0.25 and χ = 2) is highlighted using a black square.

Influence of phase contrast

Next, the phase contrast χ is varied using different ensembles with a constant hard phase volume fraction of ϕ_hard = 0.25. The macroscopic fracture initiation strain ε_f as a function of macroscopic stress triaxiality η̄ is shown in Figure 11 for seven different values of χ. Both the soft phase and the hard phase failure mechanisms are strongly promoted by increasing the phase contrast, although at different rates. Consequently, the outcome of the competition between the two failure mechanisms differs for different phase contrasts. At low phase contrast (light blue), the fracture initiation strain ε_f is significantly higher than for the reference ensemble (in black), as it is dominated by ductile fracture initiation for almost all considered triaxialities. At higher phase contrast (in red), the fracture lies entirely in the hard phase fracture regime. The predicted macroscopic stress-strain responses in pure shear are shown in Figure 12. The observed trend is quite different from that in Figure 9: the yield stress is constant for all χ while the hardening increases. This increase is much lower than the increase in phase contrast χ, as the soft phase accommodates most of the plastic deformation. Again, the predicted macroscopic fracture initiates at a lower strain with increasing χ. Note that the resulting fracture stress τ_eq is almost constant due to the rapid decay in fracture strain caused by the different outcome of the competition between failure mechanisms. Figure 13 shows the strength versus ductility trade-off with contrast. For this case, a more or less constant strength is observed, accompanied by a decreasing ductility, regardless of the triaxiality. Recall from Figure 11 that the increase in phase contrast at this volume fraction leads to hard phase dominated failure.

Figure 11. Macroscopic fracture initiation strain ε_f as a function of the applied triaxiality η̄, for different yield stress ratios between the hard and the soft phase, χ, at constant hard phase volume fraction ϕ_hard = 0.25; cf. Figure 8.

Furthermore, an increase in phase contrast χ mainly leads to an increase in hardening, not in the yield point (see Figure 12). As the fracture strain ε_f rapidly decreases and approaches the yield strain, the effect of the increase in hardening is negligible.

Figure 13. Predicted fracture initiation equivalent stress τ_eq^f and equivalent strain ε_f for varying phase contrast χ (indicated with an arrow) at constant hard phase volume fraction ϕ_hard = 0.25; cf. Figure 10.

Results

To understand the consequences of the competition between the two failure mechanisms, the outcomes of all combinations of hard phase volume fractions, phase contrasts, and stress triaxialities are collected in a mechanism map. This map is constructed using approximately 90000 finite element calculations. Due to the model's simplicity, such a computation becomes feasible even on a moderately sized computing cluster.
Using an in-house code, each volume element takes between two and three minutes to compute (approximately 90000 × 2.5 minutes, i.e. roughly 3750 CPU-hours in total). The resulting mechanism map is presented in Figure 14. The different curves represent the combinations of volume fraction ϕ_hard and phase contrast χ for which exactly 50% of the failed cells are soft and 50% are hard, at a given value of applied triaxiality η̄. In the region to the lower left of the curve, denoted by "soft", the majority of fractured cells are soft, whereas to the upper right (denoted by "hard") the majority is hard. The result is first discussed for a triaxiality η̄ = 0. For a low hard phase volume fraction, ϕ_hard < 0.15, the initiation of fracture is dominated by the soft phase, regardless of the contrast in plastic properties between the hard and the soft phase. In this regime, the effect of the local incompatibility in deformability remains small, as the hard phase is sufficiently dispersed in the microstructure. At the other extreme, where ϕ_hard > 0.4, the initiation of fracture is dominated by the hard phase if the yield contrast is sufficiently large (χ > 2.5). In this regime, the incompatibility between the two phases has a strong influence, as the hard phase domains are close and link up. The transition between these two regimes occurs at approximately ϕ_hard = 0.25. These observations also apply for the other triaxialities, although the transition shifts to favor hard phase failure at higher triaxialities, as observed before.

Discussion

The outcome of the simple numerical model in the form of this mechanism map is consistent with experimental observations from the literature. For instance, Mummery and Derby [20] observe that fracture initiation is dominated by breaking of the brittle reinforcement particles at higher volume fractions, and for higher phase contrasts, Lee et al. [21] observe cleavage fracture.

8 Influence of the local phase distribution

Analysis

To reveal the correlation between the local microstructural morphology and the individual fracture initiation sites, the average microstructure around the fracture initiation sites is calculated. This approach was first introduced by De Geus et al. [37] and is summarized below. It quantifies the probability of finding the hard phase at a certain position relative to the fracture initiation sites. A value higher than the hard phase volume fraction, ϕ_hard, corresponds to a positive correlation between fracture initiation and the hard phase at that relative position. Vice versa, a value lower than ϕ_hard corresponds to a positive correlation between fracture initiation and the soft phase. The method is discussed based on a single volume element; the ensemble average trivially follows and is therefore omitted. The microstructure is described using a phase indicator I, which is defined as follows:

I(i, j) = 1 for (i, j) ∈ hard, and I(i, j) = 0 for (i, j) ∈ soft, (11)

whereby (i, j) represents the position of a particular cell. For the regular grid used here, it corresponds to the 'pixel' position, i.e. the row and column index of the grid of square cells. The average microstructure at a certain position (Δi, Δj) relative to a fracture initiation site is calculated by averaging the phase indicator I, weighted by the fracture initiation indicator D, over all positions (i, j), i.e.

I_D(Δi, Δj) = Σ_(i,j) D(i, j) I(i + Δi, j + Δj) / Σ_(i,j) D(i, j),

where the index (i, j) loops over the cells (or pixels) in the volume element. Finally, the ensemble average I_D is calculated by averaging over all 256 volume elements in the ensemble.
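As an illustration of the weighted average defined above, the following is a minimal sketch for a single volume element. The periodicity of the volume element (implemented via np.roll) and the array names are assumptions.

```python
import numpy as np

def average_microstructure(I, D, di_max, dj_max):
    """Average of the phase indicator I around fracture initiation sites.

    I : 2-D array, 1 for hard cells, 0 for soft cells (equation (11)).
    D : 2-D array, 1 where fracture initiated, 0 elsewhere.
    Returns I_D on the window [-di_max, di_max] x [-dj_max, dj_max],
    assuming a periodic volume element.
    """
    weight = D.sum()  # total number of fracture initiation sites
    out = np.zeros((2 * di_max + 1, 2 * dj_max + 1))
    for di in range(-di_max, di_max + 1):
        for dj in range(-dj_max, dj_max + 1):
            # np.roll makes element (i, j) equal to I(i + di, j + dj)
            shifted = np.roll(I, shift=(-di, -dj), axis=(0, 1))
            out[di + di_max, dj + dj_max] = (D * shifted).sum() / weight
    return out
```

The ensemble average then follows by averaging the per-element results over the 256 volume elements.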
Results

The result is shown in Figure 15 for two different values of the applied triaxiality: one in the regime where failure is governed by the soft phase (η̄ = 0) and the other in the hard phase failure regime (η̄ = 1). For both, the average arrangement of phases is computed at the respective fracture initiation strain (see Figure 4). The origin, indicated with black dashed lines, corresponds to the fracture initiation site. The colors are chosen to maximize interpretability: red corresponds to an elevated probability of the hard phase, and blue to that of the soft phase. For η̄ = 0, in Figure 15(a), it is observed that, as expected, the fracture initiates in the soft phase: I_D ≈ 0 at the origin of the diagram. Directly to the left and right, in the tensile direction, regions of the hard phase are found. This band of hard phase is interrupted by bands of the soft phase at angles close to ±45 degrees. This observation coincides with the result of [37], which, however, was limited to fracture initiation in the soft phase only. When the result is compared with I_D taken at η̄ = 1, in Figure 15(b), it appears that the key features do not change, except that fracture now initiates in the hard phase (observed as I_D > ϕ_hard in the center). Fracture thus initiates in a band of the hard phase aligned with the tensile direction (in red) where it intersects a band of the soft phase at an angle close to ±45 degrees with respect to the tensile direction (in blue); in this case the hard phase band is not interrupted. This result is remarkable, as the two results in Figure 15 are dominated by totally different failure mechanisms. The only noticeable difference is that the probability of the regions of the soft phase is reduced (characterized by the intensity of blue in the bands at ±45 degree angles) with respect to Figure 15(a).

Discussion

The above result suggests that a region or band of hard phase aligned with the tensile direction, intersected by regions or bands of the soft phase in the direction of shear, is critical for fracture initiation, regardless of whether the hard band is actually interrupted or not. Such characteristics have frequently been observed experimentally [29-34, 48, 63, 64], although experimentally the actual phase in which fracture initiates often cannot be uniquely determined. Based on Figure 15, the surroundings would look similar for both fracture initiation mechanisms. Using a numerical model, Segurado and LLorca [42, 65] made the observation that damage nucleation is promoted by clustering of the reinforcement phase in the tensile direction, for either hard phase damage, soft phase damage, or interface decohesion. The results in Figure 15 are in accordance with these observations; the main difference of the current result is that the average microstructure is obtained in a much wider region around the fracture initiation sites. For three-dimensional microstructures, using an analysis similar to the one presented here, De Geus et al. [57] have found a qualitatively similar average phase distribution around fracture initiation as in Figure 15 for a planar deformation (i.e. pure shear). De Geus et al. [37] considered ductile failure in the soft phase only and reasoned that the arrangement of phases in Figure 15(a) is due to (i) a phase boundary perpendicular to the tensile axis giving rise to hydrostatic stress and (ii) shear bands through the soft phase aligned with the shear axis, giving rise to high plastic deformation.
From the current result, in Figure 15(b), it is observed that a combination of these mechanisms is also responsible for the high stress in the hard phase. In any case, critical for damage is (i) a band of hard phase aligned with the tensile direction, with which (ii) bands of the soft phase aligned with the shear directions intersect. The orientation is determined by the applied macroscopic deformation [57]. If such a configuration could be avoided, the material's fracture properties would be enhanced.

Comparison with ductile failure of the hard phase

For the hard phase, the Rankine model for cleavage has been used so far. It compares the maximum principal stress to a critical value, see equation (7), and is therefore strongly dependent on the actual stress state. Since a small amount of plasticity is allowed in the hard phase, and micrographs of the fracture surface often reveal dimples, it may be appropriate to consider the hard phase ductile as well. In this section, the Rankine model is therefore replaced by the Johnson-Cook model according to (8); both phases thus follow this criterion, albeit with different parameter sets. The parameters of the soft phase are as before; those of the hard phase are selected such that, for the homogeneous hard phase in uniaxial tension, the same fracture strain is obtained as with the Rankine-based model used so far. The macroscopic fracture initiation strain ε_f is shown in Figure 16 as a function of the applied triaxiality η̄ for the reference parameters in black (i.e. hard phase volume fraction ϕ_hard = 0.25 and phase contrast χ = 2). The curves for failure in only one of the phases are included as a red line (only hard phase failure) and a blue line (only soft phase failure). The fracture initiation strains for the uniform phases are included as dashed lines. When the black curve is compared with the result in Figure 4, it appears that the macroscopic fracture initiation strain decreases less strongly with triaxiality. The transition from fracture initiation dominated by the soft phase to that dominated by the hard phase is still observed, however at a different triaxiality. Compared with the uniform phases, the same observations can be made as for Figure 4. The average microstructure around the fracture initiation sites is shown in Figure 17. For η̄ = 0 the result still coincides with Figure 15(a), as fracture initiation is dominated by the soft phase there. Also at η̄ = 1, where the failure is dominated by the hard phase, the key features coincide with Figure 15(b). However, the regions of the hard phase in the direction of tension (to the left and right of the fracture initiation sites) have a lower probability. Conversely, the regions of the soft phase in the direction of shear (±45 degrees) have a higher probability. That is, the relative position of the soft phase around the fracture initiation site is more important, while the relative position of the hard phase is less important, compared with the case of cleavage fracture. Also, the orientation of the regions of the soft phase deviates more from ±45 degrees with respect to the tensile direction. This is due to the different weighting of the plastic strain and the volumetric stress when the Rankine model is replaced by the Johnson-Cook model.
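To contrast the two local initiation criteria, the sketch below shows their generic forms. The exact expressions are those of equations (7) and (8) in the paper, which are not reproduced here; the parameter names (sigma_c, D1, D2, D3) and any values are hypothetical placeholders.

```python
import math

def rankine_failed(sigma_1, sigma_c):
    """Cleavage (Rankine): fail once the maximum principal stress sigma_1
    reaches a critical value; strongly dependent on the stress state."""
    return sigma_1 >= sigma_c

def johnson_cook_failure_strain(eta, D1, D2, D3):
    """Ductile (Johnson-Cook type): the strain-to-failure decays with
    stress triaxiality eta (rate and temperature terms omitted)."""
    return D1 + D2 * math.exp(-D3 * eta)

def ductile_failed(eps_p, eta, D1, D2, D3):
    """Fail once the accumulated plastic strain eps_p reaches the
    triaxiality-dependent failure strain."""
    return eps_p >= johnson_cook_failure_strain(eta, D1, D2, D3)
```

The switch from the Rankine to the Johnson-Cook criterion thus replaces a stress-based check by a plastic-strain-based one, which is what produces the different weighting of plastic strain and volumetric stress noted above.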
Concluding remarks

A simple multi-scale model is proposed and exploited that uses a microscopic model restricted to the most essential micromechanics underlying the initiation of failure in a ductile two-phase material, comprising a hard but brittle phase embedded in a soft and ductile matrix. A large number of microstructures is considered to capture the statistically extreme fracture initiation at the level of the individual grains. Using this model, the common observations in the literature on the effects of applied stress triaxiality, hard phase volume fraction, and contrast in mechanical properties are analyzed and refined.

• The large number of realizations enables the characterization of the average microstructural morphology around the initiation of fracture. Around fracture initiation sites, a band of hard phase is identified in the tensile direction, intersected by bands of soft phase in the directions of shear. This configuration is critical regardless of whether the band of hard phase is interrupted or not, i.e. regardless of whether the local fracture initiates in the soft phase or in the hard phase.

• The macroscopic fracture initiation is dominated by the soft phase at low values of applied stress triaxiality. Above a certain critical triaxiality, the balance tips to the hard phase. This critical triaxiality is a function of the hard phase volume fraction and the contrast in mechanical properties between the phases, as characterized by the mechanism map of Figure 14.

• An increase in hard phase volume fraction leads to an increase in strength at the expense of ductility. Due to its different effect on the macroscopic elasto-plastic behavior, an increase in phase contrast does not lead to a significant increase in strength, while the ductility decreases.

It was shown that these conclusions are relatively insensitive to the exact values of the parameters. They are thus representative of different materials in this class.
Banking of cryopreserved iliac artery and vein homografts: clinical uses in transplantation

Iliac artery and vein homografts are critical for revascularization in living-donor liver transplantation. Since 2010, the National Cardiovascular Homograft Bank and the National University Hospital have collaborated in the pioneering endeavor of banking iliac vessel homografts for such surgeries in Singapore. This article aims to demonstrate that the processing, decontamination and cryopreservation techniques that our bank follows help to preserve iliac vessel homografts for a longer duration than homografts preserved using short-term preservation techniques. This paper reports the first 4 years of post-operative outcomes for recipients as a preliminary report for a longer-term outcome study. Criteria for donor assessment, and techniques of iliac vessel homograft recovery, processing, decontamination, cryopreservation and storage according to the American Association of Tissue Banks standards, are also described. From 2010 until 2013, of the iliac vessel homografts processed, 17 (94.4 %) were found suitable for clinical use. Nine iliac artery grafts (64 %) and one iliac vein graft (14 %) were implanted. Irrespective of vessel type, homografts <90 mm in length were of little use. Of the nine current iliac vessel homograft recipients, eight patients (89 %) had living-donor liver transplantation and one patient (11 %) had reconstruction of the right internal carotid artery after resection of an aneurysm. Our preliminary results support the existing literature suggesting that cryopreserved iliac vessel homografts can be successfully used for revascularization in liver transplantation and reconstruction of the carotid artery. Encouraging short-term post-operative patient outcomes have been achieved, with no report of an adverse event attributed to the implanted homografts. We believe that our processing, decontamination and cryopreservation techniques have helped preserve the homografts for a longer duration than homografts preserved using short-term preservation techniques.

Introduction

Artery or vein homografts are frequently used for vascular reconstruction in liver transplantation (Martínez et al. 1999). Recovery of vascular grafts was first recommended by Starzl et al. as a life-saving measure for an unexpected emergency; it provides additional vessel length for the recipient during organ transplantation (Starzl et al. 1979). This recovery is important because life-threatening complications that affect the hepatic artery after transplantation can result in ischemia of the liver graft, causing graft failure or even patient mortality (Vivarelli et al. 2004a, b). In most cases, vascular grafts, such as iliac vessels, are recovered together with the liver from the same deceased donor and used for the same recipient. However, iliac artery or vein homografts can also be used when the recipient of a living-donor liver develops a late hepatic artery aneurysm/pseudo-aneurysm, or when autologous vein grafts or a deceased organ donor's artery/vein homografts are unavailable (Sellers et al. 2002). Before the banking of cryopreserved iliac vessel homografts started, liver transplant surgeons relied on other techniques of vascularization. These included the use of fresh or short-term preserved iliac vessels from liver donors (Starzl et al. 1979; Martínez et al. 1999; Sellers et al. 2002; Muralidharan et al. 2004), autologous vein grafts (Hwang et al. 2005), allogeneic cryopreserved vein grafts such as saphenous veins (Kuang et al.
1996), and even a cryopreserved descending aortic conduit for one of our liver recipients, who was diagnosed with liver cirrhosis caused by Budd-Chiari syndrome. Different methods for the short-term preservation of vascular homografts have been reported. Short-term preservation is useful for emergency cases which require the repair of post-transplant vascular complications in the recipient. Martinez et al. stored iliac vessel homografts in Terasaki solution (McCoy's lymphocyte culture medium, bovine fetal serum, HEPES buffer, gentamicin) at 4°C for 1-26 days (Martínez et al. 1999). Sellers et al. treated the vessels with RPMI-1640 with L-glutamine containing antibiotics (240 µg/mL cefoxitine, 120 µg/mL lincomycin, 50 µg/mL vancomycin and 100 µg/mL polymixin B) at 4-10°C for 24 h. They were subsequently kept in fresh RPMI-1640 with 2.05 mM L-glutamine without antibiotics, and stored at 4-10°C for a maximum of 30 days (Sellers et al. 2002). Other studies mentioned the preservation of fresh iliac arteries using University of Wisconsin solution at 4°C for <10 days (Shames et al. 2003; Ma et al. 2011). Alternative preservation media, such as Celsior solution, Euro-Collins solution and histidine-tryptophan-ketoglutarate (HTK) solution, have also been reported (Vivarelli et al. 2004a, b). In addition, a few reports have described long-term preservation processes: for instance, Mabrut et al. decontaminated their iliac vessels with an antibiotic solution containing lincocine (120 µg/mL), mefoxin (200 µg/mL), colimycin (1,000 IU/mL) and vancomycin (50 µg/mL) (Mabrut et al. 2012). The European Homograft Bank used a different antibiotic regimen consisting of vancomycin, lincomycin and polymixin B, with an incubation temperature of 4°C for a duration of 24 h (Jashari et al. 2013). For both centres, 10-15 % dimethyl sulfoxide (DMSO) was used as a cryoprotectant, and cryopreservation was achieved through controlled rate freezing (Mabrut et al. 2012; Jashari et al. 2013). However, despite the clinical relevance, a paucity of literature exists regarding the use of cryopreserved iliac vessels and their clinical outcome (Kuang et al. 1996; Vivarelli et al. 2004a, b; Mabrut et al. 2012). Recognizing the critical importance of iliac vessel homografts for the vascularization of liver recipients in living-donor transplantation, the National Cardiovascular Homograft Bank (NCHB) and the National University Centre for Organ Transplantation have collaborated in the pioneering endeavor of banking iliac artery and vein homografts for such surgeries in Singapore since 2010. Apart from liver transplantation, iliac vessel homografts have also been used by our hospital for surgical emergencies in life-threatening conditions. This article is a description of our routine processes in the recovery, processing, decontamination and cryopreservation of iliac artery and vein homografts, in compliance with the American Association of Tissue Banks (AATB) standards. A retrospective review of post-operative outcomes for homograft recipients, as a preliminary study for a longer-term outcome study, is also presented.

Donor assessment

Donors are multi-organ donors aged 12-66 years. Evaluation of donor suitability is conducted by the clinical coordinator. This includes performing a physical assessment of the potential donor, reviewing medical records and the medical social history questionnaire.
The potential donor is excluded if he/she has engaged in high-risk social behavior and/or is suspected of having, or has tested positive for, transmissible diseases (AIDS, hepatitis B or C, syphilis, dengue fever, active tuberculosis), sepsis, malignancies or autoimmune diseases. He/she is also deemed unsuitable if his/her blood is suspected to be hemodiluted, as this will interfere with the accuracy of the infectious disease test results. Plasma hemodilution can occur either when the potential donor experiences extensive blood loss or when he/she receives a transfusion of significant volumes of whole blood, erythrocytes, colloids and/or crystalloids prior to cross-clamp. The donor's blood is defined as hemodiluted when (1) the total volume of crystalloids transfused in the previous 1 h and colloids transfused in the previous 48 h is greater than the donor's total plasma volume; or (2) the total volume of crystalloids transfused in the previous 1 h and the total volume of colloids and blood products transfused in the previous 48 h is greater than the donor's total blood volume.

Homograft recovery, preservation and storage

Stringent aseptic techniques are upheld during the tissue recovery and processing procedures to prevent microbial and fungal contamination of the tissues. First, iliac artery and/or vein homografts are recovered by the liver surgeon during multi-organ recovery in the operating theatre. After retrieval, bench work is performed immediately. This involves the measurement of vessel length and diameter, and rinsing the tissues in cold saline (0.1-10°C) to remove any blood clots. Vessels are also inspected for patency and the presence of sclerosis or trauma. Vessels which are too short (<90 mm) or display signs of degenerative, sclerotic or traumatic abnormalities are discarded. As the AATB has mandated a standardized evaluation and classification system for homografts, the vessels' conditions are evaluated and classified into three categories: (1) tissue with no visible abnormalities, (2) tissue with imperfections and (3) non-implantable tissue. Tissues in the first two categories are deemed morphologically acceptable upon initial assessment and are transported in cold HTK solution to the NCHB laboratory for further processing. As stipulated by the AATB (2012) requirements, the total ischaemic time must be less than 48 h (AATB, 13th edition). Processing of tissues is performed in an ISO Class 5 laminar flow hood located in an ISO Class 7 cleanroom. The laminar flow hood work surface is sterile and draped according to normal surgical procedure. Before bio-burden reduction, the iliac vessel homografts are rinsed in three rounds of saline. They are then decontaminated using the same antibiotic regimen that the NCHB adopted for disinfecting cardiovascular homografts, which uses amikacin (100 µg/mL) and vancomycin (50 µg/mL) in Medium 199 (M199). The incubation condition is 2-8°C for 24-28 h. After antibiotic disinfection, the vessels are rinsed in fresh M199 without antibiotics for 12 min to remove any residual antibiotics that may remain on the tissues. Finally, the tissue is packaged individually in a sterile cryogenic bag that contains the freeze solution (10 % DMSO in M199). Prior to heat-sealing, air is removed from the bag to prevent rupture of the bag during temperature changes while thawing. The sealed bag is then inserted into a slightly larger cryogenic bag and heat-sealed, creating a double-layer package for additional sterility protection of the homografts.
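Referring back to the hemodilution criterion in the donor-assessment section above, the two conditions can be expressed as a simple check. This is an illustrative sketch only; the function and variable names are hypothetical.

```python
def is_hemodiluted(crystalloids_1h, colloids_48h, blood_products_48h,
                   total_plasma_volume, total_blood_volume):
    """Return True if the donor's blood is considered hemodiluted:
    (1) crystalloids (last 1 h) plus colloids (last 48 h) exceed the
        donor's total plasma volume; or
    (2) crystalloids (last 1 h) plus colloids and blood products
        (last 48 h) exceed the donor's total blood volume.
    All volumes must be expressed in the same unit, e.g. millilitres."""
    condition_1 = crystalloids_1h + colloids_48h > total_plasma_volume
    condition_2 = (crystalloids_1h + colloids_48h + blood_products_48h
                   > total_blood_volume)
    return condition_1 or condition_2
```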
Controlled rate freezing of the processed homograft is attained by running a tissue freezing profile on a controlled rate freezer. The rate of cooling is approximately -1°C per minute. The transfer of the homograft to a quarantine liquid nitrogen storage tank takes place when the temperature probe of the freezing surrogate package reaches -50°C.

Quality control

For the safety of recipients, the results of infectious disease tests and microbiological cultures of tissue and solution specimens are evaluated. A donor is determined to be unsuitable if the results of infectious disease tests are positive, with the exception of the venereal disease research laboratory (VDRL) test. When the VDRL result is reactive, a confirmatory Treponema pallidum hemagglutination or Treponema pallidum particle agglutination test is performed to exclude active syphilis infection. Although some banks exclude donors with both previous and active syphilis due to the potential histological damage caused to the arteries by the infection, the AATB has stated that tissue from a donor reactive for syphilis on a U.S. Food and Drug Administration (FDA)-approved nontreponemal screening assay may be used for transplantation only if the donor's blood sample tests negative on an FDA-approved treponemal-specific confirmatory assay (AATB, 13th edition). Hence, due to the severe shortage of homografts, the NCHB only rejects donors with active syphilis, to maximise the number of implantable tissues. Testing of total antibodies to hepatitis B core antigen (anti-HBc total) is also mandatory. If the anti-HBc total is positive, an additional test for hepatitis B surface antibodies (anti-HBs) is performed to rule out current hepatitis B infection. The acceptable anti-HBs level is ≥10 mIU/mL. In addition, a dengue polymerase chain reaction test and a Mycobacterium tuberculosis culture are also performed for donors with suspected infection, due to the prevalence of these diseases in Southeast Asia. Microbiological cultures for aerobes, anaerobes and fungi are performed on tissue and solution specimens after recovery, post-antibiotic incubation, and before freezing. Cryopreserved iliac vessel homografts are stored under quarantine until all of the infectious disease and microbiological results confirm their clinical suitability. The NCHB's quality assurance staff checks and verifies all results and donor records before submitting them for final review to the Medical Director. After review, clinically suitable homografts are transferred to a clinical liquid nitrogen storage tank and made available for transplantation. Cryopreserved tissues are stored in liquid nitrogen vapor for a duration of 5 years.

Thawing and implantation

The homograft is shortlisted by the transplant surgeon based on its length. As homografts are mostly used in the reconstruction of segment 5 and 8 veins from a right liver graft, where patency is required only for about 3 weeks, ABO blood group compatibility is not mandated. On the day of transplant, the selected graft is transported to the operating theatre of the transplant hospital in a cryoshipper at a temperature below -135°C. After the transplant surgeon approves the thawing of the homograft, the cryopreserved package containing the tissue is immersed in saline prewarmed to approximately 37-42°C.
Saline of higher temperatures is not used; although rapid thawing of the homograft package can be achieved, the sudden temperature difference will increase the risk of breakage of the homograft package, thereby rendering the tissue non-sterile and non-implantable. After complete thawing, a circulating nurse cuts the outer freezing package. The sterile inner package containing the homograft is then handled by the scrub nurse. Subsequently, the homograft is rinsed in three rounds of lactated Ringer's or HTK solution for 5 min each to remove the antibiotic and DMSO residues. Finally, it is kept in lactated Ringer's or HTK solution in a sterile closed bottle. If immediate transplantation is not performed, the bottle containing the homograft will be placed either on a tray of ice or in a 4°C refrigerator. The final step of quality assurance involves the testing of post-thaw tissue and solution specimens for microbiological and fungal cultures. If the post-thaw result is positive, it will be treated as an adverse event regardless of the cause of contamination. The transplant surgeon will be notified immediately so that a prompt follow-up on the recipient and appropriate medical intervention can be administered.

Ethical consideration

The iliac vessel donors are either tissue pledgers, or donation is consented to by the next-of-kin for deceased donors under the Ministry of Health Singapore's (MOH) Medical (Therapy, Education and Research) Act. The MOH's guidelines on the Code of Ethical Practice in Human Biomedical Research and the Helsinki declaration guidelines were consulted. This study did not expose the donors or recipients to additional risk or discomfort, because this article is a description of our routine processes and a retrospective review of post-operative outcomes. Furthermore, since no patients were identified in this article and patient confidentiality was strictly observed, ethics approval was advised to be unnecessary by our hospital.

Processing activity outcome

Iliac artery and vein grafts have been recovered from six anatomical locations: (1) left iliac arteries, (2) right iliac arteries, (3) common and bilateral iliac arteries, (4) left iliac veins, (5) right iliac veins, and (6) bilateral iliac veins. Their lengths ranged from 90 to 150 mm; vessels which do not fulfil the minimum length of 90 mm are of little use to the recipients. To date, all 17 iliac vessel homografts suitable for clinical use have been evaluated and classified as "tissue with no visible abnormalities". The program ceased temporarily in 2012. As a result, no homograft recovery or implantation was facilitated during this period. In 2010, equal numbers of iliac artery and vein grafts were recovered. However, owing to the low demand for iliac vein homografts, the number of vein grafts recovered in 2013 declined to 17 % of the total iliac grafts recovered. In contrast, the number of iliac artery homografts recovered increased. For two donors, a pair of iliac arteries was procured after the surgeons confirmed that the donors' vessels were not required for the liver recipients. In total, nine iliac artery grafts (64 %) and one iliac vein graft (14 %) were implanted (Table 1). A review of infectious disease and microbiological results revealed that 17 (94.4 %) of the iliac vessel homografts processed were suitable for clinical use. The initial contamination rate of post-recovery homografts was 27.7 % (5 homografts).
80 % (4 homografts) of these grafts were successfully decontaminated by the current antibiotic regimen. The fungus Malassezia furfur was isolated post-recovery in one iliac vein homograft. Although it subsequently tested negative on post-incubation culture, it was still discarded due to the pathogenicity of the micro-organism. Other common bacteria that were isolated post-recovery included coagulase-negative Staphylococcus and non-methicillin-resistant Staphylococcus aureus (Table 2). The former is a common skin commensal, while the latter is frequently found in the human respiratory tract.

Recipients and outcome

Eight (89 %) iliac vessel homograft recipients were patients who underwent living-donor liver transplantation. Of these, one recipient received a pair of iliac artery grafts from the same donor. An elderly patient who was diagnosed with a right mycotic internal carotid artery aneurysm also received an iliac vessel homograft. As a case of surgical emergency, he urgently required a vessel for reconstruction of the right internal carotid artery after resection of the aneurysm (Table 3). All the recipients recovered and are doing well post-operatively.

Discussion

This collaborative program has seen a higher demand for iliac artery homografts than for iliac vein homografts. This is because there are more advantages to using an iliac artery homograft, which include: (1) it has a thicker wall and can retain its length, axis and shape after restoration of blood flow in the hepatic vessels; (2) the probable benefit of decreasing the incidence of early anastomotic stenosis; (3) the surgical technique for an iliac artery graft is easier than for an iliac vein graft; and (4) its short-term patency rate is comparable to that of an iliac vein graft (Hwang et al. 2005). We also learned that, irrespective of vessel type, homografts of a length <90 mm were of little use. As a result, the homografts of shorter lengths recovered in 2009, during the initial phase of our bank's operation, had to be removed from the inventory of transplantable tissues after they had reached the 5-year maximum cryopreservation shelf-life. Henceforth, to maximize the probability of graft utilization, iliac vessel homografts of as great a length as possible (≥100 mm) will be procured. Not all liver transplant patients require the implantation of an iliac vessel homograft during surgery. Previously, the absence/thrombosis of the portal vein in the recipient or abnormal hepatic arteries in the donor were contraindications to liver transplantation. However, these problems were overcome by the use of iliac artery or vein grafts from the donor. This technique has since become standard procedure (Strong 2001). It has also been discovered that recipients who have been on long-term steroid therapy pre-transplantation are more susceptible to developing a diseased wall in the hepatic artery. This is a result of steroid-induced angiopathy related to hypercholesterolemia and the diabetogenic effect of steroids (Khalaf et al. 2007). For such recipients, iliac vessel homografts should always be made available during surgery. The main advantage of using cryopreserved homografts is their availability for elective surgeries. As such, they are especially beneficial for living-donor liver transplant patients.
Unlike cadaveric donor liver transplant patients, who can receive the liver and fresh iliac vessels from the same deceased donor, living-donor liver transplant patients have no option of receiving fresh homografts for revascularization when artery complications are discovered peri-operatively. They can only rely on cryopreserved homografts. Usually, liver transplant surgeons can only ascertain whether an iliac vessel homograft is required after assessing the condition of the recipient's vessels during surgery. Therefore, as a cost-saving measure for the recipients, our bank is activated for homograft transportation and thawing only on the day of surgery. This is usually done after the surgeons determine whether there is a requirement for the reserved homograft. The significance of ABO blood group incompatibility as a risk factor that results in early homograft degeneration has been controversial. Mostly, correlations are postulated from the analysis of post-operative graft and recipient outcomes. Experience in organ allotransplantation has shown that donor-recipient blood group incompatibility elicits a strong rejection response. Therefore, blood group cross-matching is performed in many centers (Kadner et al. 2001). In contrast, there are centers in which donor-recipient blood-group matching is considered to be unnecessary (Hwang et al. 2005; Mabrut et al. 2012). Justification is derived from immunological studies, such as the one conducted by Kadner et al., who reported that cryopreserved homografts did not appear to possess an endothelial layer, as observed from a lack of CD31 expression. Moreover, the blood group antigens were not detected on cryopreserved homografts that had been thawed, unlike fresh tissues in which strong expression of the antigens was detected (Kadner et al. 2001). In addition, Martinez et al. reported that the actuarial liver transplant survival rates for recipients of ABO-incompatible vessels were not statistically different from those of recipients of matched fresh vessels (Martínez et al. 1999). Finally, for organ recipients who are already on immunosuppressive therapy, the therapy will abrogate their immune response to the donor grafts (Shaddy and Hawkins 2002). This explains why allografts may function as well as autografts (Sellers et al. 2002) and suggests that donor-recipient blood-group matching might not be required. The implication of removing the requirement for ABO compatibility between recipient and donor homografts is an increase in the availability of cryopreserved homografts for the recipients (Mabrut et al. 2012). Despite encouraging short-term results, which revealed a 100 % survival rate and no cases of graft failure, infection or adverse events attributed to the implanted homograft among our recipients, the long-term post-operative outcome should be further studied. The long-term outcome appears to vary depending on the type of complication (thrombosis or stenosis), the timing of the arterial disease (early or late presentation with respect to the time of transplantation) (Vivarelli et al. 2004a, b), the indications for liver transplantation (Khalaf et al. 2007) and the promptness of diagnosis (Vivarelli et al. 2004a, b). The initial technical success rates and long-term graft survival appear to be promising (Martínez et al. 1999; Del Gaudio et al. 2005; Khalaf et al. 2007; Jashari et al. 2013). For example, Khalaf et al.
reported that the survival rate of the grafts and patients at 5 years post-transplantation was approximately 80-90 %, with a 10-year survival rate of approximately 75 % (Khalaf et al. 2007). Del Gaudio et al. reported that the 1-, 3- and 5-year overall survival rates were approximately 70 % each (Del Gaudio et al. 2005). Liver recipients transplanted for autoimmune hepatitis were reported to show exceptionally encouraging long-term outcomes, with patient and graft survival at approximately 90 % (Khalaf et al. 2007). In contrast, Kuang et al. reported that, in their institution, retrospective analysis of long-term cryopreserved iliac vein graft performance (≥3 years) in portal vein reconstruction in living-donor liver transplantation revealed a high rate of late graft failures (Kuang et al. 1996). In addition, Del Gaudio et al. suggested that retransplantation, donor age and the use of an iliac artery conduit were significant risk factors contributing to poor graft survival and a high incidence of early hepatic artery thrombosis (Del Gaudio et al. 2005). In our context, long-term follow-up is challenging. Firstly, the numbers of both donors and recipients are very small, which makes an accurate and representative study of long-term outcome difficult. Secondly, some of our foreign recipients are no longer on follow-up with our hospital after discharge. Thirdly, the causes of iliac vessel graft failure and patient survival are multifactorial and difficult to predict, due to the complications of liver transplantation (in particular, liver rejection and the extent and effects of immunosuppression) which might affect the iliac vessel graft's performance and outcome (Khalaf et al. 2007). In conclusion, our preliminary results support the existing literature suggesting that iliac vessel homografts, especially artery grafts, can be successfully used for arterial revascularization in living-donor liver transplantation and in emergency surgery involving reconstruction of the right internal carotid artery. Encouraging short-term post-operative patient and graft outcomes have been achieved, with no clinically significant infection or adverse event attributed to the cryopreserved implanted homografts observed. Therefore, based on this study of the first 4 years of post-operative outcomes for recipients, and although the long-term patency of the homografts remains unknown, we believe that our processing, decontamination and cryopreservation techniques help to preserve iliac vessel homografts for a longer duration than homografts preserved using the short-term preservation techniques described previously.
Rethinking Digital Forensics

In the modern socially-driven, knowledge-based virtual computing environment in which organisations are operating, the current digital forensics tools and practices can no longer meet the need for scientific rigour. There has been an exponential increase in the complexity of networks, with the rise of the Internet of Things, cloud technologies and fog computing altering business operations and models. Adding to the problem are the increased capacity of storage devices and the increased diversity of devices that are attached to networks, operating autonomously. We argue that the laws and standards that have been written, and the processes, procedures and tools that are in common use, are increasingly incapable of ensuring the requirement for scientific integrity. This paper looks at a number of issues with current practice and discusses measures that can be taken to improve the potential of achieving scientific rigour for digital forensics in the current and developing landscape.

Introduction

Due to the modern socially-driven, knowledge-based virtual computing environment that organisations are operating in, we argue that the processes, procedures and tools that have been accepted and are commonly used in digital forensics can no longer meet the need for scientific rigour. The U.S. Department of Defense (DOD), in its publication Information Operations [1], has defined the Information Environment (IE) as "the aggregate of individuals, organizations and systems (resources) that collect, process, disseminate, or act on information." The document concludes that "the information environment is where humans and automated systems observe, orient, decide, and act upon information, and is therefore the principle environment for decision making". The main attributes of the modern IE are:

• the physical and virtual size of it (large);
• the rapid evolution as a result of the introduction of new technologies;
• the great irregularity between physical and virtual boundaries of different stakeholders and legal entities;
• the transparent access to and control of assets;
• the speed of information and knowledge exchange involving users across boundaries;
• the stealth and limited attribution because of technologies and legislation, and the rapid concentration of capability allowing for rapid generation and escalation of events;
• the non-serial and distributed nature that allows the parallel execution of events against multiple targets, creating non-linear events.

Literature Review

The underlying principles that are applied to the digital forensic process were developed in the 1990s, but follow the general standard for the acceptability of evidence in a court of law that was provided as a result of the 1923 Frye v. United States case. In this case, the admissibility of a systolic blood pressure deception test as evidence was discussed. The Court in the Frye case held that expert testimony must be based on scientific methods that are sufficiently established and accepted. Later, in 1993, in the Daubert v. Merrell Dow Pharmaceuticals, Inc. United States Supreme Court case (Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 584-587), the standards for admitting expert testimony in U.S. federal courts were determined, and the Court in this case held that the enactment of the Federal Rules of Evidence implicitly overturned the Frye standard. The standard that the Court articulated is referred to as the Daubert standard.
This was given as:

• 'Judge is gatekeeper': Under Rule 702 (Testimony by Expert Witnesses), the task of "gatekeeping", or assuring that scientific expert testimony truly proceeds from "scientific knowledge", rests on the trial judge.

• Relevance and reliability: This requires the trial judge to ensure that the expert's testimony is "relevant to the task at hand" and that it rests "on a reliable foundation". Concerns about expert testimony cannot be simply referred to the jury as a question of weight. Furthermore, the admissibility of expert testimony is governed by Rule 104(a), not Rule 104(b); thus, the Judge must find it more likely than not that the expert's methods are reliable and reliably applied to the facts at hand.

• Scientific knowledge = scientific method/methodology: A conclusion will qualify as scientific knowledge if the proponent can demonstrate that it is the product of sound "scientific methodology" derived from the scientific method.

• Illustrative Factors: The Court defined "scientific methodology" as the process of formulating hypotheses and then conducting experiments to prove or falsify the hypothesis, and provided a set of illustrative factors (i.e., not a "test") for determining whether these criteria are met:

1. Whether the theory or technique employed by the expert is generally accepted in the scientific community;
2. Whether it has been subjected to peer review and publication;
3. Whether it can be and has been tested;
4. Whether the known or potential rate of error is acceptable; and
5. Whether the research was conducted independent of the particular litigation or dependent on an intention to provide the proposed testimony.'

After a number of other relevant rulings, Rule 702 was amended in 2000 in an attempt to codify and structure elements embodied in the "Daubert trilogy". The rule then read as follows:

• Rule 702. Testimony by Experts: If scientific, technical, or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact in issue, a witness qualified as an expert by knowledge, skill, experience, training, or education, may testify thereto in the form of an opinion or otherwise, if (1) the testimony is based upon sufficient facts or data, (2) the testimony is the product of reliable principles and methods, and (3) the witness has applied the principles and methods reliably to the facts of the case.

The 'Scientific rigour of process' paradox

The Daubert standard requires the theory and technique used in the digital forensic process to be generally accepted by the scientific community and to have been peer reviewed. Considerable research has been undertaken into the theory that underpins the processes in use but, unfortunately, for the most part it is dated and has not addressed the current technologies, and where it has, it has not been adopted in practice. The majority of the techniques that are used in the digital forensic process have not satisfied the criterion of known error rates. Most of the main tools that are in use are proprietary commercial products and there is no published data available on error rates. Of more concern is that, on the occasions when these tools have been tested, they have been found to produce, in some cases, significantly different results [5-7]. The paradox is that, despite this lack of published validation, these tools remain in general use. NIST provides a searchable catalog of forensic tools, which "enables practitioners to find tools that meet their specific technical needs". However, they caveat this with a cautionary note that "tool information is provided by the vendor".

• The NSRL provides file profiles computed from its collection of software (such as MD5 and SHA-1 hashes) as a Reference Data Set (RDS) of information. The RDS can be used in the forensic examination of file systems, for example, to speed the process of identifying unknown or suspicious files.
• The CFTT provides a methodology consisting of tool requirements specifications, test procedures, test criteria, test sets, and test hardware.

• The CFReDS provides to an investigator documented sets of simulated digital evidence items for examination.

In 2018, NIST published a document by the Organization of Scientific Area Committees for Forensic Science (OSAC) entitled "A Framework for Harmonizing Forensic Science Practices and Digital/Multimedia Evidence" [8], which states that "Like many other specializations within forensic science, the digital/multimedia discipline has been challenged with respect to demonstrating that the processes, activities, and techniques used are sufficiently scientific". The document then goes on to detail the work carried out by the OSAC Task Group (TG). There is now a considerable level of expertise and experience in the imaging of computer hard disks. While the volume of data that they might contain continues to grow, the underlying technologies (electromechanical and solid state) have not seen significant disruption for a number of years, and disks that will work in one computer can normally be expected to work in another, although there are exceptions. When dealing with mobile phones, tablet computers and other devices, there is an increasing number of issues that include the range of operating systems and their versions, and the number of manufacturers that do not apply common standards and, indeed, seek to differentiate themselves. There is now also an increasing range of products on the market that contain computer processors and memory that may hold potential evidence, and these can be classed as Internet of Things (IoT) devices. Many of these have limited processing power and digital storage, but may contribute valuable evidence to an investigation or have been used in the commission of a crime. An example of this was reported in 2016 [9], when more than 1.5 million CCTV cameras were hijacked and used to carry out a denial of service attack on a security website.

The Challenges of the Paradox

Some of the challenges we are now facing as a result of the characteristics of the modern information environment are:

1. Completeness. Given the increasing volume of data storage on all forms of media, with computer hard disks now at 10 plus TB, USB storage devices at more than two TB and micro SD cards at 512 GB, together with the issues created by cloud storage, the concept of collecting a 'complete' set of the data is becoming increasingly problematic due to the size of the storage media and the volume of data. There is now an issue of the time needed to capture and process these volumes of data, and issues of privacy when the data is being collected from a server that holds the data of more than one person or organisation, as a result of the disparity between the physical and virtual boundaries of the stakeholders. If the complete dataset is not collected, there is the potential for an accusation of selective collection of only the data that supports a case and the exclusion of exculpatory evidence. The question, if we adopt an approach of selective collection, is how do we guarantee that all relevant data has been collected and how do we ensure that the process is repeatable? One commonly used aid in reducing the volume to be reviewed, hash-based filtering against a reference set such as the NSRL RDS mentioned above, is sketched below.
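The forward-referenced sketch of hash-based known-file filtering follows. It illustrates the principle behind using an RDS-style reference set; it is not the NSRL tooling itself, and the evidence path and the reference set used here are hypothetical.

```python
import hashlib
from pathlib import Path

def sha1_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-1 digest of a file, reading it in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest().upper()

def unknown_files(root, known_hashes):
    """Return the files under `root` whose hashes are NOT in the set of
    known (benign) file hashes, so that only those need manual review."""
    return [p for p in Path(root).rglob("*")
            if p.is_file() and sha1_of_file(p) not in known_hashes]

# Hypothetical usage: `known` would normally be loaded from an RDS-style
# reference set of hashes rather than typed in by hand.
known = {"DA39A3EE5E6B4B0D3255BFEF95601890AFD80709"}  # SHA-1 of an empty file
for suspicious in unknown_files("/evidence/mounted_image", known):
    print(suspicious)
```

Such filtering reduces the volume that must be reviewed manually, but it does not, of course, answer the completeness question itself.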
More to the point, because of the developing technological environment, the speed of information exchange involving users across boundaries and the non-serial and distributed nature of processing that may create non-linear events, the data will have greatly changed after its initial collection, rendering our process non-applicable and outdated.

2. Live forensics. This is when the artefacts are discovered and captured on a live, running system from volatile memory. The main purpose is to acquire volatile data that would otherwise be lost if the computer system were turned off, or would be overwritten if the computer system were to remain turned on for a longer period. As the size of RAM in computer devices increases, so does the potential volume of data of evidential value that it may contain. The very action of capturing the data stored in RAM is likely to result in changes to elements of the data; consequently, a second attempt to image the logical device will act on the changed data and will generate a different MD5 or SHA hash value. While the reality of this has been accepted in practice, it sits uneasily with the expectation of repeatability.

3. Cloud forensics. Traditional digital forensic techniques cannot readily be applied in the Cloud environment, as they rely on physical and unrestricted access to the relevant system and user data. This is not possible in the cloud environment due to its decentralized data processing and storage. It is increasingly clear that the concepts of traditional digital forensics cannot be directly used in cloud systems. In particular, the distributed processing and multi-tenancy nature of cloud computing, as well as its highly virtualized and dynamic environment, make the identification of digital evidence and its preservation and collection difficult. In agreement with Biggs and Vidalis [11], the development of the Cloud environment was not undertaken with digital forensics and evidence integrity in mind, and as a result it is challenging technically, logistically and legally. In cloud computing, the forensic process needs to be carried out in three distinct areas: client system forensics, cloud forensics and network forensics. The client system forensic process is well understood and practiced, and is the 'traditional' forensics. Cloud server forensics, although not a new concept, greatly adds to our paradox with the issues of multi-tenancy, physical inaccessibility and the unknown location of the artefacts to be collected, and this can lead to jurisdictional issues. The artefacts may include user data, system logs, application logs, user authentication and access information, database logs, etc. In a highly decentralized and virtualized cloud environment, it is quite common for data to be located in multiple data centres in different geographic locations [12]. The traditional approach of seizing the system is not practical in the cloud environment, even if the location is known, as the effect would be disproportionate and could bring down a whole data centre, affecting a large number of other users due to multi-tenancy. A number of research papers have discussed this issue and some possible solutions [12-16]. The problem of governance is another significant issue in cloud forensics, as discussed in the European Network and Information Security Agency (ENISA) cloud computing risk assessment report, which highlights the 'loss of governance' as one of the top risks of cloud computing, especially in Infrastructure as a Service (IaaS) [17].
3. Cloud forensics. Traditional forensic approaches cannot be applied directly in the Cloud environment as they rely on physical and unrestricted access to the relevant system and user data. This is not possible in the cloud environment due to its decentralized data processing and storage. It is increasingly clear that the concepts of traditional digital forensics cannot be directly used in cloud systems. In particular, the distributed processing and multi-tenancy nature of cloud computing, as well as its highly virtualized and dynamic environment, make the identification of digital evidence and its preservation and collection difficult. In agreement with Biggs and Vidalis [11], the development of the Cloud environment was not undertaken with digital forensics and evidence integrity in mind, and as a result it is challenging technically, logistically and legally. In cloud computing, the forensic process needs to be carried out in three distinct areas: client system forensics, cloud forensics and network forensics. The client system forensic process is well understood and practiced and is the 'traditional' forensics. Cloud server forensics, although not a new concept, greatly adds to our paradox with the issues of multi-tenancy, physical inaccessibility and the unknown location of the artefacts to be collected, and this can lead to jurisdictional issues. The artefacts may include user data, system logs, application logs, user authentication and access information, database logs etc. In a highly decentralized and virtualized cloud environment it is quite common for data to be located in multiple data centres in different geographic locations [12]. The traditional approach of seizing the system is not practical in the cloud environment, even if the location is known, as the effect would be disproportionate and could bring down a whole data centre, affecting a large number of other users due to multi-tenancy. A number of research papers have discussed this issue and some possible solutions [12][13][14][15][16]. The problem of governance is another significant issue in cloud forensics, as discussed in the European Network and Information Security Agency (ENISA) cloud computing risk assessment report, which highlights the 'loss of governance' as one of the top risks of cloud computing, especially in Infrastructure as a Service (IaaS) [17].
In IaaS, users have more control and relatively unfettered access to the system logs and data, whereas in the Platform as a Service (PaaS) model their access is limited to the application logs and any pre-defined APIs provided, and in the Software as a Service (SaaS) model the customers have little or no access to such data. As customers increasingly rely on the Cloud Service Providers (CSPs) to provide functionality and services, they, of necessity, give more control of their information assets to the CSPs. As the customers relinquish control, they inevitably lose access to important data, which as a result is not available for identification and collection for any subsequent forensic needs [12]. As the degree of control decreases, there will be less data of forensic value available for investigations, and as a result there is a greater dependency on the CSPs in order to gain access to such data. This will also be dependent on the Service Level Agreement (SLA) in place and whether it provides for access to data for a forensic investigation [13]. In addition, Virtual Machine (VM) instances may be subject to movement within a data centre, to a different data centre in the same jurisdiction, or to a data centre located in a separate jurisdiction, based upon many factors such as load balancing and business continuity. Such moves, carried out by the CSPs, are completely outside the control of the client. This adds additional challenges to cloud server-side forensics.
It is not only the digital evidence itself that needs to be acceptable to any court of law, but also the processes followed in the conduct of an investigation. In the last two decades, academic researchers and forensic practitioners have proposed a significant number of digital forensic frameworks, and previously published processes and frameworks have been refined, resulting in a variety of digital forensic process models and terminology. While this can be seen as a natural development to meet the changes in technology and the law, it results in a lack of standardisation in the processes and procedures adopted. At the same time, the volume and diversity of devices that have digital processing and storage have continued to expand rapidly, with the result that there has not been adequate research carried out on these devices to establish scientifically sound methods for the extraction of evidence.
The Tools and Their Provenance
Throughout the digital forensics landscape, there are a number of tools that have been widely used and accepted for use in digital forensic imaging and analysis. The National Institute of Justice, in the USA, in conjunction with a number of other agencies, including NIST, has carried out some excellent work testing a significant number of commercial data acquisition and imaging tools and has published reports on their operation. For the analysis and reporting phases of an investigation, the main tools in use are also, for the most part, commercially developed, and while there is de-facto acceptance of their capabilities, there is increasing concern with regard to their veracity. In a number of recent research publications, significant differences have been noted in the output of these tools, both from version to version of the tool and in comparison to the output of other tools. As these tools are commercially developed and have been well marketed and accepted as de-facto standard tools in the community, there has not been any substantial level of independent testing of their functionality.
Practitioners have no visibility of whether a tool has been subjected to peer review, whether it can be and has been tested, and whether there is a known or potential rate of error that would be acceptable. For the open source tools, some of the same issues are also true, although, potentially, access to the source code would allow for experimentation and testing and the ability to determine error rates. The accepted practice to validate the evidence that is to be presented is to use the dual tool approach, where two separate tools are used to confirm that the evidence is accurate. Unfortunately, this approach has a number of issues. Without knowing the algorithms implemented in the tools that are being used, there is no way to ascertain that they are not using the same algorithm and are, in effect, self-validating. The other, more pragmatic issue is that of resources. To use two tools for each task would double the cost and also the workload of practitioners who already cannot deal with the workloads caused by the other issues detailed above.
Based on these statistics, we can assume that a typical case would require Law Enforcement Agency (LEA) Officers to collect, on average, more than 1 TB of data (including CDs, DVDs, internal and external HDDs/SSDs) for each case. Automated procedures that can assist in the processing of this data, such as file signature analysis and hash analysis, are employed. Nevertheless, a large amount of data has to be manually analysed. Even before the analysis stage, there is a lot of work to be undertaken. Forensically wiping one Samsung HD105SI 1 TB drive, using a Tableau TD2u, achieved an average transfer rate of 6.6 GB/min and a projected turnaround time of 2 hours 30 minutes. Furthermore, in a recent disk study the authors performed, a large number of hard disks were acquired and forensically analysed. The average acquisition transfer rate that was achieved was 2.76 GB/min. This translates into an average time that investigators would need to spend in the acquisition phase of at least 6 hours per disk (a back-of-the-envelope calculation of these turnaround times is sketched below). After the acquisition of the devices, a forensic analyst will get to the analysis phase, where, depending on the case, they will perform any/all of the following activities: • Disk geometry analysis (number, size and type of partitions (deleted or not)) • Time-zone analysis. Following the above, more specific analytical steps will have to be performed (the list is not meant to be comprehensive): • Deleted files recovery • Identification of USB devices that were ever connected and when they were connected • Identification of files and folders that have been exfiltrated • CDs/DVDs that may have been burned. Nowadays, most of the above analytical tasks have been automated. Still, depending on the datasets used, the analysis phase will take an average of two days per disk to complete. This translates into two days per disk before the forensic analyst will be able to start the manual analytical activities, the file indexing and any case-specific raw searches. It also translates into two days that physical computing resources will have to be locked down and assigned to the execution of the aforementioned tasks.
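The turnaround times quoted above follow directly from capacity divided by sustained transfer rate. A minimal sketch in Python reproducing the figures for a nominal 1 TB (1,000 GB) disk; the function and variable names are ours, for illustration only:

def hours_to_process(capacity_gb: float, rate_gb_per_min: float) -> float:
    # Turnaround time in hours at a sustained transfer rate.
    return capacity_gb / rate_gb_per_min / 60

print(f"wipe:        {hours_to_process(1000, 6.60):.1f} h")  # ~2.5 h (Tableau TD2u wipe rate)
print(f"acquisition: {hours_to_process(1000, 2.76):.1f} h")  # ~6.0 h (observed acquisition rate)

At these rates, a case involving several disks ties up imaging hardware for days before any analytical work can begin, which is the resource problem described above.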
Potential solutions
The constant introduction and development of new technologies and their adoption in all environments means that frameworks, procedures and tools need to be constantly reviewed and developed to meet the environment in which they are required to work. Currently, ISO 17025:2017 (General requirements for the competence of testing and calibration laboratories) is being widely used to standardise the policies, processes and procedures within digital forensic laboratories. In reality, while there is logic in the use of this standard, it is not fit for purpose. As the title suggests, it is for testing and calibration laboratories and was not developed with the digital forensic environment in mind. Consideration should be given to developing a specific standard to meet the current and developing environment of digital forensics.
In support of CASE and UCO, we need to manage the paradox discussed in the previous section and the risks that it introduces. As with any risks, we can accept them, we can mitigate them, we can insure against them or we can avoid them completely. High frequency, high impact risks must be avoided. Low frequency, low impact risks may be accepted. Low frequency, high impact risks and high frequency, low impact risks should be mitigated by procedural and technical solutions underlined by intelligence operations principles. Paraphrasing Clausewitz, by intelligence we mean any sort of information about the potential suspects and their operational environment (linked to actus reus and mens rea). Today, investigators need forensic intelligence [18][19], even for the simplest and most trivial computer-related crime, that can lead to forensic evidence which, when combined, can build a strong supporting case for a prosecution. Such intelligence can be used either in a pro-active or in a re-active manner. As a concept, this is not new: it was first introduced and discussed a number of decades ago [20][21]. For example, in the UK, ENDORSE (National Crime Agency 2015) is a nationwide forensic and law enforcement initiative to collect and analyse information from drug seizures made in the UK. However, the use case for ENDORSE is limited to a specific problem and a specific crime type within one national jurisdiction. Furthermore, computer-related criminal activities can be seen as a very complex problem, combining different types of traditional criminal activities with different and innovative technologies for transcending jurisdictional boundaries.
The procedural requirements for a modern digital forensic framework aligned with CASE and UCO, addressing the discussed paradox and the issues introduced by the modern information environment, are:
• The Officer In Charge (OIC) must be enabled to identify physical and logical boundaries (internal and external) that are within the scope of the investigation;
• The OIC must be enabled to identify assets within the scope of the investigation;
• The OIC must be enabled to identify, specify and direct the collection of specific types of information;
• The investigative team (including relevant representatives from the environment under investigation) must be enabled to clearly communicate the requirements (priorities and essential elements of information);
• The OIC must be enabled to identify systems and processes that will be used in the collection phase, as these may be organisation or technology specific;
• The OIC must be enabled to develop an operational collection plan with specific disciplines (HUMINT, SIGINT, OSINT) and methods for the intelligence-based collection of the evidence.
Additionally, any solution must not be disruptive to business and must be seen as a catalyst in ensuring business continuity.
Only then may businesses fully engage and allow for a truly integrated and complete approach to the collection and analysis of data. The solution must also be modular.
Conclusions
While academic research is very good at developing frameworks and methodologies for digital forensic processes, it does not have the resources (or the remit) to test tools. More than 20 frameworks and methodologies have been proposed over the last two decades to try and address the developing issues but, while essential, they can cause confusion as they add to the uncertainty and do not present a standardised approach. There is a need for a purpose-developed digital forensic standard that will address current issues and is designed to meet the future challenges that changes in technology will bring, to enable scientific rigour to be applied to the processes. There is a need to develop processes and procedures that will facilitate the integration of law enforcement and corporate resources at an operational level to support investigations. The reality is that LEAs will increasingly have to rely on other organisations to capture data from large data stores, which may be outside of their jurisdiction and which may be using operating systems and applications that are outside their knowledge area and expertise, but they need to be able to guide these resources to achieve the highest levels of scientific rigour in the collection phase. There is a need for education and additional training of the management of digital forensics resources to ensure that they have an overview of the issues and potential resources and can manage an intelligence-led approach. Given the characteristics of the modern information environment and the shortfalls of the current digital forensic methodologies, we should establish new procedural boundaries (supported by relevant legislation) spanning the corporate and policing sectors. We should also fully integrate the use of intelligence into digital forensics and make use of new and emerging technologies throughout the TCP/IP stack.
2019-04-11T13:41:04.502Z
2019-04-01T00:00:00.000
{ "year": 2019, "sha1": "9f4f7cc9540f4fbe41062ae901d07bc870d4e701", "oa_license": "CCBY", "oa_url": "http://aetic.theiaer.org/archive/v3/v3n2/p5.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "1ce8223cc9d7ea1a0bfb1c00b3db3a421a490507", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
149600429
pes2o/s2orc
v3-fos-license
Chinese Relational Self-reference Effect
The self of people from different cultures differs significantly. This paper uses an experiment based on the self-reference processing paradigm to explore whether the Chinese show a relational self-reference effect, employing a 2×2×2 within-participants design with three variables: reference condition, adjective valence and conformity to self. The results show that the response time for reference processing of the relational self is significantly shorter than the reference processing time for generalized others, showing that the Chinese have a relational self-reference effect.
INTRODUCTION
Brewer and Sedikides et al. [1] first proposed the concept of the relational self, and Andersen and Chen et al. [2] further elaborated on and researched the relational self, proposing the triple self theory in recent years. The theory holds that individual self-construal consists of three components: the individual self, the collective self and the relational self. The individual self is a perception of individual characteristics, states, behaviors and so on (for example, "I am noble"). The collective self highlights the individual's relation to groups. The relational self highlights the interpersonal relationships of individuals and refers to a self that is related to one or more important others in a certain situation. At home and abroad, a great deal of research has been done on the individual self and the collective self, but research on the relational self has been insufficient, especially in China.
The relational self refers to the self that shares traits with intimate others (such as partners, good friends and family members) and their relationships. Such a self reflects the self associated with one's important others. The relational self contains two meanings: one is the self-part shared by the two relational parties, and the other is the self-concept of the role or status of the person within an important relationship. The relational self is based on attachment ties, including parent-child relationships, friendships and intimate love relationships, as well as specific role relationships, such as teacher-student relationships, doctor-patient relationships and so on [3]. Because of the cognitive links between the relational self and important others, the relational self can be driven by cognitive activation of either the self or the relevant important others. Once the relational self is activated, the individual can not only perceive and evaluate the self when he or she is connected with the relevant important others, but also demonstrate the corresponding emotion, motivation, self-regulation and behavior responses.
With the constant deepening of self research, people have gained an ever deeper understanding of the self and have conducted much in-depth research on the characteristics of newly proposed components of the self, rather than confining discussion to the structure of the self. The self-reference effect is one such line of research. The self-reference effect was discovered and proposed in [4]: it refers to the phenomenon whereby processing is significantly better than under other coding conditions when the processed materials are connected with the self [5]. Subsequently, extensive and in-depth research on the self-reference effect has confirmed its existence.
With the introduction of the relational self, does a relational self-reference effect exist, and what is it? Following previous understandings of the self-reference effect, the relational self-reference effect refers to a phenomenon in which the processing effect is significantly better than under other coding conditions when the processed materials (such as adjectives) are connected with the relational self. Specifically, compared with non-relational self-reference processing (such as semantic processing, reference processing of generalized others and so on), relational self-reference processing is superior and the memory effect is better. Does the relational self-reference effect exist among the Chinese? Is processing of relational-self information better than processing of non-relational-self information (such as information about generalized others)? Is there a unique mechanism for relational-self processing in the human brain? Recent research has found that culture has an important impact on the self-reference effect. Researchers in many countries and regions have combined the self-reference effect with local culture to carry out a large number of localized studies [6]. Against the cultural background of China, which values collectivism and relationships, more localized research is needed to explore the Chinese self in a comprehensive and in-depth manner. Therefore, this paper attempts to study the relational self-reference effect of the Chinese.
Research purposes and assumptions
The research purposes are as follows. 1. To investigate, through behavioral experiments, whether there is a relational self-reference effect. 2. To discuss whether there is an interaction between relational self-reference processing, the corresponding adjective valence and the judgment of conformity between adjective and individual self.
The research hypotheses are as follows. 1. There is a relational self-reference effect; that is, the relational self-reference processing time is shorter than the processing time for information referring to generalized others. 2. There is an interaction between relational self-reference processing, the corresponding adjective valence and the conformity judgment. Specifically, in self-conformity reference processing of positive-valence adjectives, the behavioral response takes less time, and vice versa.
Participants
The participants are 48 college students or graduate students from Hunan and Hubei: 20 male students and 28 female students, with an average age of 22.6. The participants are right-handed and healthy, with normal or corrected-to-normal vision. A certain amount of compensation is given after completion of the experiment.
Materials
The experimental materials are selected from 84 two-character adjectives (words that can be used to describe people) in the Chinese Affective Words System (compiled by Luo Yuejia et al.) [7]. The 84 adjectives are randomly divided into three groups, with 28 words in each group. A multi-dependent-variable, multi-independent-variable factor analysis is carried out on the three groups of words.
The results show that the p values for the valence, arousal, familiarity, stroke number and word frequency of the adjectives in each group are greater than 0.1 under the various reference conditions, indicating that the adjectives in each group are well matched and there is no significant difference between the groups. That is, the adjectives achieve a good balance across the three reference conditions (mother, friend and generalized others) in terms of valence, arousal, familiarity, stroke number and word frequency. Meanwhile, the experiment also balances the presentation order of adjectives across the reference conditions through randomization. In addition, in order to avoid primacy and recency effects, the practice of Yang Hongsheng et al. [8] is also followed: six adjectives (not among the 84 adjectives) are added before and after the words used for learning, and these filler items are judged with reference to Zhu Rongji.
Design
A 2×2×2 within-participants design is used, in which within-participant variable 1 is the reference condition (relational self, generalized others), variable 2 is the adjective valence (positive, negative), and variable 3 is the conformity judgment (conformity to the reference object, non-conformity to the reference object); the dependent variable is the response time.
Procedure
The experiment is run on a computer and all stimuli are presented as black characters in the center of the screen on a gray background. During the experiment, the participants are told by instructions on the computer screen that this experiment is a trial related to response speed, and they are required to evaluate, as quickly and accurately as possible, the degree of conformity between each adjective and the reference object on a 4-point scale. In each trial, the word "READY" is first rendered for 500 ms, followed by a question that marks the different levels of coding. The question is of three different types: the first is the mother reference code (is this my "mother"?), the second is the friend reference code (is this my friend "**"?), and the third is the code of generalized others (is this "**"?). The question is rendered for 2,000 ms, and then an alarm signal "+" is rendered for 250 ms. After the alarm signal disappears, an empty screen is rendered for 700-900 ms, followed by a two-character adjective (such as "careless") rendered for 2 seconds. The participants are asked to make a yes or no judgment on the adjective according to the previous question, giving their answer by pressing the corresponding key on the keyboard. If the participants believe that "***" is consistent with or more consistent with the adjective, they make a "yes" judgment by pressing the "J" key; if the participants believe that "***" is inconsistent with or completely inconsistent with the adjective, they make a "no" judgment by pressing the "F" key. The time interval from word rendering to the response of the participant is recorded as the response time. The rendering time of each word is 2,000 ms, and the participants respond during this time. If a participant does not respond within 2 seconds, the word disappears automatically and the next trial begins. There is a time interval of 500 ms between each trial. (A schematic of a single trial is sketched below.)
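The following is a minimal, standard-library-only Python skeleton of one trial with the timings just described; the print/input calls merely stand in for the stimulus-presentation software an actual experiment would use, and all names are ours:

import random, time

def run_trial(question: str, adjective: str) -> tuple[str, float]:
    print("READY"); time.sleep(0.5)        # "READY", 500 ms
    print(question); time.sleep(2.0)       # reference-coding question, 2,000 ms
    print("+"); time.sleep(0.25)           # alarm signal "+", 250 ms
    time.sleep(random.uniform(0.7, 0.9))   # empty screen, 700-900 ms
    t0 = time.monotonic()
    print(adjective)                       # adjective, shown for up to 2 s
    key = input("J = yes, F = no: ").strip().upper()
    return key, time.monotonic() - t0      # response and response time (s)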
Each of the three coding conditions (mother, friend and generalized others) is randomly matched with 28 adjectives, so that each participant performs 3×28 = 84 trials in the learning phase.
CONCLUSION
A repeated-measures analysis of variance is used for the within-participants analysis of the three factors (reference object, adjective valence and conformity judgment). The average response times of the participants' judgments under the various conditions are shown in Table 1.
DISCUSSION AND ANALYSIS
China has a unique culture compared to Western society. Many Chinese people regard their relatives and friends as a part of their own body or life. The self is often difficult to distinguish from relatives and friends, and sometimes there is no distinction at all. Sayings such as "Brothers are like hands and feet", "Friends are like brothers" and "If I can exchange my life for the happiness of my children, I am willing to end my life immediately" can be seen everywhere in our culture. Relatives and friends represent the closest relationships of the Chinese people. So, is there a difference between the representation of relatives and friends and the representation of (familiar) generalized others? If there is a significant difference, it indicates that people have incorporated their important relationships into self-representation, so that we can analyze whether the Chinese have a relational self. This paper examines the relational self of the Chinese through self-reference processing. The research results are as follows:
1. The main effect of relational self-reference processing is significant. The response time of relational self-reference processing is significantly shorter than the reference processing time for generalized others, showing that there is a relational self-reference effect and also proving that the Chinese have a relational self. The main effect of whether adjectives conform to the reference objects (the conformity judgment) is highly significant, and the judgment time under the conformity condition is significantly shorter than under the non-conformity condition, showing that the Chinese are slower to make rejection responses in the evaluation of individuals. The main effect of adjective valence is not significant, showing that people weigh both positive and negative information in the evaluation of individuals; that is, the Chinese maintain a modest, strongly dialectical style of thinking in the evaluation of others.
2. Analysis of the second-order interactions shows the following. (1) The interaction between the reference condition and the adjective valence is marginally significant. Under the relational self-reference condition, there is no difference in the response time to positive- and negative-valence adjectives; under the generalized-others condition, the response time to negative-valence adjectives is significantly shorter than the judgment time for positive-valence adjectives, which shows that acceptance responses to others are more difficult than rejection responses. (2) The interaction between the reference object and the conformity judgment is highly significant. Under the conformity condition, the response time for the relational self does not differ significantly from the judgment time for generalized others; under the non-conformity condition, the response time for the relational self is significantly shorter than the judgment time for generalized others, which shows that people find it difficult to make rejection judgments about generalized others.
(3) However, the interaction between the conformity judgment and the adjective valence is not significant, which is consistent with the experimental hypothesis.
3. The third-order interaction between the reference object, adjective valence and conformity judgment is significant. Further analysis finds that: (1) Under the conformity condition of the conformity judgment, the simple main effect of the reference object is not significant; the simple main effect of adjective valence is highly significant, with the response time under the positive-valence condition significantly shorter than the judgment time under the negative-valence condition; and the interaction between the reference object and adjective valence is significant. A simple-effects analysis shows that under the relational-self condition the response time for positive valence is significantly shorter than the judgment time for negative valence, and under the generalized-others condition the response time for positive valence is likewise significantly shorter than the judgment time for negative valence. (2) Under the non-conformity condition of the conformity judgment, the simple main effect of the reference object is significant, with the response time under the relational-self condition significantly shorter than the judgment time for generalized others; the main effect of adjective valence is highly significant, with the response time under the positive-valence condition significantly longer than the judgment time under the negative-valence condition; and the interaction between the reference object and adjective valence is not significant.
According to the above analysis, the experiment confirms that the Chinese have a relational self-reference effect. Meanwhile, the judgment time under the conformity condition is significantly shorter than under the non-conformity condition, showing that individuals more readily make conformity responses and judgments during reference processing. The main effect of adjective valence is not significant. Further analysis finds that the interaction between the reference object and the conformity judgment is highly significant: under the conformity condition, the difference between the response time for the relational self and the judgment time for generalized others is not significant, while under the non-conformity condition the response time for the relational self is significantly shorter than the judgment time for generalized others, which shows that people find it difficult to make rejection judgments about generalized others. The third-order interaction between the reference object, adjective valence and conformity judgment is significant, showing that the three factors interact.
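For readers wishing to reproduce this kind of analysis, below is a minimal sketch of the three-factor repeated-measures ANOVA in Python using statsmodels. The data frame is synthetic and stands in for the real measurements: it assumes one aggregated response time per participant and cell of the 2×2×2 design, and all column names are ours.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic long-format data: 48 participants x 8 cells of the design.
rng = np.random.default_rng(0)
cells = [(s, r, v, c)
         for s in range(1, 49)
         for r in ("relational", "generalized")
         for v in ("positive", "negative")
         for c in ("conform", "nonconform")]
df = pd.DataFrame(cells, columns=["subject", "reference", "valence", "conformity"])
df["rt"] = rng.normal(900, 80, len(df))   # placeholder response times (ms)

aov = AnovaRM(df, depvar="rt", subject="subject",
              within=["reference", "valence", "conformity"]).fit()
print(aov)   # F and p values for all main effects and interactions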
2019-05-12T14:24:16.843Z
2018-11-20T00:00:00.000
{ "year": 2018, "sha1": "1dfc521ab03e27fe94d5658fc494b40960b4ed5f", "oa_license": null, "oa_url": "http://dpi-proceedings.com/index.php/dtssehs/article/download/26450/25863", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0024af1e9169c1f10852cecc6755d622b91b56e3", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
91184368
pes2o/s2orc
v3-fos-license
Belousov-Zhabotinsky liquid marbles in robot control
We show how to control the movement of a wheeled robot using on-board liquid marbles made of Belousov-Zhabotinsky (BZ) solution droplets coated with polyethylene powder. Two stainless steel, iridium coated electrodes were inserted in a marble and the electrical potential recorded was used to control the robot's motor. We stimulated the marble with a laser beam. It responded to the stimulation with pronounced changes in the electrical potential output. The electrical output was detected by the robot, which changed its trajectory in response to the stimulation. The results open new horizons for applications of oscillatory chemical reactions in robotics.
BZ controllers for robots have been studied theoretically in models of excitable automata lattices supplied with propulsive cilia [3]. A chemical processor to navigate a robot around obstacles in an arena was prototyped in [1]. This processor, however, required images of the whole experimental arena to be prepared by a human operator in an off-line mode. The first real-time BZ controller for a robot was designed and prototyped in [2]. In this case, a thin layer of BZ medium contained within a Petri dish was mounted onto a wheeled robot. The direction towards a source of stimulation was inferred, via an optical interface, from the 2D patterns of oxidation wave-fronts. Another example of a BZ robotic controller is the closed-loop control of a robotic hand with a thin-layer BZ reactor [44]. The closed loop is achieved with photo-sensors placed underneath the Petri dish, where the excitation of the BZ medium arises from the movement of the robotic fingers. The way the robotic fingers react, in turn, is controlled by a micro-controller receiving data from the photo-sensors. The developed hybrid system was able to deliver highly complex behaviour using just three sensors and three of the five fingers. Recently, the use of BZ gels was proposed to assemble millimetre-sized soft robots that exhibit photo-taxis while not using any other kind of device to move around [9]. Simulations of the chemistry, along with the mechanical motion of the gels, unveiled their capabilities. These worm-like gels can follow complicated routes based on different intensities of light, perform periodic movement resembling cilia and self-organise in groups.
Previously, the BZ medium has been utilised as an isolated system that needs specialised interfaces and data-processing tools. We propose a hybrid system where the chemical system provides information to conventional electronics that control the movement of the robotic system through a direct electrical connection. Thus, this is a further step towards the final goal of an autonomous next generation of soft robots. Previous prototypes of BZ controllers employed BZ in a Petri dish, which posed difficulties with the manipulation and portability of the prototypes [2,44]. Therefore, to overcome these difficulties, we decided to encapsulate a ferroin-catalysed BZ solution in a liquid marble. A liquid marble (LM) is a microlitre-scale liquid droplet encapsulated in a hydrophobic powder coating [4]. This approach enables us to transfer and manipulate the BZ LM controller without wetting the underlying surface. Our scoping studies showed that the BZ medium encapsulated in LMs exhibits 'classical' chemical excitation wave patterns, with mainly trigger waves observed [13]. The BZ medium has been reported to be sensitive to illumination.
Toth et al. [41] experimentally demonstrated that visible light of the appropriate frequency, in their case a HeNe laser with wavelength 632.8 nm, initiates oxidation in the ferroin-catalysed BZ reaction. Moreover, visible light of different wavelengths has been shown to initiate or inhibit the dynamics of the ferroin- or ruthenium-catalysed BZ medium due to the photochemical properties of the catalyst [33,43,23,14,20]. We also use a laser beam to stimulate BZ LMs on board a Zumo robot. The BZ LMs were mounted on and electrically interfaced with the Zumo robot [29]. The alternation in the dynamics of the reagents inside the BZ LM can be monitored potentiometrically with two iridium coated stainless steel electrodes. Several studies that used the electrical potential to monitor the oscillations in a BZ system have previously been published [8,14,28]. The robot is attractive in its simplicity of design and control. It has been used previously in studying route-following by klinokinesis, inspired by the navigation skills of desert ants [24], a randomised algorithm mimicking biased lone exploration in roaches [21], and a self-optimisation procedure in a line-tracing application using an evolutionary computing algorithm [45].
Stock solutions of 1 M malonic acid and 1 M NaBr were prepared by dissolving 1 g of each in 10 ml of deionised water. In a 50 ml beaker, 0.5 ml of 1 M malonic acid was added to 3 ml of the acidic NaBrO3 solution. 0.25 ml of 1 M NaBr was then added to the beaker, which produced bromine. The solution was set aside until it was clear and colourless (ca. 3 min) before adding 0.5 ml of 0.025 M ferroin indicator. BZ LMs were prepared by pipetting a 75 µL droplet of BZ solution from a height of ca. 2 mm onto a powder bed of PE, using a method reported previously [13]. The BZ droplet was rolled on the powder bed for ca. 10 s until it was fully coated with powder. For the initial experiments, which aimed to establish the electrical potential outputs of a BZ LM, a LM was placed in a Petri dish and pierced with two iridium coated stainless steel electrodes (Fig. 1a). For experiments investigating the electrical potential of a BZ LM stimulated with a laser, sub-dermal needle electrodes with twisted cables were used (SPES MEDICA SRL, Via Buccari 21, 16153 Genova, Italy). Electrical potential outputs were recorded with an ADC-24 high-resolution data logger (Pico Technology, St Neots, Cambridgeshire, UK), sampling every 10 ms. BZ LMs were mounted on the robot by rolling the LMs into plastic holders, which were subsequently attached to the robot (Fig. 1b), and the LMs were then pierced with two iridium coated stainless steel electrodes (Fig. 1c). The robot used was a Zumo robot [29], an off-the-shelf solution. The robot is developed as an Arduino shield to provide a convenient interface with its controller. The algorithm that governs the trajectory of the robot is loaded on the Arduino board, and the electronics necessary to power the motors are accommodated on the robot shield (Fig. 1d). Light stimulation was performed using a green laser pointer, wavelength 532 nm, 5 mW, for ca. 10 s (Fig. 1e). As previously reported [41], the reduced form of the catalyst in a ferroin-catalysed BZ medium shows an absorption peak at 510 nm. The chosen wavelength of 532 nm is therefore reasonably close to this peak and has a significant impact on the dynamics of the reagents. A human operator illuminated the BZ LM with a laser pen from a distance of approximately 20 cm.
Using a FLIR ETS320 thermal camera with 0.06 °C resolution, we found that the illumination does not lead to a substantial increase in temperature in the marble (even illumination for over 30 s causes just a 0.2 °C increase). For the on-board recording, an analogue-to-digital converter (ADS1118, Texas Instruments Incorporated) was used, because the Arduino alone could read only positive values of an electrical potential and its resolution was limited to 4.9 mV. With the converter, negative values could be recorded and a higher resolution (down to 0.2 mV) was achieved. The on-board recordings were saved onto an SD card attached to the Arduino and started 3 s after the activation of the robot due to initialisation procedures. The robot is programmed with a simple algorithm that manipulates its moves in a constant way; however, this does not limit its capabilities. For illustration, in the experiments executed in this study the algorithm dictates that the robot move 1.2 cm forward and turn in either direction at an angle of 3 degrees.
Results
As the oxidation wave-fronts travel within the BZ LM, an oscillating electrical potential is observed at the electrodes. The dynamics of the wave-fronts and, thus, the oscillating potential change in response to the LM being illuminated by a laser. More specifically, one case studied was when the LM had a potential that oscillated around a negative value and was exposed to a laser beam while at the higher point of the oscillation in the positive region (Fig. 2(a)). The response was inhibition of the oscillating output and a decrease in the oscillation amplitude, as seen in Fig. 2(a). Another case was a sudden drop of potential with no significant change in the oscillation characteristics (Fig. 2(b)). Given the aforementioned observations of the effects the laser beam causes, we developed the algorithm that navigates the robot by taking potential values from the BZ LM as follows. The algorithm, loaded onto the Arduino board connected to the Zumo robot, reads the output from the BZ LM; if the value is positive the robot turns left, whereas if the value read is negative the robot turns right. In order to avoid movement when the potential output of the BZ LM is too low, a condition that the absolute value must be higher than 1 mV was introduced. The electrical potential of the BZ LMs is read every 2 seconds and logged on an on-board SD card for further investigation. (A sketch of this decision rule is given below.)
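The on-board implementation runs as a program on the Arduino; the following minimal Python sketch of the same decision rule is for illustration only, with stand-in functions in place of the ADS1118 reading and the Zumo motor commands:

import time

THRESHOLD_V = 0.001           # ignore potentials with |v| below 1 mV
STEP_CM, TURN_DEG = 1.2, 3    # motion primitives used in the experiments

def read_potential() -> float:
    # Stand-in for the ADS1118 reading of the BZ liquid marble (volts).
    return 0.006

def drive(turn_deg: float, step_cm: float) -> None:
    # Stand-in for the Zumo motor commands.
    print(f"turn {turn_deg:+.0f} deg, advance {step_cm} cm")

for _ in range(10):           # one decision every 2 s, as in the experiments
    v = read_potential()
    if abs(v) > THRESHOLD_V:
        # Positive potential -> left turn; negative potential -> right turn.
        drive(+TURN_DEG if v > 0 else -TURN_DEG, STEP_CM)
    time.sleep(2)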
To aid comprehension of the results drawn from the robot experiments, the figures that follow are encoded as described here. The asterisks represent a positive potential value of the BZ LM and, hence, a left turn of the robot. Respectively, the squares in the graphs represent a negative potential value read and, hence, a right turn of the robot. The circles represent a value lower than the minimum threshold, which does not dictate any movement by the robot. The dashed vertical lines mark the times when the laser beam stimulating the BZ LM was switched on, and the solid vertical lines when the laser beam was switched off. The x-axis is the time in seconds and the y-axis is the voltage amplitude of the BZ LM in volts.
For the first experiment involving the robot, there was no stimulation with the laser beam. The potential output and the movement of the robot are depicted in Fig. 3. The potential output oscillates around zero; thus, the robot turns either to the right or to the left. Given that sampling points are equally distributed between negative and positive values, the robot moves roughly in a given direction. The second experiment with the robot was executed with the interaction of the BZ LM with a laser beam. As illustrated in the results from that experiment (Fig. 4), the laser beam alters the normal oscillation of the BZ LM (as depicted in Fig. 3). The first point of stimulation (at the 10th second) hinders the oscillation and keeps the potential values in the positive region; as a result the robot keeps turning left. The second moment of stimulation (at the 32nd second) reactivates the oscillation around zero and thus forces the robot to swing its way along a generally straight direction. However, the two remaining stimulations with the laser do not seem to have a detectable effect on the output potential of the BZ LM. The results of the third experiment are featured in Fig. 5. Despite the fact that all the incidents of stimulation with the laser beam have a clear effect on the oscillation and the short-term amplitude of the potential, the robot moves only by turning left. The robot is actually working its way around a circle (anticlockwise), because the potential of the BZ LM never reached negative values, possibly due to repeated initiation of oxidation wave-fronts by laser illumination. For the final experiment, the results are depicted in Fig. 6. Here, the output was initially oscillating within negative values. After the first stimulation with the laser beam, the potential output increases steadily and reaches positive values; as a result the robot stops moving in a clockwise direction and starts an anticlockwise turn. The oscillation is now around zero. However, the second stimulation hinders the oscillation, with values of electrical potential remaining positive for longer, and thus the robot moves in an anticlockwise turn once more.
The electrodes penetrate the BZ LM and the plastic container on board the robot; consequently, the BZ LM is not able to move freely in the plastic container. Given that the oscillation period of the potential is similar in experiments without movement (Fig. 2) and with movement (Figs. 3 to 6), the vibrations from the robot seem not to be enough to characterise the LM as a well-mixed system. As a result, the BZ LM can be considered a distributed-parameter system with local concentration gradients. All the electrical potential values saved on the on-board SD card of the Arduino system were aggregated and investigated. The resulting data set was used to produce the histogram presented in Fig. 7. Moreover, a normal distribution fitted to the frequencies of each bin of electrical potential is plotted in Fig. 7. The mean value is 0.006 V and the standard deviation 0.0159 V. Thus, assigning left or right turns to values around zero (which is close to the mean of 0.006 V) provides an almost evenly distributed motion towards both sides.
Discussions
This work demonstrates that the BZ reaction can be directly incorporated into the electronic circuitry of a controller for a robot. Limitations imposed by earlier prototypes of liquid-phase controllers, where robots were restricted to forward speeds of ca. 1 cm/s and rotation speeds of ca. 1 degree/s [2], were alleviated.
The additional benefits of the BZ LM system were that no optical interfaces were required to monitor the BZ LM controller and the geometries of the oxidation wave-fronts no longer needed to be analysed. Hardware and software used in previous versions of the robot [2,44] (a light placed underneath the reaction contained within a Petri dish, a serial connection to a PC and image-processing algorithms) are not necessary, as the BZ LMs are electrically connected to the micro-controller that delivers the trajectory of the robot. This reduction in the complexity of the controller system shows progress towards future unconventional and soft robotics. Encapsulating the BZ solution droplets in hydrophobic powder to form LMs made the controllers re-configurable. In principle, it would be possible to mount as many BZ marbles as desired on board a robot and allow the LM ensembles to process information about the local environment and potentially make decisions based on the fusion of many stimuli. The properties of LMs can be tailored for a variety of applications by altering the encapsulated liquid and/or the powder coating [6,5,12,25,27,30]. This means LMs can be prepared so that they can be manipulated using electrical and magnetic fields, in addition to mechanical manipulation. Thus, robotic BZ LM controllers can be reconfigured in flight, while the robot is in motion.
It is noteworthy that the implementation proposed here is not an ideal, ready-to-use solution. This is an initial study towards the control of robots with chemical reaction-diffusion systems through an electrical connection. It is a contribution towards important improvements in the prototyping of wet robotics bearing complex dynamics. The exact behaviour of the chemical system is difficult to predict and to manipulate, as seen in the results from the experiments. Consequently, the reproducibility of the results and the precise manipulation of the potential's oscillation were not extensively analysed in the context of this study but remain aspects of future work. Finally, the short duration of the experiments with the BZ LM mounted on the robot is not due to reagent depletion, but was chosen to illustrate more clearly the reaction of the marble to laser illumination.
2019-03-25T12:34:57.000Z
2019-03-25T00:00:00.000
{ "year": 2019, "sha1": "1103d914c00ec17680f2aaf2b78f73b34bd8c495", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1904.01520", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "bf8455246318a1dc4ebeb9f715a20e798aa57188", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Computer Science", "Materials Science", "Engineering" ] }
229175468
pes2o/s2orc
v3-fos-license
Use of GlideScope in Patients Undergoing NIM Thyroidectomy
Objectives: Thyroidectomy and parathyroidectomy using the nerve integrity monitor (NIM) require proper placement of the endotracheal tube with electrodes aligned correctly within the larynx. The purpose of this study is to determine the percentage of patients who require positional adjustments of the endotracheal tube prior to beginning surgery and to understand the value of using the GlideScope to assure proper NIM tube placement within the larynx. Methods: This prospective study examines operative data from 297 patients who underwent NIM thyroidectomy and parathyroidectomy. After routine orotracheal intubation by an anesthesiologist and positioning of the patient for surgery, a GlideScope was used to check the position of the tube in 2 planes: depth of tube placement and rotation of the tube within the larynx, assuring proper placement of the electromyogram electrodes within the glottis. Results: Tube adjustment was required for 66.5% of patients. In 48.1% of cases, tube retraction or advancement to a proper depth was needed. Tube rotation was required for 30.1% of patients, and 11.8% of patients required both adjustment of tube depth and tube rotation to properly align electrodes. Conclusions: After the anesthesiologist places the NIM endotracheal tube and the patient is positioned for surgery, additional tube adjustment is often needed prior to the start of surgery. The GlideScope is readily available in the operating suite, its use adds little time to the procedure, and it assures proper NIM tube placement. The use of the GlideScope is recommended.
Introduction
Recurrent laryngeal nerve (RLN) injury is one of the most devastating complications of thyroid surgery and can impact quality of life, social and occupational function, as well as swallowing and breathing. The incidence of permanent RLN injury is low and varies from 0.5% to 5% of patients, whereas transient injuries have been documented in 1% to 30% of cases. [1][2][3] In performing thyroidectomy, the gold standard is visualization of the RLN. The nerve integrity monitor (NIM) is thought to serve as a valuable adjunct. 4 In a study of 29 998 nerves by Dralle et al, the dissections were divided into 3 groups: a first group with no RLN identification, a second group with visual identification of the nerve, and a third group with visual identification plus NIM. They found that visualization of the RLN is the gold standard and that there is no statistical difference between nerve visualization with or without monitoring. 5 A 2011 meta-analysis of 64 699 nerves also found no statistical difference in the rate of true vocal cord paralysis while using NIM versus RLN identification without monitoring. 6 Studies conducted in more recent years have reached a similar conclusion and have found no statistically significant improvement in outcomes while using NIM. [7][8][9] However, the data and potential benefits remain controversial. Despite a lack of evidence to support routine use of the NIM, RLN monitoring is valued by many surgeons for its potential benefits of faster and more reliable nerve identification, assistance in dissection, and verification of RLN function postoperatively. 4 Thus, its use is becoming more prevalent. A study conducted in 2019 showed a marked increase in intraoperative NIM use, with 83% of participants reporting use in some or all cases, compared with a 2007 study that revealed only 44.9% use. 10,11
Although NIM thyroidectomy is increasing in the United States and around the globe, protocols for equipment set-up and use vary greatly between providers. A guideline statement was published in 2010 to improve the quality of intraoperative neuromonitoring by encouraging uniformity in equipment set-up, endotracheal tube placement, and intraoperative problem-solving. 4 One of the biggest issues for NIM reliability and function is the lack of a standardized approach for proper endotracheal tube placement. When positioned properly, the recording electrodes on the endotracheal tube make contact with the medial surface of the vocal cords to monitor RLN integrity. However, after intubation, tube dislocation occurs in up to 69% of patients while placing the patient in neck extension. 12 During patient positioning, the endotracheal tube can be displaced up to 2.1 cm inward and 3.3 cm outward. 13 Incorrectly positioned electrodes are the most common cause of equipment failure and can lead to dysfunction of the monitor, unreliable monitor feedback, and an increased risk of RLN injury. 14 Because of this, many experts suggest a standard protocol for tube placement and readjustment after the patient is positioned for surgery. 4 For verification, it has been suggested that video laryngoscopy be utilized to check the tube position as it provides a superior glottic view. 12,15 The GlideScope is a video laryngoscope that allows video visualization of the placement of the endotracheal tube. The GlideScope has proven to be effective both in primary intubation and in management of the difficult airway in adults and children, and has been shown to improve the glottic view when compared with traditional laryngoscopes. [15][16][17] Thyroidectomy and parathyroidectomy using the NIM require proper placement of the endotracheal tube with the electrodes aligned correctly within the larynx. The purpose of this study is to determine the percentage of patients who require endotracheal tube readjustment prior to beginning surgery and to understand the value of using the GlideScope to assure proper NIM tube placement within the larynx.
Materials and Methods
This study was performed under a claim of exemption by the Investigational Review Board at the University of Mississippi Medical Center and conforms to recognized ethical standards for research (IRB #FWA00003630). Data were taken from 297 operative notes of patients who underwent NIM thyroidectomy or parathyroidectomy. At the start of each case, the anesthesiologist performed a direct laryngoscopy and intubation with the NIM endotracheal tube. The patient was positioned in neck extension for surgery, and the GlideScope was used to visualize the position of the endotracheal tube and recording electrodes within the larynx. If repositioning of the endotracheal tube was necessary, the surgeon recorded the adjustments in 2 planes: tube advancement/retraction and rotation of the tube. Once proper positioning of the endotracheal tube electrodes was confirmed with the GlideScope, the patient was prepped and the procedure commenced.
Results
Patient ages ranged from 13 to 90 years, with a mean of 55.6 years. Men comprise 21.5% of the patient sample and women make up 78.5%. A total of 297 surgeries by an otolaryngologist-head and neck surgeon make up our surgical sample, with 38 parathyroidectomies and 259 partial and total thyroidectomies.
Most of the patients in this study were intubated with a traditional laryngoscope; however, a small subset of patients were intubated with a GlideScope, and their data are listed separately. For both groups, the GlideScope was used to confirm the tube position after initial intubation and patient positioning. Among the 272 patients intubated with a traditional laryngoscope, NIM tube adjustment was required for 66.5%. A total of 45.6% of patients required tube retraction, ranging from 0.5 to 4 cm with an average displacement of 1.34 cm needed to achieve the appropriate depth, and 2.5% of patients required tube advancement, ranging from 0.5 to 2 cm with an average displacement of 1.18 cm. Rotational adjustment was needed in 30.1% of patients to achieve optimal orientation. Among these patients, counterclockwise rotation was needed in 40.2% and clockwise adjustment was required for 59.8%. A total of 11.8% of patients required both tube advancement/retraction and rotational adjustments prior to surgery. Among the 25 patients initially intubated with a GlideScope, 56% required NIM tube adjustment after the patient was positioned in neck extension; 44% of these patients required tube retraction and none required tube advancement to achieve an appropriate depth. On average, these patients required 0.93 cm of retraction (ranging from 0.75 to 1.25 cm). Rotational adjustment was required for 20% of patients. Among these patients, counterclockwise rotation was needed in 40% and clockwise adjustment was required for 60%. A total of 8% of patients required both tube retraction and rotational adjustments prior to surgery.
Of the 297 patients in this series, 3 patients were lost to follow-up or moved out of state. One patient underwent a right thyroid lobectomy and suffered a left vocal fold paresis secondary to endotracheal cuff injury, which later resolved. Another patient, with a 19 × 18 cm thyroid mass, had inadvertent severance of the left RLN, which was recognized and repaired intraoperatively. There were 5 instances of transient vocal fold paresis, which all resolved spontaneously. The remaining patients had normal vocal fold function at their first postoperative follow-up examination.
Discussion
The use of the NIM system is fraught with several potential issues regarding the equipment set-up. Most of these issues relate to the position of the endotracheal tube electrodes. This type of issue has been reported in 3.8% to 23% of patients undergoing NIM thyroidectomy. 4 It is believed that a right-handed anesthesiologist tends to rotate the tube clockwise by approximately 30° when intubating the patient. 4 This rotational error would require a counterclockwise rotation to properly align the NIM tube electrodes within the larynx. However, only 40.2% of our patients required counterclockwise tube correction. Additionally, in a study by Lu et al, among patients requiring tube adjustment after intubation, about 50% required tube advancement and 50% required retraction. 14 However, in the current series, instances of tube retraction were far more common than tube advancement. The difference between adequate NIM tube placement and optimal NIM tube placement in terms of reliability of the NIM system is not known. However, 2 separate studies revealed significantly decreased EMG amplitudes when the endotracheal tube was displaced by depth and rotation, which raises concern for a lack of reliability when the tube is malpositioned. 18,19
In the senior author's practice, the GlideScope has been used for this purpose only for the past decade. However, in several previous series of thyroidectomies where tube position was not verified, there were few instances of NIM system failure related to tube placement. 14,20 The NIM endotracheal tube has a long area wired for laryngeal contact, and it may be that this "sweet spot" allows the NIM system to function with tube placement that is not always optimal. Even within the small subset of patients initially intubated with the GlideScope, repositioning of the endotracheal tube was frequent. Thus, it seems that following initial intubation with either a traditional laryngoscope or the GlideScope, the position of the endotracheal tube needs to be rechecked once patient positioning for surgery is completed. The true advantage of GlideScope verification and optimization of NIM tube placement is that it eliminates tube placement from the list of potential causes of a nonfunctioning nerve monitor, which may be encountered during thyroidectomy.

Conclusions
After initial intubation and patient positioning, NIM tube displacement occurs frequently. The use of the GlideScope permits optimal placement of the NIM tube prior to initiation of thyroidectomy. It promotes communication between the anesthesiologist and the otolaryngologist-head and neck surgeon to assure optimal tube placement. Confirming NIM tube position with the GlideScope also eliminates tube dislocation as a possible reason for a nonfunctioning NIM system. It is noninvasive, inexpensive, and usually requires only 1 to 2 minutes to verify tube placement and adjust the tube as needed. The use of the GlideScope in this situation is recommended.

Authors' Note
Preliminary data were presented as a poster at the Triological Society's Annual Meeting; April 10-13, 2013; Orlando, FL. Level of Evidence: 4.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Measuring salivary mesotocin in birds - Seasonal differences in ravens' peripheral mesotocin levels

Oxytocin is involved in a broad array of social behaviours. While saliva has been used regularly to investigate the role of oxytocin in social behaviour of mammal species, so far, to our knowledge, no-one has tried to measure its homolog, mesotocin, in birds' saliva. Therefore, in this study we measured salivary mesotocin in common ravens (Corvus corax), and subsequently explored its link to three aspects of raven sociality. We trained ravens (n = 13) to voluntarily provide saliva samples and analysed salivary mesotocin with a commercial oxytocin enzyme-immunoassay kit, also suitable for mesotocin. After testing parallelism and recovery, we investigated the effect of bonding status, sex and season on mesotocin levels. We found that mesotocin was significantly more likely to be detected in samples taken during the breeding season (spring) than during the mating season (winter). In those samples in which mesotocin was detected, concentrations were also significantly higher during the breeding than during the mating season. In contrast, bonding status and sex were not found to relate to mesotocin detectability and concentrations. The seasonal differences in mesotocin correspond to behavioral patterns known to be associated with mesotocin/oxytocin, with ravens showing much more aggression during the mating season while being more tolerant of conspecifics in the breeding season. We show for the first time that saliva samples can be useful for the non-invasive determination of hormone levels in birds. However, the rate of successfully analysed samples was very low, and collection and analysis methods will benefit from further improvements.

Introduction
Behavioral endocrinology has long depended mainly on blood sampling techniques, which have advanced the knowledge in the field tremendously. However, if performed on untrained subjects, such invasive techniques are often associated with elevated stress levels and welfare issues, and can consequently lead to immediate modulation/change in hormonal concentrations. The consequent need for non-invasive methodologies has put forward alternative approaches such as quantifying hormones out of faeces, urine, saliva, milk, hair or feathers (reviews: Behringer and Deschner, 2017; Palme, 2019). In bird studies the most commonly selected matrices for hormone analysis are blood or droppings. Both techniques have their advantages and disadvantages: while the former entails the before-mentioned invasive approach, it does allow the detection of almost immediate hormonal changes. And while the latter is non-invasive, it does create a time-lag between the hormone release into the blood and the appearance of the hormone metabolites in the droppings, which results from gut passage time (Palme, 2019). Another limitation is that certain hormones, such as oxytocin (and its homologues), or their metabolites, cannot easily be measured in faeces. Consequently, the choice of the appropriate matrix depends on the hormone to be measured, the temporal scale of the research question and the feasibility of sample collection. So far, scientific progress in ornithology has been limited by the lack of non-invasive techniques able to detect immediate or short-term changes in hormone concentrations.
Oxytocin (OT) is a nonapeptide that exhibits a broad range of central and peripheral effects, from the modulation of neuroendocrine reflexes to complex social behaviours (Gimpl and Fahrenholz, 2001). Depending on the species, the latter include social bonding, trust, maternal and alloparental care, cooperation, consolation, outgroup derogation and sexual behaviour (reviews: De Dreu and Kret, 2016; Goodson, 2013; Quintana and Guastella, 2020; Walum and Young, 2018). Whereas OT occurs mainly in mammals, a homologous form of it, mesotocin (MT), occurs in birds and reptiles. MT has so far, however, been much less investigated than OT, at least when it comes to measuring peripheral concentrations. Manipulation studies administering MT or OT antagonists (Duque et al., 2018, 2020; Kelly, 2019), or studies investigating neural substrates with immunohistochemistry (Goodson, 2013), however, do suggest that, similar to OT in mammals, MT plays an important role in social behaviour in birds. Nevertheless, there is not much information available on naturally occurring peripheral MT levels in birds and how they relate to social behaviour in natural contexts. The first two goals of the present study are therefore to test the feasibility i) of training birds to voluntarily provide saliva samples and ii) of quantifying salivary MT in those samples. Since saliva samples allow a non-invasive assessment of the endocrine response to certain stimuli with a delay of only a few minutes, this would open up a suite of opportunities to examine the role of MT in birds more directly. To do so, we tested common ravens (Corvus corax), which are well-suited for that endeavour: because of their relatively large body size we expect them to produce more saliva than a small bird species, and their large beak facilitates saliva collection. Further, ravens have already been shown to be successful in an exchange paradigm (Massen et al., 2015; Müller et al., 2017), which could be used to collect saliva samples by exchanging swabs for a reward. Moreover, they have a relatively complex social life, with individuals staying in non-breeder groups until they form pair bonds (Boucherie et al., 2019), and show marked seasonality with regard to breeding and mating, with the latter being accompanied by high levels of aggression (Braun and Bugnyar, 2012; Gwinner, 2003). In particular, in the present paper we make a clear distinction between the mating season, in winter, in which the birds are sexually highly active, and the breeding season, in spring, which we refer to as the time period in which the birds usually lay eggs and care for their offspring. Our third goal is, therefore, iii) to investigate whether salivary MT levels differ seasonally and/or are associated with the ravens' bonding status. In a comparative study on different sparrow species, Goodson et al. (2012) concluded that an increase in MT innervation in certain parts of the brain is important for flocking and may reduce aggression. Moreover, it was shown that the MTergic system is involved in reproductive behaviour (e.g. incubation) in Thai hens (Gallus domesticus) (Chokchaloemwong et al., 2013; Sinpru et al., 2017). We, therefore, expected ravens' MT levels to be higher in the breeding season (spring) than in the mating season (winter). Nonapeptides have also been shown to be involved in pair bonding.
In zebra finches (Taeniopygia guttata) the administration of OT antagonists decreases pair formation (Pedersen and Tomaszycki, 2012) and influences pair maintenance behaviours (Kelly, 2019), and oxytocin-like receptors mediate pair bonding (Klatt and Goodson, 2013). We, therefore, expected pair-bonded ravens to have higher MT levels than group-living ones. Finally, nonapeptide effects are often sex-specific (Goodson, 2013; Kelly and Goodson, 2014), and MT levels seem to differ between males and females in some species. In White Leghorn chickens (Gallus domesticus), for instance, males have twice as much MT, at least in their neurohypophysis, as females (Robinzon et al., 1990). Hence, we investigated the potential effect of sex on MT levels in ravens and predicted MT levels in males to be higher than in their female conspecifics.

Animals and housing
The study was conducted on 13 ravens (7 males, 6 females; age: 2 to 6 years). They were housed at Haidlhof Research Station, Bad Vöslau, Austria. Towards the end of the study three pair-bonded subjects were transferred to Cumberland Wildpark in Grünau, Austria. Six ravens were kept in a mixed-sex non-breeder group (aviary ~210 m²) and seven were pair-bonded and kept in separate aviaries (~80 m² each). All aviaries consisted of several compartments with sheltered areas for weather protection. The subjects were fed twice a day (meat, dairy products, vegetables, fruits and cereals); water was available ad libitum.

Saliva collection
We trained the ravens to take a saliva swab (a ~2.5 cm long piece of a Salimetrics SalivaBio Children's Swab) into their beak, place it in their throat pouch where saliva accumulates, and return it on command. To achieve this, we followed a training protocol based on positive reinforcement (cf. Massen et al., 2015; see ESM). Saliva samples (max. three/subject/day) were collected opportunistically from whichever bird we could get them from on a given day, either before or at least 1 h after the birds were fed. All samples were stored in Salimetrics Swab Storage Tubes at −20 °C within 10 min after collection. Saliva samples were collected between May and June 2016 and in April and May 2017, representing the breeding seasons, and between November 2016 and January 2017, representing the mating season.

Hormone analysis
Salivary MT concentration was quantified using a commercially available enzyme-immunoassay (EIA) kit for oxytocin (Catalog No. K048-H1/H5, Arbor Assays, Michigan, USA), which has been used successfully to measure (salivary) OT in humans and other species (e.g. mice, Mus musculus (Ferrer-Pérez et al., 2019); dogs, Canis familiaris (Wirobski et al., 2021); gorillas, Gorilla gorilla gorilla (Leeds et al., 2018)). This assay has a cross-reactivity of 88.4% with MT and can therefore be used to measure MT. Saliva samples of the ravens were extracted following the instructions of the manufacturer and using the extraction solution provided in the kit. Briefly: to extract the samples, the swabs were centrifuged at 1600 g and 4 °C for 20 min. The supernatant was pipetted into an Eppendorf tube, diluted with extraction solution (1:1.5), vortexed and incubated for 90 min at room temperature. After centrifugation (1600 g at 4 °C for 20 min), the supernatant was pipetted into a glass tube and dried down under an N2 stream. Samples were then resuspended in 210 μl assay buffer and processed following the producer's EIA protocol. Final concentrations were corrected for the dilution factor. All samples were analysed in duplicates.
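To make the final correction step concrete, here is a minimal sketch of one plausible reading of "corrected for the dilution factor": back-calculating the concentration in the original saliva from the assay readout, assuming complete MT recovery through the extraction and dry-down steps. The input volumes are illustrative only, since the actual per-sample saliva volumes varied.

```python
# One plausible reading of the dilution correction (illustrative values;
# assumes complete MT recovery through extraction and dry-down).
assay_readout_pg_ml = 150.0  # concentration measured in the resuspension
resuspension_ml = 0.210      # 210 ul assay buffer after dry-down
saliva_ml = 0.050            # saliva recovered from the swab (assumed)

# Dry-down conserves the total amount of MT, so total pg = readout x
# resuspension volume; dividing by the original saliva volume gives
# the concentration in the saliva itself.
total_pg = assay_readout_pg_ml * resuspension_ml
saliva_conc_pg_ml = total_pg / saliva_ml
print(f"corrected concentration: {saliva_conc_pg_ml:.0f} pg/ml")  # 630 pg/ml
```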
We calculated the intra-assay coefficient of variation (CV) from the concentrations of duplicate aliquots. Samples with an intra-assay CV above 20% were excluded from our analysis, and the mean CV of duplicates of all remaining samples was 5.9 ± 4.8% (mean ± SD). Inter-assay CVs were calculated by comparing the optical density (OD) of two standards (i.e., at the high and low range of the curve) between assays and were 10.3% and 11.4%, respectively. The detection limit reported by the manufacturer was 22.9 pg/ml. Prior to analysing the individual samples, we successfully conducted a serial dilution of pooled raven saliva in triplicates (1:1 up to 1:8 dilution) to exclude any possibility of matrix effects (Table 1). The CV of corrected values of the serial dilution was 3.58%. Further, we tested the recovery of known amounts of MT in saliva samples. Recoveries were obtained by spiking a total of 16 aliquots of 50 μl of pooled raven saliva with two concentrations of standard mesotocin (Arbor Assays, Cat. no. X127) and calculating the recovery with respect to the concentration of unextracted standards, after correcting for the concentration measured for the pooled saliva. Average recoveries for standards of 2000 pg/ml and 1000 pg/ml were 119% and 94%, respectively.

Statistical analysis
Since the MT concentration of many samples fell below the assay's detection limit, we first fitted a binomial model (glmer) to investigate whether the detectability of MT within the samples depended on the ravens' bonding status, sex, season and/or the saliva sample volume (prior analysis showed that sample volume correlated negatively with MT concentrations; see below). This model also comprised subject as a random intercept effect and was compared to the null model including only this random effect. Subsequently we investigated the effect of the same factors on the ravens' salivary MT concentrations (log-transformed). We computed a linear mixed-effects model (lmer), which included bonding status, sex and season as fixed effects, sampling time since sunrise as an offset, and subject as a random intercept effect. This main-effects model was compared to the null model, which included only subject as a random intercept effect and time since sunrise as an offset. We visually inspected whether the model residuals were normally distributed and homogeneous. We detected no multicollinearity issues (max. variance inflation factor = 1.617). Effect sizes were estimated via partial omega squared. Prior to constructing the models, we found that sample volume correlated negatively with MT concentrations (n = 20 samples, r² = −0.49, p = 0.027). Since including saliva volume as a fixed effect in our main-effects model resulted in singularity issues, we decided not to include this factor in the final model. Instead, we ran a post-hoc analysis, which indicated that sample volume was not driving our main results (see ESM, Post-hoc analysis, Table S1). Statistical analyses were conducted in R (version 3.5.2) (R Core Team, 2018). Further details and R packages are reported in the ESM.
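The models above were fitted in R with lme4 (glmer/lmer). As a rough illustration of the concentration model's structure, with subject as a random intercept and season, sex and bonding status as fixed effects, the sketch below fits an analogous linear mixed-effects model on simulated data using Python's statsmodels. The column names and values are hypothetical, and the time-since-sunrise offset of the original specification is omitted for brevity, so this is a sketch of the model form rather than a reproduction of the analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data (hypothetical values): 8 subjects, 4 samples each.
rng = np.random.default_rng(0)
n_subjects, n_per = 8, 4
df = pd.DataFrame({
    "subject": np.repeat([f"s{i}" for i in range(n_subjects)], n_per),
    "season":  rng.choice(["breeding", "mating"], n_subjects * n_per),
    "sex":     np.repeat(rng.choice(["m", "f"], n_subjects), n_per),
    "bonded":  np.repeat(rng.choice(["yes", "no"], n_subjects), n_per),
})
# Simulated log-transformed MT with a seasonal effect, mirroring the finding.
df["log_mt"] = (6.0 + 0.6 * (df["season"] == "breeding")
                + rng.normal(0, 0.3, len(df)))

# Fixed effects for season, sex and bonding status; random intercept per
# subject (analogous to lme4's (1 | subject) term).
fit = smf.mixedlm("log_mt ~ season + sex + bonded",
                  df, groups=df["subject"]).fit()
print(fit.summary())
```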
Results
We collected 151 saliva swabs (n = 13 subjects; mean 11.62 ± 2.45 SEM samples per subject). The saliva volume collected after centrifugation of the swabs ranged between 2 and 200 μl, with an average of 30.36 μl ± 37.70 SD. 73 swabs did not contain saliva, 7 contained less than 5 μl and, thus, only the remaining 71 samples which contained at least 5 μl saliva volume (n = 11 subjects) were analysed. MT could be detected in 28 samples (n = 9 subjects), but only samples with a duplicate CV below 20% were considered in the statistical analysis. Consequently, for the MT concentration model we had 20 samples (n = 7 subjects; see ESM: Table S2), whereas for the MT detectability model we had data on 62 samples (n = 10 subjects).

Detectability of MT
Overall, we found a clear effect of season (Fig. 1a) and saliva volume on the detectability of MT (binomial full-null model comparison: ΔAIC = 4.147, χ² = 12.147, df = 4, p = 0.016). MT was less likely to be detected in samples collected during the mating season (mean probability = 0.23 ± 0.17 SD) and in samples of low volume (not detected: mean = 28.00 μl ± 27.95 SD) than in samples collected during the breeding season (mean probability = 0.53 ± 0.12 SD) and in samples of high volume (detected: mean = 40.40 μl ± 46.84 SD; Table 2).

Effects of bonding status, sex and season on MT levels
Season also had an effect on the ravens' salivary MT levels themselves (main-effects-null model comparison: ΔAIC = 5.372, χ² = 11.372, df = 3, p = 0.010), with lower MT concentrations occurring during the mating season (mean = 355.44 pg/ml ± 226.75 SD) than during the breeding season (mean = 812.22 pg/ml ± 471.83 SD; p = 0.015; Fig. 1b, Table 2 and ESM: Table S2). Neither the subject's bonding status nor sex had a significant impact on MT concentrations (Table 2).

Discussion
In the present study we could show i) that it is possible to collect saliva from birds in a non-invasive manner, based on the voluntary collaboration of the subjects with the experimenter, ii) that MT is detectable in ravens' saliva using a commercial enzyme-immunoassay, and iii) that ravens' salivary MT levels are linked with a biologically relevant parameter, i.e., mating vs. breeding season. This opens up new opportunities to study peripheral MT in birds. Measuring naturally occurring MT levels could be particularly useful in studies related to sociality or animal welfare and can easily be applied in research institutions and zoos. Although we were able to measure ravens' salivary MT levels, the effectiveness rate shows that some improvements are needed. To begin with, it was difficult to estimate the saliva volume soaked up by the swab during the ongoing collection procedure, resulting in many empty swabs. Additionally, we encountered constraints due to the enzyme-immunoassay's detection limit. In any immunoassay, low-volume samples need to be diluted to reach the required assay volume. This causes a reduction of the hormone concentration, which can fall below the assay detection limit. In the present study, samples of low saliva volume and samples that were collected during the mating season, during which MT concentrations were generally lower, were thus more likely to have a MT concentration below the detectable limit. Another issue was that several samples had a coefficient of variation (CV) of duplicates higher than 20% and hence could not be used for further analysis. Future studies, thus, should focus i) on increasing the saliva volume gained, by improving and intensifying the training of the birds as well as the training of the trainers/experimenters, and ii) on developing more sensitive assays. Improving the methodology is unquestionably crucial for future applications. The third goal of our study was to investigate whether we can detect biologically relevant differences in salivary MT levels.
We found that MT concentrations differed between the mating (winter) and breeding season (spring), but that there was no evidence for an effect of bonding status or sex. As expected, MT levels were higher in the breeding season, a time in which ravens are less aggressive than in the mating season (Braun and Bugnyar, 2012; Gwinner, 2003). A study on sparrow species suggests that an increase in MT innervation in certain parts of the brain plays an important role for flocking and might lower aggression (Goodson et al., 2012). Furthermore, studies on Thai hens (Gallus domesticus) suggest that the MTergic system is involved in reproductive behaviour (Chokchaloemwong et al., 2013; Sinpru et al., 2017). Therefore, MT in our ravens might have increased in preparation for breeding. Although the seasonal difference in MT concentrations in our ravens could be associated with social factors (related to aggression in the mating season or behavioral changes associated with reproduction in the breeding season), it could also result from abiotic environmental factors, such as temperature. Several studies show that the MT system is involved in thermoregulation (McConn et al., 2019; Robinzon et al., 1988). In chicks (Gallus domesticus), central injections of MT led to increased cloacal temperature and reduced water and food intake, suggesting that MT plays an important role in avian metabolism (McConn et al., 2019). Accordingly, changes in MT levels, like the ones predicted by the present study, may facilitate metabolic adaptations to the different seasons (accompanied by different environmental temperatures). The effects of nonapeptides are often sex-specific (Goodson, 2013; Kelly and Goodson, 2014), and there is evidence for differences in nonapeptide levels between males and females in birds (Robinzon et al., 1990). However, in our study MT levels did not differ between the sexes. Neither did we find evidence for MT levels to differ between pair-bonded and group-living ravens. These results parallel a recent study in another corvid species, pinyon jays (Gymnorhinus cyanocephalus), which showed that intranasally administered MT neither affects the formation of pair bonds nor their maintenance (Duque et al., 2020), although administration of MT in the same species has been found to increase pro-social food donations (Duque et al., 2018). However, the effects of administered hormones rely heavily on the available receptors. Studies on zebra finches (Taeniopygia guttata) showed that oxytocin-like receptors do in fact mediate pair bonding (Klatt and Goodson, 2013) and that the administration of OT antagonists decreases pair formation (Pedersen and Tomaszycki, 2012). In the present study, it should be noted that the number of group-living subjects included in the model that investigated effects on MT concentrations was very low (n = 2), which may have hampered the detection of potential differences between them and the pair-bonded subjects. Moreover, the group-living subjects might already have started to form pair bonds within their non-breeder group, which might explain why their physiological activity resembled that of the pair-bonded ravens. Finally, it is important to keep in mind that we explored bonding status as a general factor and did not investigate the effect of social interactions in a way that would allow us to ascribe causality to it (cf. e.g. Lürzel et al., 2020).
To be able to study a causal link between MT and certain behaviours in a non-invasive way, we recommend specific sampling regimes as well as matched controls. The herein described procedure of salivary MT determination would allow for that more easily and precisely than existing methods, like analysing droppings. We do acknowledge that it is debated whether peripheral OT/MT concentrations reflect central concentrations, which affect the above described social correlates (Neumann, 2008). Recent studies, however, have identified a pathway through which peripheral OT can cross the blood-brain barrier and thereby affect social behaviour (Yamamoto et al., 2019; Yamamoto and Higashida, 2020). This possibly explains why many of the reported differences in peripheral OT/MT have been linked to social factors (Crockford et al., 2014). In line with this, Lefevre et al. (2017) found that in primates, simultaneously collected (peripheral) plasma OT and (central) cerebrospinal fluid OT correlated positively, suggesting that peripheral OT can be used as a proxy for central OT. (For further discussion about the relationship between central and peripheral OT levels and challenges for measuring OT, please see, among others, Grinevich and Neumann, 2021; Higashida et al., 2019; Lefevre et al., 2017; MacLean et al., 2019; Yamamoto and Higashida, 2020.) In sum, using positive reinforcement, birds can be trained to voluntarily provide saliva samples, from which biologically meaningful variation in MT can be analysed.

Table 2. Estimates, standard errors (SE), confidence intervals (CI), degrees of freedom (df), z-values (binomial model) or t-values, p-values, and partial omega squared (ω²) for each parameter of the (linear) mixed-effects models.

Ethical statement
This study complies with the Austrian Animal Experiments Act (§ 2, Federal Law Gazette No. 114/2012).
Neuro-Adipokine Crosstalk in Alzheimer's Disease

The connection between body weight alterations and Alzheimer's disease highlights the intricate relationship between the brain and adipose tissue in the context of neurological disorders. During midlife, weight gain increases the risk of cognitive decline and dementia, whereas in late life, weight gain becomes a protective factor. Despite their substantial impact on metabolism, the role of adipokines in the transition from healthy aging to neurological disorders remains largely unexplored. We aim to investigate how the adipose tissue milieu and the secreted adipokines are involved in the transition between biological and pathological aging, highlighting the bidirectional relationship between the brain and systemic metabolism. Understanding the function of these adipokines will allow us to identify biomarkers for early detection of Alzheimer's disease and uncover novel therapeutic options.

Fat as a Clinical Feature in Neurological Disorders
Neurological disorders are characterized by overall impairments in energy homeostasis and, uniquely, by their specific disease markers associated with the central nervous system (CNS). With the rapidly aging population, there is an urgent need to uncover novel approaches for preventing and treating neurological disorders. This requires complementing canonical central markers of disease with peripheral markers. Obesity is a risk factor for various neurological disorders, with an altered adipose tissue milieu as a common denominator. Recent studies highlight the role of peripheral organs, particularly the white adipose tissue (WAT), in shaping brain structure and function [1,2]. Understanding the role of adipose tissue in the development and progression of CNS disorders provides novel therapeutic approaches to treat the ever-increasing neurodegenerative population.

Alzheimer's disease (AD) is the primary cause of dementia, with age being the main risk factor [3]. The incidence rate doubles approximately every 5 years among individuals aged 65 years and older [4]. AD pathogenesis involves the accumulation of amyloid beta (Aβ) and tau protein, leading to the formation of amyloid plaques and neurofibrillary tangles, respectively [5]. While cognition and memory deficits are the primary clinical features of AD, body weight alterations are secondary features linked to disease progression. Midlife weight gain increases the risk of AD, with obesity-related brain changes resembling those seen in AD [6,7]. Conversely, late-life weight gain may protect against the onset and progression of AD [8,9]. However, most studies utilize body mass index (BMI) as a predictive measure, which does not accurately reflect regional adiposity [10]. Understanding the role of specific regional fat depots and their depot-specific secretome will allow better characterization of the neuro-adipose crosstalk in biological and pathological aging [11]. The conundrum in weight fluctuations underscores the importance of investigating the structure and function of WAT in relation to the prevention and treatment of AD.
Remodeling of Adipose Tissue Shapes Cognitive Function
The association between midlife weight gain and cognitive decline has shed light on the brain-body crosstalk in the regulation of brain health. Chronic overnutrition dramatically changes the metabolic profile of the body, altering levels of nutrients, metabolites, and hormones. In an obesogenic environment, the adipose tissue expands to compensate for the excess nutrients, recruiting immune cells and transitioning to an inflammatory milieu [12]. Excessive adiposity is a risk factor for cognitive decline due to a rewiring of whole-body metabolism [13]. The mechanisms of the brain-adipose axis in modulating cognitive function remain to be elucidated. A study screening human adipose tissue identified genes that are linked to cognitive performance [14]. Manipulation of these genes demonstrated improvements in cognitive performance in various models, including rodents and flies. In line with these results, the visceral adipose tissue (VAT) NLRP3 inflammasome impairs cognitive function through IL-1-mediated microglial activation [15]. The proposed mechanism involves VAT NLRP3 induction of IL-1β secretion from local macrophages, with elevated peripheral IL-1β triggering an inflammatory response in brain-resident immune cells. Whether peripheral IL-1β further recruits peripheral immune cells to the brain remains to be elucidated. Adipose tissue-derived mesenchymal stem cells (ADSCs) have been shown to improve both cognitive function and physical activity in aging mice [16]. However, in the context of obesity, ADSCs can be distinguished into two categories: pro-inflammatory and anti-inflammatory. The pro-inflammatory ADSCs can increase neuroinflammation by inducing proliferation and differentiation of CD4+ and CD8+ T cells [17]. Transplantation of ADSCs into a rat model of AD ameliorates cognitive impairments, enhances Aβ clearance, and suppresses apoptosis and neuroinflammation by modulating central and systemic SIRT1 levels [18,19]. Intraventricular injection of ADSCs has been shown to improve AD pathogenesis by modulating central cholesterol homeostasis [20]. A detailed review focusing on the role of ADSC-related therapy in CNS disorders is found elsewhere [21]. Adipose tissue-derived extracellular vesicles and microRNAs in obese patients and rodents cause synaptic loss and cognitive impairment [22]. These molecules enter the brain in a membrane protein-dependent manner and demonstrate a preference for neurons. In the obese brain, these molecules are enriched in the hippocampus and hypothalamus, likely contributing to the rewiring of higher-order processing and satiety regulation, respectively [22]. Diabetes mellitus induces abnormal expression of exosomal miRNA in multiple peripheral tissues with altered central exosome absorption [23]. One possibility is that an increase in exosomal miRNA flux into the brain leads to a loss of aquaporin-4 expression and redistribution in perivascular astrocyte endfeet, resulting in glymphatic dysfunction and cognitive decline [23]. Downregulation of these extracellular vesicles ameliorates cognitive impairments, thereby making them novel targets for pharmaceutical intervention to treat metabolically induced cognitive dysfunction. These studies support the role of a brain-adipose axis in the regulation of cognitive function, with alterations in the adipose tissue milieu as a driver of cognitive decline. Further investigation into the altered adipose tissue milieu in chronic overnutrition can help explain the
transition from biological to pathological aging and provide a link between obesity and neurological disorders.

Change in Adiposity across Biological and Pathological Aging
The increasing aging population has led to an unprecedented level of chronic metabolic and neurodegenerative diseases. Throughout aging, the body transitions from an anabolic state to a catabolic state, resulting in overall weight loss. Before such weight loss is manifested, most individuals experience an increase in adipose tissue mass due to overnutrition, reduced physical activity, and a lowered basal metabolic rate [24,25]. This weight gain throughout early to middle adulthood increases the risk of major chronic diseases and mortality [26]. Late-midlife adiposity is a proposed predictor of frailty in late life [27]. On the other hand, age-related weight loss can stem from various factors, including undernutrition, sarcopenia, and cachexia [28]. Importantly, weight loss in late life is linked to a heightened risk of mortality, irrespective of baseline weight [24]. Further research on the composition of the adipose tissue milieu throughout the transition from midlife weight gain to late-life weight loss will provide insight into its role in biological and pathological aging. Whether the initial transition of weight change stems from a central or peripheral root cause remains to be elucidated. A fundamental question that remains unanswered is whether weight loss itself exacerbates mortality or whether it is a consequence of pathological diseases that subsequently increase mortality.

While weight loss is a natural outcome of biological aging, excessive weight loss serves as a clinical manifestation of AD [29]. The weight loss seen in AD is attributed to multiple factors, including lower energy intake and higher resting energy expenditure [30]. Patients diagnosed with mild cognitive impairment (MCI) who experience weight loss have an elevated risk of developing AD [31]. Similarly, individuals with AD who undergo weight loss are at a heightened risk of disease progression, leading to increased morbidity and mortality [32]. There is a positive association between cognitive performance and sarcopenic obesity in older adults with AD [33]. The occurrence of late-life weight loss precedes the onset of cognitive decline in AD [34,35]. An association between late-life weight loss and the development of MCI is found independent of midlife body weight [36]. Interestingly, recent data suggest that both substantial late-life weight gain and weight loss are associated with a higher risk of dementia [37]. Hence, it is plausible that significant shifts in either direction of the body weight set point lead to a decline in brain health due to improper compensatory mechanisms. This raises the questions of whether an altered adipose tissue milieu precedes cognitive decline in both biological and pathological aging, and what peripheral messengers are involved in the brain-body crosstalk mediating this shift. One possible explanation that connects the systemic alterations seen in AD is dysautonomia, a state of autonomic nervous system dysfunction [38]. AD is characterized by cholinergic dysfunction, resulting in a chronic state of neuronal hyperactivity [39]. Levels of cholinergic receptor binding are inversely associated with the severity of dementia [40]. Excess sympathetic tone results in an overall catabolic state. For example, increased sympathetic innervation to the WAT leads to elevated lipolysis, with elevated free fatty acids resulting in ectopic
accumulation of lipids and contributing to a chronic inflammatory state (Figure 1). This explains the generalized wasting syndrome associated with AD [41]. The miscommunication between the brain and adipose tissue, resulting in autonomic dysfunction, is also reported in cancer-associated cachexia [42]. Further investigation into the somatosensory innervation of adipose tissue can assist in elucidating the neuro-adipose crosstalk in health and disease [43]. Though late-life BMI is inversely correlated with AD risk, future studies comparing late-life weight maintenance and weight gain are needed to assess the therapeutic efficacy of late-life overnutrition. Whether midlife weight gain or late-life weight loss is a greater risk factor for higher morbidity and mortality of neurological disorders needs clarification.

Adipokines as Mediators between the Brain and Body
As an endocrine organ, WAT consists of preadipocytes, mature adipocytes, mesenchymal cells, and various other cell types that constitute the stromal vascular fraction [44]. Hundreds of cytokines secreted from WAT have been identified and termed "adipokines". Adipokines have been implicated in several aspects of brain metabolism, such as leptin in food intake regulation and adiponectin in brain glucose metabolism [1]. Structural and functional alterations of WAT occur throughout biological aging and influence the quantity and type of adipokines secreted [45]. Aging is marked by an escalation in VAT, driven by chronic positive energy balance and a shift in lipid deposition from the subcutaneous to the visceral
fat depot [46]. The expansion of VAT is linked to an increase in proinflammatory adipokines and a decrease in anti-inflammatory mediators [45]. Both generalized and VAT are associated with reduced cognitive function, after adjustment for cardiovascular risk factors and vascular brain injury [13]. Interestingly, subcutaneous WAT and omental WAT are associated with different plasma markers and cerebrovascular health in severely obese patients [47]. While subcutaneous WAT demonstrates more crown-like structures, indicative of inflammation and hypertrophy, there is no association between subcutaneous WAT parameters and brain health. On the other hand, omental WAT is positively related to greater variability in CBF, indicating vascular and perfusion abnormalities, in the parietal lobe and nucleus accumbens [47]. Furthermore, aging is associated with the accumulation of ectopic fat, which is correlated with an increased risk of cardiometabolic disorders [48,49]. Increased ectopic fat accumulation is associated with cognitive impairments, decreased total brain volume, and increased lateral ventricle volume [50]. In rodents fed with a high-fat diet (HFD), visceral adiposity links cerebrovascular rewiring to cognitive impairments [51]. To sustain metabolic homeostasis, maintaining a balanced proportion of both total adiposity and regional distribution is crucial [52]. Central fat distribution and the relative loss of fat-free mass are more accurate than BMI in determining various health risks associated with biological and pathological aging [53]. Higher VAT metabolism is linked to greater brain Aβ burden in older subjects with cognitive impairment [54]. Future studies differentiating adipose tissue composition in AD will help explain the difference in adipokine profiles. Discerning both plasma and cerebrospinal fluid (CSF) levels of adipokines in biological aging compared to pathological aging will provide insights into the molecular mechanisms behind body weight alterations and systemic metabolism in AD. In this review, we explore the potential roles of key adipokines in the development and progression of AD. We aim to offer fresh insights on the involvement of adipokines in AD and unveil potential avenues for the discovery of novel therapeutic targets and diagnostic tools. Given the abundance of adipokines implicated in neurometabolism, we focus on those that are extensively researched.
Altered Adipokine Profile in Biological and Pathological Aging
Leptin is secreted in proportion to the amount of adipose stores in the body [55]. Its primary role is to communicate to the brain the status of peripheral energy storage levels, making it a critical adipokine for the maintenance of body weight and a hallmark of obesity. In addition to its metabolic roles, leptin is involved in the maintenance of a proper cerebral landscape. Both leptin- and leptin-receptor-deficient mice demonstrate remodeling of the cerebrovascular architecture and gliovascular unit [56,57]. Peripheral nerve regeneration requires Schwann cell leptin receptor signaling, indicating a role for leptin in myelination [58]. Administration of leptin in the leptin-deficient ob/ob mouse model ameliorates hypomyelination [59]. Additional studies are required to determine whether these benefits are independent of body weight changes. Leptin ameliorates AD pathology by targeting multiple steps in the Aβ cycle, including production, clearance, degradation, and aggregation [60-62]. Decreased plasma leptin levels are correlated with an increased risk of cognitive decline and AD [63]. Plasma leptin levels are inversely correlated with CSF Aβ, with lower plasma leptin levels indicating greater brain Aβ deposition [64]. Conversely, higher levels of leptin in the elderly are shown to be protective against cognitive decline, independent of comorbidities and body fat [65]. The decline in leptin levels in AD patients is correlated with the severity of dementia symptoms and changes in body weight [64,66] (Figure 2). A study indicates that higher plasma leptin levels are associated with a reduced risk of AD only in non-obese patients [65]. Additionally, higher leptin levels are associated with enhanced cognitive performance among normal-weight participants, while no such association was observed in overweight individuals [67]. A potential explanation is that the development of leptin resistance in obese patients during midlife diminishes the effectiveness of leptin in late life. Although weight loss is associated with increased leptin sensitivity, chronic overnutrition in midlife could lead to an irreversible state of leptin resistance. It remains unclear whether elevated leptin levels are actively involved in the development of obesity or are markers of metabolic dysfunction due to chronic overnutrition. Whether leptin levels and sensitivity change throughout biological aging independent of adiposity remains to be elucidated. Overall, these findings suggest that low leptin levels could serve as an early biomarker for identifying AD, and highlight the potential to leverage leptin as a therapeutic agent to treat AD pathology.

Adiponectin is involved in insulin sensitization and modulation of anti-inflammatory and antioxidant activities [68]. It has been shown to mediate myelination by reducing myelin lipid accumulation in myelin-laden macrophages, mitigating neuroinflammation [69,70]. Plasma adiponectin levels increase with age in both men and women [71]. Unlike many adipokines, the expression of adiponectin inversely correlates with adiposity [72]. Reduced levels of adiponectin are correlated with metabolic syndrome [73]. In young adults with metabolic syndrome, adiponectin levels are positively correlated with total macrovascular CBF. Paradoxical to plasma leptin levels, midlife obesity results in lowered plasma adiponectin levels, while late-life weight loss results in elevated adiponectin levels [74] (Figure 2). Interestingly, greater adiponectin levels in early life are associated with cardioprotective benefits; however, in midlife and late life, elevated levels are associated with poor cardiovascular outcomes [71]. In late life, elevated adiponectin levels are associated with reduced physical functioning and greater all-cause mortality [75]. Elevated serum adiponectin levels and insulin resistance are reported in AD patients compared to MCI patients and healthy controls [76,77]. This positive correlation supports the existing theory of adiponectin resistance in AD. This is further supported by correlations between adiponectin levels and dementia [78]. In MCI subjects, CSF adiponectin levels are associated with cortical glucose metabolism [79]. Additionally, plasma adiponectin levels are linked to the rate of cognitive decline and cortical thinning in Aβ (+) MCI [80]. Impaired central adiponectin signaling likely contributes to brain insulin resistance and the altered glucose metabolism seen in AD. Whether adiponectin directly targets known AD markers such as
Aβ and tau is unclear. Considering the isoforms of adiponectin, serum levels of different molecular weights have diverse implications for cognitive decline and AD. Future studies concentrating on quantifying adiponectin isoforms in biological and pathological aging will result in greater predictive value for AD diagnosis. It is unclear whether adiponectin signaling is impaired in these studies, reinforcing a need to measure adiponectin receptors and downstream signaling molecules in conjunction with insulin and insulin receptors, with a temporal component, to better pinpoint the development of adiponectin resistance. A hypothesis to explain the paradoxical effects of high adiponectin levels throughout aging is that elevated levels are likely a compensatory response to maintain redox homeostasis amid the systemic chronic inflammation occurring in biological aging [81]. This compensatory response is exacerbated in AD, where, in addition to inflammatory aging markers, greater levels of adiponectin are secreted in response to AD pathologies. Overall, the utilization of plasma adiponectin levels as an AD predictor remains premature due to the heterogeneity of clinical studies, likely attributed to the different timepoints of biological and pathological aging [82].

Plasminogen activator inhibitor-1 (PAI-1) is a serine protease inhibitor primarily linked to thrombosis and fibrinolysis, with other biological roles including regulation of cell migration, tissue remodeling, and angiogenesis [83]. Plasma levels of PAI-1 exhibit a positive correlation with weight gain, and elevated levels are associated with metabolic syndrome and obesity [84]. Considering that obesity is a main driving force for the incidence of stroke, alterations in PAI-1 levels and signaling are proposed as a key nexus that explains the mechanistic link between obesity and stroke [85]. This link can be extended to explain the metabolically induced transition from biological aging to pathological aging. Aging correlates with increased levels of PAI-1, with aged tissues displaying greater levels of PAI-1 expression [86]. An increase in PAI-1 levels correlates with cognitive decline, and patients with AD exhibit elevated levels [87]. Plasma PAI-1 levels gradually increase with dementia progression, suggesting the potential of PAI-1 as an early indicator of AD [88]. Whether PAI-1 directly contributes to AD pathology or serves as a compensatory mechanism remains unanswered. As an inhibitor of plasminogen activation, elevated levels of PAI-1 are associated with impaired Aβ clearance [89,90]. One possibility is that the initial accumulation of Aβ preceding cognitive decline and dementia triggers an increase in plasmin activation and a decrease in PAI-1 levels to reduce Aβ levels. The decline in PAI-1 levels observed in early late life (preclinical AD) corresponds to early late-life weight loss, as PAI-1 strongly correlates with BMI and adiposity [91]. The subsequent increase in PAI-1 seen in late life (clinical AD) may be attributed to Aβ formation, further worsening AD pathology (Figure 2). As an upstream regulator of BDNF, the PAI-1/BDNF ratio is proposed as a selective marker to distinguish AD patients with full dementia from other prodromal AD stages and healthy controls [87]. This reinforces the concept of using multiple markers, and the ratios between them, as indicators of disease progression, instead of relying solely on a single marker or disconnected individual markers. Longitudinal studies that focus on the changes in PAI-1 levels during the
preclinical and clinical stages of AD are necessary to elucidate the role of PAI-1 in AD.

Resistin is a cysteine-rich secretory protein that counteracts insulin action and impairs glucose homeostasis [92]. Plasma resistin levels are positively correlated with weight gain and link obesity to diabetes [93]. Elevated levels of resistin increase the risk of developing cardiovascular disease and insulin resistance [94]. Acute cerebral infarction patients exhibit higher serum resistin levels, which indicates that serum resistin levels may be a risk factor for pathological cerebrovascular remodeling [95]. In elderly patients, higher levels of resistin are associated with greater risk of all-cause mortality, independently of cardiovascular risk factors [96]. Both plasma and CSF levels of resistin are elevated in AD patients compared to healthy controls [97,98] (Figure 2). This correlation extends to other cognitive impairments such as MCI and all-cause dementia [99-101]. Interestingly, resistin attenuates oxidative stress induced by Aβ [102]. Conversely, treatment with resistin exacerbates Aβ pathology in a rodent model of AD and metabolic syndrome [103]. In the same rodent model, treatment with adiponectin ameliorates glucose metabolism and cognitive function and decreases Aβ load. A direction to explore is the interaction between adiponectin and resistin on brain insulin signaling and glucose metabolism, as the two seemingly have opposing functions in AD pathology [103]. One hypothesis is that acutely elevated levels of resistin are a compensatory response to the accumulation of Aβ; however, chronically elevated levels lead to both brain and peripheral insulin resistance and worsen AD pathology. This explains why simultaneous exposure to both HFD and resistin greatly worsens Aβ pathology compared to HFD alone [103]. Studies tracking resistin levels throughout the different stages of AD are required to determine the beneficial and detrimental levels of resistin.

Circulating levels of adipokines are promising candidates that can complement known markers of AD pathology to enhance the monitoring of AD. It is imperative for future studies to measure a variety of adipokines, rather than a selectively chosen few, to establish correlations with cognitive performance and AD pathology. These adipokines act interdependently with each other and with known AD markers; therefore, the utilization of multiple markers is required to accurately determine the stages of AD. The distinction between early late life (preclinical AD) and late life (clinical AD) requires more nuance due to the different phenotypes and interindividual variability, which can explain the opposing results in biomarker measurements seen in AD studies.
Adipokines as Therapeutic Agents for Neurometabolic Dysfunction
The exponential rise in neurological disorders puts pressure on the healthcare field to come up with novel therapeutic strategies. The potential of using adipokines as therapeutic agents has been explored in the context of peripheral disorders often associated with obesity [104]. One mechanism through which mesenchymal stem cell therapy improves metabolic health is the modulation of adipokine profiles [105]. With the growing link between obesity and neurological disorders, the field of neurodegenerative treatment will benefit from viewing neurological disorders through a metabolic lens [106]. Evidently, changes in adipokine profiles are seen in neurological disorders, with basic science research demonstrating neuroprotective roles. Treating neurological disorders with adipokine therapy in the clinical setting remains unexplored, mainly due to the lack of consensus on adipokine changes in neurological disorders. In the context of AD, there remains a lack of therapy that can efficiently mitigate AD pathology. We propose the use of combination therapy, complementing conventional treatments such as cholinesterase inhibitors with metabolic drugs that manipulate adipokine levels to enhance AD treatment. Chronic donepezil treatment in MCI and AD patients leads to a decrease in BMI and abdominal circumference with lower serum leptin levels and higher serum adiponectin levels [107]. Low-dose galantamine improves oxidative stress, inflammation, insulin resistance, and autonomic regulation in patients with metabolic syndrome [108]. These drugs that were originally designed to target neurological disorders can be adapted to target metabolic disorders. Trending metabolic drugs utilized for obesity and type 2 diabetes mellitus, such as glucagon-like peptide-1 receptor agonists and metformin, are being repurposed as novel treatment strategies for neurodegenerative diseases [109,110]. These treatments have a direct effect on the brain, as well as an indirect effect through improving overall metabolic health, including adipokine profiles. The remodeling of adipokine profiles extends to lifestyle interventions such as diet and exercise, emphasizing both direct and indirect effects on neurological health [111,112]. Dietary choline in early life impacts cognitive function in a familial AD mouse model [113]. Treatment with an adiponectin mimetic rescues memory deficits by ameliorating neuronal insulin resistance in AD mice [114]. It is unlikely that the manipulation of a single adipokine will have a pronounced effect on the brain; rather, the focus should be on achieving a healthy adipose tissue environment, leading to a proper balance of overall adipokine levels. Targeting the adipose tissue to repair vascular damage in AD and other neurological disorders is a novel approach that changes the landscape of viewing and treating CNS disorders [115]. Furthermore, the effects of metabolic treatment on AD should expand from canonical AD pathologies, such as amyloid plaques and neurofibrillary tangles, to the neuro-glial-vascular landscape encompassing the neurovascular unit, blood-brain barrier, myelin sheath, and glymphatic drainage [116]. Viewing AD and other neurological disorders from a brain-body perspective and treating them by targeting both central and peripheral systems will maximize the efficacy of interventions. It is generally accepted that earlier diagnosis and treatment of AD results in greater effectiveness; therefore, interventions should focus on the
midlife weight gain phase and the early late life preclinical phase.

Concluding Remarks Understanding the brain-body connection in health and disease provides multiple viewpoints and a more comprehensive overview of a pathology compared to historical approaches to diagnosis and treatment. The utilization of adipokines as diagnostic tools and molecular targets for pharmacological treatments in neurological disorders highlights the need to view neurodegeneration with a metabolic lens. Exploring the region-specific effects of adipokines on AD pathology with a temporal component represents the next phase in this field. Instead of manipulating a single adipokine, further research should focus on identifying a comprehensive adipokine atlas at various timepoints of AD pathology. For human studies, longitudinal tracking of the adipokine profile will be more useful than cross-sectional studies due to the vast interindividual variability of AD pathology. In the context of rodent studies, it is crucial to utilize different AD models with various severities of AD pathology; for example, using a rodent model that mimics a slower pace and progression of AD allows better tracking of adipokine profiles in association with AD pathology. To effectively establish the role of adipokines in neurodegeneration, it is essential to identify the mechanisms through which these chemical messengers exert their effects. Adipokines act on various targets, including the neurovascular unit, blood-brain barrier, myelin sheath, and glymphatic system; adipokine-induced dysfunction in one of these functional neural modules can lead to a domino effect of neural rewiring and impairments. This reinforces the concept that most chronic complications span multiple organs and that, to effectively prevent and treat these complex disorders, crosstalk and collaboration between fields are required. Targeting the adipose tissue may be more practical than targeting the brain to impact cognitive function, due to the accessibility of peripheral organs. The concept of targeting peripheral organs to treat CNS disorders emphasizes the importance of advancing our knowledge of brain-body crosstalk.

Figure 1. SNS overactivity in Alzheimer's disease. Impaired metabolic profile and central circuit rewiring result in an elevated sympathetic tone. This leads to a catabolic state, driving adipose tissue lipolysis, with elevated free fatty acids further promoting positive feedback. Figure created using Biorender.

Figure 2. Alzheimer's stages in the context of adipokines. The transition from midlife to late life leads to weight loss and alterations in the structure and function of adipose tissue. This leads to an altered adipose tissue secretome, resulting in a pathological adipokine profile that enhances AD pathology. Figure created using Biorender.
Can’t Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders

Self-supervised representation learning techniques have been developing rapidly to make full use of unlabeled images. They encode images into rich features that are oblivious to downstream tasks. Behind their revolutionary representation power, the requirements for dedicated model designs and a massive amount of computation resources expose image encoders to the risk of potential model stealing attacks - a cheap way to mimic the well-trained encoder's performance while circumventing the demanding requirements. Yet conventional attacks only target supervised classifiers given their predicted labels and/or posteriors, which leaves the vulnerability of unsupervised encoders unexplored. In this paper, we first instantiate the conventional stealing attacks against encoders and demonstrate their more severe vulnerability compared with downstream classifiers. To better leverage the rich representation of encoders, we further propose Cont-Steal, a contrastive-learning-based attack, and validate its improved stealing effectiveness in various experiment settings. As a takeaway, we appeal to our community's attention to the intellectual property protection of representation learning techniques, especially to the defenses against encoder stealing attacks like ours.

1 Introduction
Recent years have witnessed the great success of applying deep learning (DL) to computer vision tasks. Different from supervised DL models, self-supervised learning, which transforms unlabeled data samples into rich representations, has gained more and more popularity. Behind its powerful representations, it is non-trivial to obtain a state-of-the-art image encoder. For instance, SimCLR [6] uses 128 TPU v3 cores to pre-train a ResNet-50 encoder with a batch size of 4096. Therefore, many big companies provide cloud-based self-supervised encoder services for users; for instance, Cohere, OpenAI, and Clarifai provide embedding APIs for images and texts for commercial usage. There are many works [10,26,30] exploring the security issues of encoder-based APIs, so securing such services is an important and urgent problem. These kinds of services leave open the possibility of model stealing attacks [5,25,28,37,44,48,50,51]. In these attacks, the adversary aims to steal the parameters or functionality of a target model with only query access to it. A successful model stealing attack not only threatens the intellectual property of the target model, but also serves as a stepping stone for further attacks such as adversarial examples [3,4,16,38,47,52], backdoor attacks [7,26,41,43], and membership inference attacks [22-24,30,33,34,40,42,45,46]. So far, model stealing attacks have concentrated on supervised classifiers, i.e., the model responses are prediction posteriors or labels for a specific downstream task. The vulnerability of unsupervised image encoders is unfortunately unexplored.

Figure 1: Model stealing attacks against classifiers (previous) vs. model stealing attacks against encoders (ours). Previous works aim to steal a whole classifier using the predicted label or posteriors of a target model. In our work, we aim to steal the target encoder using its embeddings. The target encoder (E_T) is pre-trained and fixed, as shown in the solid frame. The surrogate encoder (E_S) is trainable by the adversary, as shown in the dashed frame.

Our Work. To fill this gap, we pioneer the systematic investigation of model stealing attacks against image encoders.
In this work, the adversary's goal is to steal the functionality of the target model. See Figure 1 for an overview and a comparison with previous works. More specifically, we focus on encoders trained by contrastive learning, which is one of the most cutting-edge unsupervised representation learning strategies for unleashing the information of unlabeled data. We first instantiate the conventional stealing attacks against encoders and expose their vulnerability. Given an input image, the target encoder outputs its representation (referred to as an embedding). Similar to model stealing attacks against classifiers, we consider the embedding as the "ground truth" label to guide the training procedure of a surrogate encoder on the adversary's side. To measure the effectiveness of stealing attacks, we train an extra linear layer on top of the target and surrogate encoders for the same downstream classification task. Preferably, the surrogate model should achieve both high classification accuracy and high agreement with the target's predictions. We evaluate our attacks on five datasets against four contrastive learning encoders. Our results demonstrate that the conventional attacks are more effective against encoders than against downstream classifiers. For instance, when we steal the downstream classifier pre-trained by SimCLR on CIFAR10 (with posteriors as its responses) using STL10 as the surrogate dataset, the adversary can only achieve an accuracy of 0.359. The accuracy, however, increases to 0.500 when we instead steal its encoder (with the embedding as its response). Despite this encouraging performance, conventional attacks are not the most suitable ones against encoders, because they treat each image-embedding pair individually without interaction across pairs. Different embeddings are beneficial to each other, as they can serve as anchors to better locate the position of the other embeddings in their space. Contrastive learning [6,8,17,20,27,49,53] is a straightforward idea to achieve this goal: it is formulated to pull the embeddings of different augmentations of the same image closer together and push those of different images further apart. In a similar spirit, we propose Cont-Steal, a contrastive-learning-based model stealing attack against encoders. The goal of Cont-Steal is to enforce the surrogate embedding of an image to be close to its target embedding (defined as a positive pair) and to push away embeddings of different images irrespective of whether they are generated by the target or the surrogate encoder (defined as negative pairs). Comprehensive evaluation shows that Cont-Steal outperforms the conventional model stealing attacks to a large extent. For instance, when CIFAR10 is the target dataset, Cont-Steal achieves an accuracy of 0.714 on the SimCLR encoder pre-trained on CIFAR10 with the surrogate dataset and downstream dataset being STL10, while the conventional attack only achieves 0.457 accuracy. Cont-Steal is also more query-efficient and dataset-independent (see Figure 9 for more details). This is because Cont-Steal leverages higher-order information across samples to mimic the functionality of the target encoder. To mitigate the attacks, we evaluate different defense mechanisms, including noise, top-k, rounding, and watermarking. Our evaluations show that in most cases these mechanisms cannot effectively defend against Cont-Steal. Among them, top-k reduces the attack performance to the largest extent; however, it also strongly limits the target model's utility.
As a takeaway, our attack further exposes the severe vulnerability of pre-trained encoders. We appeal to our community's attention to the intellectual property protection of representation learning techniques, especially to the defenses against encoder stealing attacks like ours.

Threat Model
In this work, for an encoder pre-trained on images, we consider image classification as the downstream task. We refer to the encoder as the target encoder, and we treat the encoder together with the linear layer trained for the downstream task as the target model. We first introduce the adversary's goal and then characterize the different levels of background knowledge the adversary might have. Adversary's Goal. Following previous work [25,28,44], we taxonomize the adversary's goal into two dimensions, i.e., theft and utility. The theft adversary aims to build a surrogate encoder whose performance on the downstream tasks is similar to that of the target encoder. Different from the theft adversary, the goal of the utility adversary is to construct a surrogate encoder that behaves normally on different downstream tasks. In this case, the surrogate encoder not only faithfully "copies" the behaviors of the target encoder, but also serves as a stepping stone for conducting other attacks. Adversary's Background Knowledge. We categorize the adversary's background knowledge into two dimensions, i.e., knowledge of the target encoder and the distribution of the surrogate dataset. Regarding knowledge of the target encoder, we assume that the adversary only has black-box access to it, meaning they can only query the target encoder with an input image and obtain the corresponding output, i.e., the embedding of the input image. Regarding the surrogate dataset used to train the surrogate encoder, we consider two cases. First, we assume the adversary has the same training dataset as the target encoder. However, such an assumption may be hard to satisfy, as such datasets are usually private and protected by the model owner. In a more extreme case, we assume that the adversary has no information at all about the target encoder's training dataset and can only use a dataset from a different distribution to conduct the model stealing attacks. We later show that the adversary can still launch effective model stealing attacks against the target encoder given a surrogate dataset distributed differently from the target dataset. For the model architecture used to train the surrogate encoder, we also consider two cases. First, we assume the adversary is aware of the target encoder's architecture and trains a surrogate encoder with the same architecture. Then we relax this assumption and let the adversary use different architectures to train the surrogate encoder. Our evaluation shows that the choice of architecture does not have much impact on the attack performance (see Table 3), which makes the attack more realistic. Note that we also compare our attacks against encoders to the traditional model stealing attacks that target the whole classifier (an encoder plus a linear layer). If the attack targets a whole classifier, we assume the adversary may obtain the posteriors or the predicted label for an input image.

Model Stealing Attacks
In this section, we first describe the conventional attacks against encoders. Then, we propose a novel contrastive stealing framework, Cont-Steal, to steal encoders more effectively.
Conventional Attacks Against Encoders
The adversary takes two steps to conduct model stealing attacks against the target encoder, plus one step for evaluation. Obtain the Surrogate Dataset. To conduct model stealing attacks, the adversary first needs to obtain a surrogate dataset. Based on the adversary's knowledge of the target encoder's training dataset (the target dataset), we consider two cases. If the adversary has full knowledge of the target dataset, they can directly use the target dataset itself as the surrogate dataset. Otherwise, the adversary has no knowledge of the target dataset and can only construct a surrogate dataset distributed differently from the target dataset. Train the Surrogate Encoder. Slightly different from a classifier, the response of an encoder is an embedding, i.e., a feature vector. In this case, the adversary can still optimize the surrogate encoder with a similar loss function, defined as

$L_{conv} = \frac{1}{N}\sum_{i=1}^{N} \ell\big(E_T(x_i),\, E_S(x_i)\big),$

where $E_T(\cdot)$/$E_S(\cdot)$ is the target/surrogate encoder, N is the total number of samples in the surrogate dataset, and $\ell(\cdot)$ is the MSE loss. Apply the Surrogate Encoder to Downstream Tasks. To evaluate the effectiveness of model stealing attacks against the encoder, the adversary applies the same downstream task to both the target and surrogate encoders. Concretely, the adversary trains an extra linear layer for the target and surrogate encoders, respectively; we refer to the target/surrogate encoder together with the extra linear layer as the target/surrogate classifier. Then, the adversary quantifies the attack effectiveness by measuring the performance of the target/surrogate classifier on the downstream tasks.

Cont-Steal Attacks Against Encoders
To better leverage the rich information in the embeddings, we propose Cont-Steal, a contrastive-learning-based model stealing attack against encoders. Concretely, Cont-Steal aims to enforce the surrogate embedding of an image to be close to its target embedding (defined as a positive pair), and to push away embeddings of different images regardless of whether they are generated by the target or the surrogate encoder (defined as negative pairs). There are three steps for the adversary to conduct contrastive stealing attacks against encoders, plus one step for evaluation. Obtain the Surrogate Dataset. The adversary follows the same strategy as in Section 3.1 to obtain the surrogate dataset. Data Augmentation. Cont-Steal leverages data augmentation to transform an input image into two augmented views. In this paper, we use RandAugment [11] as the augmentation method, which is made up of a group of advanced augmentation operations. Concretely, we set n = 2 and m = 14 following Cubuk et al. [11], where n denotes the number of transformations applied to a given sample and m represents the magnitude of the global distortion. Train the Surrogate Encoder. Instead of querying the encoders with the original images, the adversary queries the encoders with their augmented views. Concretely, for an input image x_i, we generate two augmented views, x_{i,s} and x_{i,t}, where x_{i,s}/x_{i,t} is used to query the surrogate/target encoder. We consider (x_{i,s}, x_{j,t}) a positive pair if i = j, and otherwise a negative pair (a minimal code sketch of this training step follows; the objective is formalized in the next paragraph).
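To make both objectives concrete, the sketch below shows one stealing step with the conventional MSE objective above and the contrastive objective formalized next. This is a minimal PyTorch-style reconstruction under our reading of the text, not the authors' released code; all function and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def conventional_loss(z_s, z_t):
    # Conventional attack: match surrogate embeddings to the
    # "ground truth" target embeddings with an MSE loss.
    return F.mse_loss(z_s, z_t)

def cont_steal_loss(z_s, z_t, tau=0.07):
    """Contrastive stealing loss for one mini-batch.

    z_s: (N, d) surrogate embeddings of augmented views x_{i,s}
    z_t: (N, d) target embeddings of augmented views x_{i,t}
    Positive pair: (x_{i,s}, x_{i,t}); negatives: (x_{i,s}, x_{j,t})
    and (x_{i,s}, x_{j,s}) for j != i.
    """
    n = z_s.size(0)
    s = F.normalize(z_s, dim=1)
    t = F.normalize(z_t, dim=1)
    sim_st = s @ t.T / tau                 # cosine sims, surrogate-target
    sim_ss = s @ s.T / tau                 # cosine sims, surrogate-surrogate
    pos = torch.diag(sim_st).unsqueeze(1)  # positives (x_{i,s}, x_{i,t})
    off = ~torch.eye(n, dtype=torch.bool, device=s.device)
    neg = torch.cat([sim_st[off].view(n, n - 1),   # (x_{i,s}, x_{j,t}), j != i
                     sim_ss[off].view(n, n - 1)],  # (x_{i,s}, x_{j,s}), j != i
                    dim=1)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(n, dtype=torch.long, device=s.device)  # index 0 = positive
    return F.cross_entropy(logits, labels)

def stealing_step(E_s, E_t, images, augment, opt):
    # One update of the surrogate encoder; E_t is queried as a black box.
    x_s, x_t = augment(images), augment(images)  # two views per image
    with torch.no_grad():
        z_t = E_t(x_t)                           # black-box query responses
    z_s = E_s(x_s)
    loss = cont_steal_loss(z_s, z_t)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```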
Given a mini-batch of N samples, we generate N augmented views as the input of the target encoder and another N augmented views as the input of the surrogate encoder. Concretely, the loss of Cont-Steal can be formulated as

$L_{Cont\text{-}Steal} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{\exp\big(\mathrm{sim}(E_S(x_{i,s}), E_T(x_{i,t}))/\tau\big)}{\sum_{j=1}^{N} \exp\big(\mathrm{sim}(E_S(x_{i,s}), E_T(x_{j,t}))/\tau\big) + \sum_{j \neq i} \exp\big(\mathrm{sim}(E_S(x_{i,s}), E_S(x_{j,s}))/\tau\big)},$

where $E_S(\cdot)$ and $E_T(\cdot)$ denote the surrogate and target encoders, $\mathrm{sim}(u, v) = u^{\top}v/(\lVert u\rVert \lVert v\rVert)$ is the cosine similarity between u and v, and τ is a temperature parameter. As illustrated in Figure 2, the conventional attack treats each embedding individually without interaction across pairs. However, different embeddings are beneficial to each other, as they can serve as anchors to better locate the position of the other embeddings in their space. Cont-Steal maximizes the similarity of the embeddings generated by the target and surrogate encoders for a positive pair (x_{i,s}, x_{i,t}) (orange arrows in Figure 2). For the embeddings generated by the target and surrogate encoders for any negative pair (x_{i,s}, x_{j,t}), contrastive stealing aims to make them more distant (green arrows in Figure 2). Besides, as pointed out by Chen et al. [6], contrastive learning benefits from more negative samples. To achieve this, we also treat the embeddings generated by the surrogate encoder for augmented views of different images, i.e., (x_{i,s}, x_{j,s}), as negative pairs and minimize their similarity (blue arrows in Figure 2). We later show that this design enhances the performance of contrastive stealing (see Table 5). Apply the Surrogate Encoder to Downstream Tasks. We follow Section 3.1 to evaluate the effectiveness of model stealing on downstream tasks.

Experiments
In this section, we first describe the experimental setup in Section 4.1. Then we show the performance of the target encoders on the downstream tasks. Next, we summarize the performance of conventional attacks against classifiers and encoders in Section 4.2. Lastly, we evaluate the performance of Cont-Steal and conduct ablation studies to demonstrate its effectiveness under different settings in Section 4.3. Agreement and accuracy are used as metrics to evaluate the attack performance: agreement measures how closely the surrogate classifier matches the target classifier on downstream tasks, while accuracy measures the utility of the surrogate model on downstream tasks. For each metric, a larger value is more desirable. During the stealing process, we set the batch size to 128 and the learning rate to 0.001. We show more results on the impact of hyperparameters in Supplementary Material Section A.2.

Performance of Conventional Attacks
We first show the target encoders' performance on various downstream tasks; the results are summarized in Figure 3. We then conduct experiments to explore whether encoders are more vulnerable to model stealing attacks. We show the results for target encoders and downstream classifiers, both trained on CIFAR10, in Figure 4. In all cases, the adversary obtains better attack performance by stealing encoders rather than classifiers. This gap becomes especially apparent when the adversary has absolutely no knowledge of the training data, because the rich information in embeddings can better facilitate the learning process of surrogate encoders. For instance, when the surrogate dataset is CIFAR10 (the same as the target downstream dataset), stealing SimCLR's embeddings achieves 0.785 agreement, while stealing predicted labels achieves 0.712 agreement.
However, when the surrogate dataset is totally different from the downstream target dataset, e.g., SVHN, stealing embeddings from SimCLR can still achieve 0.507 agreement, while the agreement of stealing predicted labels drops to 0.192. We show more results in Supplementary Material Section A.4 due to the page limit. We also find that the accuracy and agreement of all model stealing attacks are highly correlated, as shown in Figure 5. This indicates that, besides accuracy, agreement can also be used as a metric to evaluate the performance of model stealing attacks. Using linear regression to describe the relationship between agreement and accuracy, we find the relation y = 0.940x.

Performance of Cont-Steal
As shown in Section 4.2, encoders are more vulnerable to model stealing attacks, since an embedding usually contains richer information than a predicted label or posteriors. We now show that our proposed Cont-Steal achieves even better attack performance by making deeper use of the information in the embeddings. Figure 7 shows the attack performance when the target pre-training dataset is CIFAR10 (results for other settings are in the appendix). We find that, compared to conventional attacks against encoders, Cont-Steal consistently achieves better performance. For instance, as shown in Figure 7d, when the target encoder is MoCo trained on CIFAR10 and the adversary uses STL10 to conduct the attack, the surrogate encoder achieves 0.841 agreement on the CIFAR10 downstream task with Cont-Steal but only 0.479 with the conventional attack. Another finding is that, compared to a same-distribution surrogate dataset, Cont-Steal improves performance even more when the surrogate dataset comes from a distribution different from the pre-training dataset. For instance, when the target encoder is SimCLR trained on CIFAR10, Cont-Steal outperforms the conventional attack by 0.055 agreement when the surrogate dataset is also CIFAR10, while the improvement increases to 0.207 and 0.214 when the surrogate dataset is STL10. We show more comparison results in Supplementary Material Section A.5. Note that Cont-Steal also performs well on other recent state-of-the-art visual models (see Supplementary Material Section A.6). To better understand why Cont-Steal consistently achieves better performance, we extract the samples' embeddings generated by different encoders, i.e., the target encoder, the surrogate encoder trained with the conventional attack, and the surrogate encoder trained with Cont-Steal, and project them into a 2-dimensional space using t-SNE (a short script sketch for this visualization follows). From the results summarized in Figure 6, we find that Cont-Steal effectively mimics the pattern of the target encoder's embeddings, whereas the conventional attack fails to capture such patterns for a number of input samples, e.g., the outer circle in Figure 8c. This further demonstrates that Cont-Steal benefits from jointly considering different embeddings, as they can serve as anchors to better locate the position of the other embeddings in their space. We also show ablation study results in Supplementary Material Section A.2, demonstrating that with a smaller surrogate dataset, fewer training epochs, and different model architectures, Cont-Steal still achieves much better results than the conventional attack.
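The t-SNE comparison above can be reproduced along the following lines; this is a sketch assuming scikit-learn and matplotlib, with illustrative names, and the embeddings must be collected from the respective encoders beforehand.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def tsne_panel(ax, embeddings, labels, title):
    # Project (n, d) embeddings to 2-D and colour the points by class.
    xy = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(embeddings)
    ax.scatter(xy[:, 0], xy[:, 1], c=labels, s=3, cmap="tab10")
    ax.set_title(title)

def compare(emb_target, emb_conv, emb_cont, labels):
    # emb_*: (n, d) numpy arrays of embeddings of the same test images
    # from the target encoder and the two surrogate encoders.
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    for ax, emb, name in zip(axes, (emb_target, emb_conv, emb_cont),
                             ("target", "conventional", "Cont-Steal")):
        tsne_panel(ax, emb, labels, name)
    plt.show()
```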
We also show further attacks based on the stolen models in Supplementary Material Section A.3, demonstrating that Cont-Steal can be used as a springboard for other attacks.

Cost Analysis
As mentioned before, pre-training a state-of-the-art encoder is time-consuming and resource-demanding. We therefore ask whether model stealing attacks can replicate the functionality of the encoder at much lower cost. To this end, we evaluate the time and monetary cost of training an encoder from scratch versus stealing a pre-trained encoder via Cont-Steal. The monetary cost of model stealing includes querying the target model and training the surrogate model. We take the query price as $1 per 1,000 queries based on AWS. Our experiments are conducted on one NVIDIA A100, priced at $2.934 per hour on Google Cloud. The monetary and time costs are shown in Table 1. We observe that Cont-Steal obtains a surrogate encoder with much less money and time than training the encoder from scratch. For instance, training a ResNet18 with SimCLR on CIFAR10 takes 20.01 hours and $58.68 on one NVIDIA A100 GPU, while Cont-Steal takes only 0.62 hours and $11.83 to steal an encoder that performs similarly on downstream tasks. The results demonstrate that Cont-Steal is able to construct surrogate encoders that perform similarly to the target encoders at a fraction of the time and monetary cost.

Defenses
In this section, we consider different defenses against model stealing attacks on encoders to evaluate the robustness of our proposed attack. We divide the defenses into two categories: perturbation-based defenses [37] and watermark-based defenses [2]. Perturbation-based Defense. In this setting, the defender perturbs the output of the target model to limit the information the adversary can obtain. Common practices include adding noise [37], top-k [37], and feature rounding [48]. Adding noise means that the defender introduces noise into the original output of the model; in our case, we add Gaussian noise with mean 0 to the embeddings generated by the target encoder, with different noise levels corresponding to different standard deviations of the Gaussian distribution. For top-k, the defender outputs only the k largest values of each embedding (setting the rest to 0), so the high-dimensional information about the image contained in the embedding is appropriately reduced. For feature rounding, the defender truncates the values in the embedding to a specific number of digits. As a case study, we consider a ResNet18 encoder pre-trained on CIFAR10 with SimCLR and use STL10 to train its downstream classifier. The experimental results are summarized in Figure 8. We observe that while adding noise and top-k can reduce the performance of model stealing attacks, they may also degrade the target model's performance to a large extent. For instance, when the noise level increases from 0 to 10, the attack performance of Cont-Steal decreases from 0.729 to 0.410, while the target encoder's performance drops from 0.734 to 0.098. On the other hand, rounding has only a limited effect on both the target model's performance and the attack performance. This indicates that perturbation-based defenses cannot effectively defend against encoder stealing attacks, since they cannot reach a good trade-off between attack mitigation and model utility (a short sketch of the three perturbations follows).
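For illustration, here is how the three perturbation-based defenses can be applied to an embedding before it is returned to the client. This is a minimal sketch under our reading of the defenses, not a reference implementation; names and default values are ours, and since the text leaves open whether top-k selects signed values or magnitudes, we use signed values.

```python
import torch

def add_noise(emb, sigma=1.0):
    # Gaussian noise with mean 0 and standard deviation sigma.
    return emb + sigma * torch.randn_like(emb)

def top_k(emb, k=32):
    # Keep only the k largest values per embedding; zero out the rest.
    out = torch.zeros_like(emb)
    idx = emb.topk(k, dim=-1).indices
    return out.scatter(-1, idx, emb.gather(-1, idx))

def rounding(emb, digits=1):
    # Truncate every value to a fixed number of decimal digits.
    scale = 10.0 ** digits
    return torch.trunc(emb * scale) / scale
```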
Watermark-based Defense. Watermark-based defense is another popular defense method against model stealing attacks [2]. A watermark provides copyright protection by adding some specific identification to the target model: if the surrogate model is stolen from the watermarked target model, then ideally it will contain the same watermark as well. Adi et al. [2] show that backdoor technology can be used as a watermark to protect a model. In that sense, BadEncoder [26], a backdoor mechanism against encoders, can be leveraged as a watermarking technique for our target encoder as well. The defender first trains the watermarked (backdoored) encoder, in which images with a certain trigger cause misclassification. Then, if the defender finds that the surrogate model also misclassifies images with the same trigger, the defender can claim ownership of the surrogate model. In our experiments, we leverage BadEncoder to watermark the encoder pre-trained on CIFAR10 by SimCLR, and we leverage different downstream datasets to perform different tasks. We assume a strong adversary that uses the same downstream dataset as the surrogate dataset. We also consider baseline cases where the trigger samples are fed into a clean model to calculate the watermark rate (wr). As shown in Table 2, the watermark is not preserved, as the surrogate models constructed by Cont-Steal have a wr similar to the baseline model. For instance, when the downstream dataset is CIFAR10, Cont-Steal builds a surrogate model with 0.769 accuracy but only 0.130 wr, which is close to the baseline model. This indicates that Cont-Steal can bypass the watermarking technique, as it reaches similar utility while reducing the wr to a large extent. Note that there is also work on protecting contrastive learning models from model stealing attacks using dataset inference [15]; we show in Supplementary Material Section A.8 that this kind of defense can be easily bypassed by Cont-Steal.

Related Work
Contrastive Learning. Contrastive learning is one of the most popular methods to train encoders, and current works [6,8,17,20,49,53] propose various advanced contrastive learning algorithms. SimCLR, MoCo, BYOL, and SimSiam are currently the mainstream frameworks of contrastive learning, so we concentrate on them in this paper. There are many works evaluating the security and privacy risks of contrastive learning. Previous works [23,26,30] propose membership inference attacks, attribute inference attacks, and backdoor attacks on contrastive learning, and all of them show that contrastive models are vulnerable to popular attacks. The security issues of self-supervised learning therefore deserve more attention. Model Stealing Attack. In model stealing, the adversary's goal is to steal part of the target model. Prior works often make relatively strong assumptions, such as the model family being known or the victim's data being partly available, whereas we conduct model stealing attacks against encoders and relax these assumptions.

Conclusion
In this paper, we conduct the first model stealing risk assessment of image encoders. Our evaluation shows that the encoder is more vulnerable to model stealing attacks than the classifier, because the embedding provided by the encoder contains richer information than the posteriors or predicted labels of whole classifiers. To better unleash the power of the embeddings, we propose Cont-Steal, a contrastive-learning-based model stealing method against encoders.
Concretely, Cont-Steal introduces different types of negative pairs as "anchors" to better guide the surrogate encoder in learning the functionality of the target encoder. Extensive evaluation shows that Cont-Steal consistently performs better than conventional attacks against encoders, and this advantage is further amplified when the adversary has no information about the target dataset, a limited amount of data, or a restricted query budget. Our work points out that the threat of model stealing attacks against encoders is largely underestimated, which prompts the need for more effective intellectual property protection of representation learning techniques.

We conduct ablation studies here to better illustrate the effectiveness of Cont-Steal. Concretely, we investigate whether conventional attacks and Cont-Steal remain effective with a limited surrogate dataset size and a limited number of training epochs. Ideally, an attack that reaches similar performance with a smaller surrogate dataset and fewer training epochs is a better attack, as it requires fewer queries and lower monetary cost. As shown in Figure 9, both conventional attacks and Cont-Steal perform better with a larger surrogate dataset and more training epochs. For instance, Cont-Steal reaches 0.675 agreement when the surrogate encoder is trained with 10% of the surrogate dataset for 50 epochs, while the agreement increases to 0.812 with 100% of the surrogate dataset and 100 training epochs. The second observation is that Cont-Steal outperforms conventional attacks even with limited data and training epochs. For instance, even with only 10% of the surrogate dataset and 10 training epochs, the surrogate encoder built by Cont-Steal reaches 0.562 agreement, while the conventional attack only achieves 0.479 agreement with the full surrogate dataset and 100 training epochs. As mentioned before, this is because Cont-Steal enforces the surrogate embedding of an image to be close to its target embedding and pushes away embeddings of different images irrespective of whether they are generated by the target or the surrogate encoder (see also Figure 2). Moreover, Cont-Steal depends less on the surrogate dataset's distribution and consistently achieves stable performance. We plot the attack agreement in Figure 10, where the target encoders and downstream classifiers are trained on CIFAR10. We can see that when the adversary conducts a conventional attack against the classifier, the adversary's knowledge of the target training data is crucial. For example, when the adversary can only obtain the predicted label from the target model, they achieve only 0.182 agreement when using F-MNIST to attack the model trained by SimCLR, but 0.711 agreement when using CIFAR10, the same dataset as the target dataset, as the surrogate dataset. However, compared to the predicted label or posteriors as responses, the embedding depends less on the surrogate dataset distribution, and Cont-Steal can better leverage the embedding information, further reducing the dependence on the surrogate dataset's distribution. For instance, when the target model is trained by SimCLR, Cont-Steal achieves 0.832 agreement with STL10 as the surrogate dataset, which is even better than the best conventional attack (0.781) that uses exactly the target training dataset as the surrogate dataset and the embedding as the response.
These observations further imply that Cont-Steal achieves good performance regardless of the surrogate dataset's distribution, and thus generalizes well in practice. Impact of Hyperparameters. In our experiments, we set the batch size to 128 and the learning rate to 0.001. We show in Table 4 that, for reasonable batch sizes and learning rates, Cont-Steal has stable performance. Impact of Negative Pairs Generated From the Surrogate Encoder. In Cont-Steal's loss function, besides D^-_encoder, we also consider the distance of negative pairs generated from the surrogate encoder itself, i.e., D^-_self. To evaluate the necessity of D^-_self, we take the target encoder trained by BYOL on CIFAR10 with the downstream task on STL10 as an example and study the attack performance with and without D^-_self. The results are summarized in Table 5. We find that adding D^-_self greatly improves the attack performance in both accuracy and agreement. For instance, when the surrogate dataset is STL10, the surrogate model stolen by Cont-Steal with D^-_self achieves 0.817 agreement, but only 0.314 without D^-_self. The reason is that the negative pairs generated from the surrogate encoder serve as extra "anchors" to better locate the position of the embedding, which leads to higher agreement. This observation demonstrates that it is important to include D^-_self in Cont-Steal.

A.3 Further Attacks Based on Cont-Steal
As mentioned in the introduction, model stealing can be used as a stepping stone for further attacks. In this section, we select adversarial example attacks as a case study to show the importance of model stealing for further attacks on the target model. Normally, the adversary cannot obtain gradients from the target model, yet most adversarial example attacks require gradient access. Therefore, the adversary can construct a surrogate model, generate adversarial examples on it, and transfer them to the target model to perform the attack. We consider three widely used mechanisms for generating adversarial examples, including the Fast Gradient Sign Attack (FGSM).

A.4 More Results on Conventional Attacks
Figure 11, Figure 12, and Figure 13 show the results of the conventional attacks on target models whose encoders are pre-trained on CIFAR10 and whose downstream classifiers are trained on STL10, F-MNIST, and SVHN, respectively. Figure 14, Figure 15, and Figure 16 show the results of the conventional attacks on classifiers whose encoders are pre-trained on ImageNet100 and whose downstream classifiers are trained on STL10, F-MNIST, and SVHN, respectively.

A.5 More Results on Cont-Steal
Figure 17, Figure 18, and Figure 19 show the results of Cont-Steal on target models whose encoders are pre-trained on CIFAR10 and whose downstream classifiers are trained on STL10, F-MNIST, and SVHN, respectively. Figure 20, Figure 21, and Figure 22 show the results of Cont-Steal on target models whose encoders are pre-trained on ImageNet100 and whose downstream classifiers are trained on CIFAR10, STL10, and SVHN, respectively.

A.6 Attack Performance on Other Visual Models
Apart from the four contrastive models studied above, we also apply Cont-Steal to other large, state-of-the-art models such as ViT and CLIP. We show in Table 7 that Cont-Steal performs very well on ViT, MAE, and the image encoder of CLIP. The results demonstrate the scalability of Cont-Steal.
A.7 Compare With Other Existing Works
Ours is the first work to systematically propose model stealing attacks against image encoders; several parallel and follow-up works in this domain were proposed after ours. Here, we compare our work with the other existing methods. The main differences between our work and recent works are our designed contrastive stealing loss and our use of data augmentation. Compared to StolenEncoder, our loss focuses on the comparison of positive and negative samples, while StolenEncoder focuses on the combination of augmentation and non-augmentation losses. The main differences between our work and the methods listed in [14] are that 1) we leverage data augmentation as part of the method, and 2) we design the loss function ourselves to consider more negative examples than the InfoNCE loss. We show in Table 9 that our method works better. Note that KL divergence is also a loss function used in knowledge distillation; as knowledge distillation is a task similar to model stealing, we also report the results for KL divergence.

A.8 More Defenses
We implement the dataset inference defense from [15] (see Table 8). S(·, E_T)/C(·, E_T) represents the mutual information/cosine similarity between the given model and the target model (the higher, the more similar). Note that the surrogate encoder will be fine-tuned for downstream tasks, and we find that the fine-tuning process [18,55] disables the defense. Moreover, open-source encoders are normally trained on very large public datasets rather than limited private datasets, which makes the defense less practical.
HMGB1 and HMGB2 proteins up-regulate cellular expression of human topoisomerase IIα

Topoisomerase IIα (topo IIα) is a nuclear enzyme involved in several critical processes, including chromosome replication, segregation and recombination. Previously we have shown that chromosomal protein HMGB1 interacts with topo IIα and stimulates its catalytic activity. Here we show the effect of HMGB1 on the activity of the human topo IIα gene promoter in different cell lines. We demonstrate that HMGB1, but not a mutant of HMGB1 incapable of DNA bending, up-regulates the activity of the topo IIα promoter in human cells that lack functional retinoblastoma protein pRb. Transient over-expression of pRb in pRb-negative Saos-2 cells inhibits the ability of HMGB1 to activate the topo IIα promoter. The involvement of HMGB1 and its close relative, HMGB2, in modulation of activity of the topo IIα gene is further supported by knock-down of HMGB1/2, as evidenced by significantly decreased levels of topo IIα mRNA and protein. Our experiments suggest a mechanism of up-regulation of cellular expression of topo IIα by HMGB1/2 in pRb-negative cells through modulation of binding of transcription factor NF-Y to the topo IIα promoter, and the results are discussed in the framework of previously observed pRb inactivation and increased levels of HMGB1/2 and topo IIα in tumors.

INTRODUCTION
DNA topoisomerase II (topo II) is an essential and ubiquitous enzyme for proliferation of eukaryotic cells (1). It can alter the topological state of DNA and untangle DNA knots and catenanes (interlocked rings) via ATP-dependent passage of an intact double helix through a transient double-stranded break generated in a separate DNA segment, followed by religation and enzyme turnover (2). In mammalian cells, topo II exists in two isoforms, α (170 kDa) and β (180 kDa), both having similar primary structure and almost identical catalytic properties, but differing in their production during the cell cycle (1,3). Topo II is the target of a number of drugs currently used in the treatment of human malignancies, such as etoposide, teniposide, doxorubicine and mitoxantrone (3). These drugs (also termed topo II poisons) can stabilize the covalent enzyme-associated complexes and shift the DNA cleavage/religation equilibrium of the enzyme reaction toward the cleavage state, converting biological intermediates of topo II activity into lethal ones, ultimately triggering programmed cell death pathways (1,3,4). HMGB-type proteins are relatively abundant and evolutionarily highly conserved non-histone chromatin-associated proteins in mammals. There are three HMGB variants in human and mouse, HMGB1, HMGB2 and HMGB3. While HMGB1-3 proteins are expressed in early mouse embryos, HMGB2 and HMGB3 are down-regulated during embryonic development (5). The abundant HMGB1 protein (~1 molecule per 10-15 nucleosomes) is highly conserved among mammals, and it continues to be ubiquitously expressed in adults. HMGB1 and HMGB2 function in a number of fundamental cellular processes such as transcription, replication, DNA repair and recombination (5-8). HMGB1 is associated with chromosomes in mitosis, and due to its extreme mobility in the cell the protein is continuously exchanged between nucleus and cytoplasm [(8) and refs therein]. HMGB1, but not HMGB2, also exhibits an important extracellular function in mediation of inflammation mechanisms, tumor growth and metastasis (6,8).
HMGB1, like HMGB2-3, has a tripartite domain organization, consisting of two DNA-binding domains, the HMG-boxes A and B, and an acidic C-terminal tail of variable length. While the two HMG-boxes interact with DNA (exhibiting a high affinity for distorted DNA conformations (9-12)), the C-tail usually decreases the affinity of the protein for DNA (5,7). Binding of HMGB1 to DNA causes local distortions by bending/looping and changes in DNA topology (7,13,14). HMGB1 also interacts weakly with a number of proteins, including transcription factors and site-specific recombination and DNA repair proteins (8). The importance of HMGB1 for life is supported by the phenotype of HMGB1 knockout mice, which die 24 h after birth due to hypoglycemia and exhibit a defect in the transcriptional function of the glucocorticoid receptor (15). Lack of HMGB1 in primary mouse embryonic fibroblasts correlates with higher rates of DNA damage after UV irradiation, and cytogenetic analyses revealed high levels of aneuploidy and spontaneous chromosome aberrations, decreased activity of telomerase and shortening of telomere lengths, suggesting that HMGB1 plays an important role in promoting genomic stability (5,16). Previously we have reported that HMGB1 can interact with topo IIα and stimulate its enzymatic activity (11). In the present study we investigated the impact of over-expression of HMGB1 and its close relative, HMGB2, on the activity of the human topo IIα promoter. Using a luciferase gene reporter assay we demonstrate that HMGB1, but not a mutant of HMGB1 incapable of DNA bending, up-regulated the activity of the human topo IIα promoter in human cells that lack functional retinoblastoma protein pRb. Transient over-expression of pRb in pRb-negative cells inhibited the transactivation potential of HMGB1 over the topo IIα promoter. In agreement with the above data, up-regulation of the topo IIα promoter by HMGB1/2 was very low in cells with functional pRb. The involvement of HMGB1 and HMGB2 in modulation of cellular activity of the topo IIα gene was also supported by silencing of HMGB1/2 expression by plasmid-encoded specific shRNA, resulting in diminished expression of topo IIα. Our experiments allowed us to propose a mechanism of HMGB1-mediated transactivation of the topo IIα promoter by modulation of transcription factor NF-Y binding to the promoter. The obtained results are discussed in the framework of previously observed increased levels of HMGB1 and topo IIα in tumors (17).

Plasmids
Each of the supercoiled DNA plasmids was isolated by the alkaline lysis method, followed by purification by two rounds of cesium chloride gradients or by Qiagen plasmid kits. All purified plasmids exhibited A260/A280 ratios higher than 1.85.

DNA circularization assay
T4 DNA ligase-mediated circularization was carried out as described earlier (18) with the following modifications. Briefly, ligation was carried out at a DNA concentration of ~1 nM, and the concentrations of HMGB1 protein and peptides were 0.5-6 μM. The DNA probe was a gel-purified 123-bp AvaI DNA fragment (prepared by AvaI digestion of the 123-bp DNA ladder, BRL) that was 32P-labeled at the 5′-ends by T4 DNA kinase and [γ-32P]ATP (Amersham Biotech). Ligation reactions were initiated by addition of 0.1 U of T4 DNA ligase (Promega) and were allowed to proceed at 30°C for 30 min.
The ligation products were then deproteinised, followed by their resolution by electrophoresis on 5% polyacrylamide gels in 0.5× TBE as detailed in (19). The amount of monomer DNA circles was determined from quantitative analyses of dried gels on a Molecular Dynamics Storm PhosphorImager using the ImageQuant software.

Electrophoretic mobility shift experiments (EMSA)
The oligonucleotides for EMSA were derived from the human topo IIα promoter. The oligonucleotides were 32P-labeled at their 5′-termini by T4 DNA kinase and [γ-32P]ATP (Amersham Biotech) and annealed with their complementary strands to form blunt-ended DNA duplexes. Reaction mixtures for EMSA contained control nuclear extract (5-10 μg of proteins) or extract pre-incubated with HMGB1 (wild-type or the F38A/F103A mutant, typically 1-4 μM) and/or His-pRb(wt) [0.1-0.4 μg, purified from BL21(DE3)pLysS Escherichia coli cells harboring the plasmid pRSET(his-Rb); the plasmid was kindly provided by Ronen Marmorstein from the Wistar Institute, Philadelphia, USA], 1 μg of poly(dI.dC) as a non-specific competitor, and ~3 ng of 32P-labeled DNA duplexes in EMSA buffer (20 mM HEPES, pH 7.6, 4% Ficoll, 50 mM KCl, 0.1% Nonidet P-40, 0.2 mM EDTA and 0.5 mM DTT) in a total volume of 20 μl. DNA and proteins from the nuclear extract were pre-incubated on ice for 20 min. The identity of transcription factor NF-Y within the retarded DNA-protein complexes was verified by pre-incubation of the protein-DNA complexes with 0.5 μg of polyclonal α-NF-YB antibodies [kindly provided by Roberto Mantovani, Dipartimento di Scienze Biomolecolari e Biotecnologie, Università di Milano, Milano, Italy; (9)] at 30°C for 20 min. The protein-DNA complexes were resolved on 5% polyacrylamide gels in 0.5× TBE buffer at 200 V for ~2-3 h (4°C). The gels were dried and the labeled DNA was imaged on a PhosphorImager Storm (Molecular Probes).

Cloning of HMGB1 plasmids
HMGB1 (residues 1-215), HMGB1 domain A (HMG-box A, residues 1-88), HMGB1 domain B (HMG-box B, residues 85-180) and HMGB1 mutants were derived from rat or human HMGB1 cDNAs (the amino-acid sequence of the rat HMGB1 protein is identical to that of the human HMGB1 protein). The DNA sequences coding for the HMGB1 protein, mutants and domains were inserted into the BamHI and SalI sites of the vector pQE-80L (Qiagen), which allows tightly regulated N-terminal 6×His-tagged protein expression in E. coli. Alanine mutagenesis of intercalating residues F38, F103 or I122 of isolated HMGB1 domains A or B, and mutagenesis of F38/F103 (double mutant, or F/F) of the full-length HMGB1, was carried out by a protocol using 'chimeras' (Štros, unpublished results). The introduced mutations were verified by dideoxy sequencing of both strands. For protein expression in E. coli, the HMGB1 and truncated HMGB1 cDNAs were re-cloned into the pQE-80L vector (His-tagged proteins). For transfections, the HMGB1 and HMGB2 cDNAs were cloned into the Flag-pcDNA3 or pcDNA3 vectors (20).

Cell transfections
Cells were detached by trypsin treatment at 80-90% confluence, and transfected with plasmids either with the Amaxa Nucleofector II (Amaxa, Germany) or with Fugene HD (Roche). To select for stably transfected cells, Geneticin (BRL) or Zeocin (Invitrogen) was added 48 h after transfection.
shRNA-mediated gene silencing of HMGB1 and HMGB2
Plasmid pcDNA-Zeo(−)-U6, under the control of a human U6 promoter, was used to express two different short hairpin RNAs (shRNAs) that specifically cleave HMGB1 (construct #A) and HMGB1/2 (construct #B) mRNAs in human cells (the plasmids were kindly provided by Stephen Lippard, Dept. of Chemistry, MIT, Cambridge). The following hHMGB1 (#A: GGAGAACATCCTGGCCTGT) or hHMGB2 (#B: AGTGAACACCCTGGCCTAT) sequences were used to create DNA cassettes in the pcDNA-Zeo(−)-U6 vector expressing specific shRNAs for silencing of HMGB1 or HMGB1/2 expression in human cells, respectively. A scrambled human HMGB2 sequence (not related to any human sequence) was used as a control (GAGAGGACAAGAGATGTATT). For the preparation of stably transfected cells, the growth media contained 300 μg/ml Zeocin (Invitrogen) and the cells were selected for 2-3 weeks.

Luciferase reporter gene assays
Transient transfections for luciferase gene reporter assays were carried out in 24-well plates using Fugene HD according to the manufacturer's instructions (Roche). When using the Amaxa Nucleofector II (Amaxa, Germany), transfections were carried out in six-well plates (2 × 10^5 cells/well) using the plasmid amounts indicated below. Transfection mixtures contained Flag-pcDNA3 expression vectors encoding full-length HMGB1 (residues 1-214), HMGB1-ΔC (residues 1-179) or double-mutant HMGB1 (F38A/F103A), and/or human wild-type pRb. The following reporter constructs were used: plasmid pGL3-Basic containing firefly Photinus pyralis cDNA (Promega) linked with sequences derived from the human topo IIα gene promoter: plasmid pTIIa-617 containing either the wild-type human topo IIα promoter or a promoter mutated within the ICE2 sequence [the promoter sequences were from bp −617 to +90, and the plasmids were kindly provided by Karin M. Stowell, Institute of Molecular BioSciences, Palmerston North, New Zealand; (23)]; or one of the plasmids pTIIa-32, pTIIa-90, pTIIa-142, pTIIa-252 and pTIIa-557 containing varying lengths of the human topo IIα promoter as indicated [kindly provided by D. Parker Suttle, Department of Pharmacology, College of Medicine, University of Tennessee, Tennessee, USA; (24)]. Mutation of the ICE2 within the plasmid pTIIa-142 was carried out with the QuickChange II Site-Directed Mutagenesis Kit (Stratagene) using the following primers: top, 5′-GATATAAAAGGCAAGCTACGGTGGATTCTTCTGGACGGAGACG-3′; low, 5′-CGTCTCCGTCCAGAAGAATCCACCGTAGCTTGCCTTTTATATC-3′. The introduced mutations within the ICE2 (ATTGG mutated to GTGGA) of the topo IIα promoter (plasmid pTIIa-142_ICE2 mut) were confirmed by dideoxy sequencing. In control luciferase assays, the topo IIα promoter-pGL3 constructs were replaced by the empty vector pGL3-Basic. Equal amounts of plasmid DNA in each transfection mixture were achieved by adding corresponding amounts of empty (promoter-less) plasmid vectors, to a final 3.6 μg of the transfection mixture when using the Amaxa Nucleofector II. The luciferase activity was measured 40-44 h after transfection using a dual-luciferase reporter assay system (Promega). Results are presented as changes in transactivation of the topo IIα promoter constructs relative to the original activity of these promoter constructs in cells not transfected with plasmids expressing HMGB or pRb proteins.
In some luciferase reporter gene assays, the pRL-family Renilla luciferase control reporter vector (0.4 μg), with cDNA encoding Renilla luciferase under the control of the herpes simplex virus thymidine kinase promoter (pRL-TK, Promega), was used. However, measuring the luciferase activity of the firefly Photinus pyralis proved to be unreliable due to partial activation of the herpes simplex virus thymidine kinase promoter of pRL-TK by HMGB1/2 [in agreement with reported effects of HMGB1 on transactivation of various gene promoters (8)]. Therefore, the luciferase activity in each cellular lysate was normalized by determining the protein concentrations using the Coomassie G-250 protein-dye assay (Bio-Rad). The involvement of HMGB1/HMGB2 proteins in the activity of the human topo IIβ gene promoter was studied using plasmid -700TOP2B-pGL3 (plasmid pGL3-Basic containing firefly Photinus pyralis cDNA linked with sequence −700 to +193 nt of the human topo IIβ gene promoter; kindly provided by Susan P.C. Colle, Queen's University, Cancer Research Institute, Division of Cancer Biology & Genetics, Kingston, Canada).

Quantitative real-time PCR of the TOP2A gene
Total RNA was isolated from 2 × 10^5 Saos-2 cells with the RNeasy Mini Kit (Qiagen). First-strand cDNA synthesis was performed using SuperScript II with an oligo(dT)12-18 primer (Invitrogen). Approximately 500 ng of RNA was used for each 20 μl RT reaction. Each cDNA sample was analyzed in triplicate using a TaqMan Gene Expression Assay (Applied Biosystems) according to the manufacturer's instructions. Amplification was detected using the 7300 Real-Time PCR System (Applied Biosystems). The TaqMan probe used hybridized between exons 10 and 11 of the TOP2A gene. Data were analyzed using the Sequence Detection System (SDS) software, version 1.3.1. Results were obtained as the cycle number at which the fluorescence reaches a set threshold (C_T). The relative expression of TOP2A was quantified as a percentage of HPRT expression: the difference in C_T values between the TOP2A and HPRT reactions (ΔC_T) was converted into relative TOP2A expression using the formula 2^(−ΔC_T) × 100% (a short worked example is given at the end of this section).

Chromatin immunoprecipitation (ChIP) assays
Stably transfected Saos-2 cells (control or HMGB1/2-silenced; see 'shRNA-mediated gene silencing of HMGB1 and HMGB2' in the Materials and methods section) were grown to ~80% confluence in 150 cm² tissue culture flasks. Cells were then washed in minimal culture medium (without FBS and antibiotics) and fixed with 1% formaldehyde in the same medium on a shaking platform at room temperature for 10 min. Preparation of chromatin and immunoprecipitation (~1.5 × 10^6 cells/reaction) were carried out using the ChIP-IT Enzymatic Express Kit (Active Motif, Rixensart, Belgium). A ChIP-grade NF-YB rabbit polyclonal antibody (PAb001 from GeneSpin, Milano, Italy) was used at 1 μg/reaction. Purified immunoprecipitated DNA was subjected to semi-quantitative PCR with primers (Fw: 5′-GGTGCCTTTTGAAGCCTCTCTAG-3′, Rev: 5′-GCTCCACTTGAACCTTCCTTTAGC-3′) specific for the region −215 to −21 of the human topo IIα promoter, generating a 195-bp product encompassing three NF-Y-binding sites (ICEs 1-3). GoTaq Hot Start DNA polymerase (Promega; 2.5 U per 100 μl reaction) was used in 1× Green Flexi buffer/2 mM MgCl2 (Promega). The amplification program consisted of denaturation at 94°C for 2 min, followed by 30-34 cycles of 94°C for 60 s, 58°C for 30 s and 72°C for 30 s.
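As a numerical illustration of the 2^(−ΔC_T) × 100% conversion above (the C_T values below are invented for the example, not measured data):

```python
ct_top2a, ct_hprt = 24.0, 21.0              # hypothetical threshold cycles
delta_ct = ct_top2a - ct_hprt               # ΔC_T = C_T(TOP2A) - C_T(HPRT)
relative_expression = 2 ** -delta_ct * 100  # TOP2A as % of HPRT expression
print(relative_expression)                  # -> 12.5
```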
Chromatin immunoprecipitation (ChIP) assays

Stably transfected Saos-2 cells (control or HMGB1/2-silenced; see shRNA-mediated gene silencing of HMGB1 and HMGB2 in the Materials and methods section) were grown to ~80% confluence in 150 cm² tissue culture flasks. Cells were then washed in minimal culture medium (without FBS and antibiotics) and fixed with 1% formaldehyde in the same medium on a shaking platform at room temperature for 10 min. Preparation of chromatin and immunoprecipitation (~1.5 × 10⁶ cells/reaction) were carried out using the ChIP-IT Express Enzymatic Kit (Active Motif, Rixensart, Belgium). ChIP-grade NF-YB rabbit polyclonal antibody (PAb001 from GeneSpin, Milano, Italy) was used at 1 µg/reaction. Purified immunoprecipitated DNA was subjected to semi-quantitative PCR with primers (Fw: 5'-GGTGCCTTTTGAAGCCTCTCTAG-3'; Rev: 5'-GCTCCACTTGAACCTTCCTTTAGC-3') specific for the region -215 to -21 of the human topo IIα promoter, generating a 195 bp product encompassing three NF-Y-binding sites (ICEs 1-3). GoTaq Hot Start DNA polymerase (Promega; 2.5 U/100 µl reaction) was used in 1× Green Flexi buffer/2 mM MgCl₂ (Promega). The amplification program consisted of denaturation at 94°C for 2 min, followed by 30-34 cycles of 94°C for 60 s, 58°C for 30 s and 72°C for 30 s. As a negative control, 1 µg of control IgG polyclonal antibody was used (or, as a positive control, 1 µg of α-RNA Pol II antibody with primers specific for the human GAPDH gene; ChIP-IT Control Kit-Human, Active Motif).

Preparation of nuclear extract

Nuclear extract was prepared using a published protocol (25) with some modifications. Cells were washed twice with 1× phosphate-buffered saline (PBS) and trypsinized, followed by washing with 1× PBS containing 10% FCS. After the final wash with PBS, the pellet was re-suspended in 1 ml of ice-cold buffer A (10 mM HEPES pH 7.9, 10 mM KCl, 0.1 mM EDTA, 0.1 mM EGTA, 1 mM DTT) containing protease inhibitors (1 µg/ml aprotinin, 10 µg/ml leupeptin, 1 µg/ml pepstatin A, 100 µg/ml trypsin inhibitor, 0.1 mM TLCK and 20 mM benzamidine) and allowed to swell on ice for 15 min. Then 10% Nonidet P-40 was added to a final concentration of 0.5% and the incubation was continued for a further 5 min. Following the incubation, the cellular suspension was spun down (1200 × g, 5 min), and the nuclei (pellet) were re-suspended in 1 ml of buffer A (lacking Nonidet P-40) and spun down as above. The pellet (nuclei) was re-suspended in ice-cold buffer B (20 mM HEPES pH 7.9, 0.4 M NaCl, 1 mM EDTA, 1 mM EGTA) containing protease inhibitors, vortexed for 20 s and rocked at 4°C for 30 min. The suspension was clarified by centrifugation (15,000 × g, 10 min) at 4°C, and the clear supernatant containing the nuclear extract was aliquoted, quick-frozen in liquid nitrogen and stored at -70°C. For immunoprecipitation, the nuclear extracts were not frozen but used immediately.

Western blotting and immunological detection of proteins

Nuclear extract was prepared from transfected or untransfected cells as above. Protein concentrations were determined by the Bradford Coomassie G-250 assay (Bio-Rad) using bovine serum albumin (BSA) as a standard. Equal amounts of proteins from the different nuclear extracts were then separated by SDS-PAGE on SDS/7.5% or 10% polyacrylamide gels, and the resolved proteins were transferred onto a PVDF membrane (Bio-Rad) using a semi-dry blotting apparatus (Biometra). Detection of human topo IIα was carried out by overnight incubation of the membrane with monoclonal anti-topo IIα antibody (1:500 dilution, Topogen), followed by extensive washing of the membrane in 1× PBS-T (PBS with 0.1% Tween-20) and incubation of the membrane with horseradish peroxidase-conjugated anti-mouse antibody (IgG-HRP) (1:2000 dilution, GE Healthcare) for 1 h. Detection of HMGB1 or HMGB2 was carried out by overnight incubation of the membrane with polyclonal α-HMGB1 or α-HMGB2 antibodies (1:500 dilution, BD Pharmingen or Aviva), followed by washing of the membrane in 1× PBS-T and incubation with goat anti-rabbit IgG-HRP antibody (1:2000 dilution). Equal loading of the transferred proteins was verified by detection of actin using a mouse monoclonal pan-actin antibody (Dako, 1:2000 dilution), followed by incubation of the membrane with horseradish peroxidase-conjugated anti-mouse antibody (IgG-HRP) (1:2000 dilution, GE Healthcare) as detailed above. Visualization of proteins was performed using the West Dura Extended Duration Signal Kit (Pierce). The intensities of the signals were quantified using the Multi Gauge software of the LAS-3000 imaging system (Fuji).
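Band intensities quantified as above are typically converted into loading-normalised ratios before fold changes are reported; the following sketch, with invented intensity values, illustrates the bookkeeping (it is not the study's analysis code):

```python
# Normalise each band to its actin loading control, then express the
# silenced sample relative to the control lane. Intensities are invented
# for illustration; they are not the study's densitometry values.
def normalised(band: float, actin: float) -> float:
    return band / actin

control = normalised(band=9800.0, actin=10000.0)
silenced = normalised(band=620.0, actin=9700.0)

knockdown = 1.0 - silenced / control
print(f"apparent knock-down: {knockdown:.0%}")  # e.g. >90%, as reported for H1299
```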
Immunoprecipitation of pRb and HMGB1

Nuclear extract from H1299 cells (0.5 mg per experiment) was pre-cleared by rotation with 40 µl of protein-A+G (1:1) agarose beads (1:1 v/v slurry) in 1 ml of 1× ECB buffer (0.3 M NaCl, 50 mM HEPES pH 7.9, 0.2 mM EDTA, 0.5 mM DTT, 2% Nonidet P-40, plus freshly added protease and phosphatase inhibitors) at 4°C for 1 h. The beads were removed by centrifugation (10,000 × g for 30 s), and the supernatant (pre-cleared nuclear extract) was mixed with 10 µg of monoclonal anti-HMGB1 antibody (IgG2a, clone #KS1, Stressgen) or 10 µg of pre-immune mouse IgG (Sigma) and rotated in 0.5 ml of ECB buffer at 4°C for 1 h. Then, 40 µl of a pre-cleared 1:1 slurry of protein-A+G agarose beads (pre-incubated with acetylated bovine serum albumin at 0.1 mg/ml in ECB buffer for 1 h, followed by extensive washing in ECB buffer) was added and rotation was continued for a further 2 h. The beads were then spun down (10,000 × g for 30 s) and extensively washed (five times) in 1 ml of ECB buffer containing 0.5% Nonidet P-40. The washed beads were then mixed with 20 µl of 10× loading buffer, boiled for 3-4 min and loaded onto SDS/7.5% polyacrylamide gels. The resolved proteins were transferred onto a PVDF membrane (Bio-Rad), and the membrane was incubated with polyclonal α-pRb antibody (1:500 dilution; C-15, Santa Cruz), followed by washing of the membrane in 1× PBS-T and incubation with horseradish peroxidase-conjugated anti-rabbit antibody (IgG-HRP) (1:2000 dilution, GE Healthcare) for 1 h. Visualization of the detected proteins was performed with an ECL detection kit (GE Healthcare).

HMGB1 up-regulates the topo IIα gene promoter

To find out whether HMGB1 could modulate the activity of the human topo IIα gene promoter, a luciferase reporter gene assay was employed using constructs containing sequences from bp -617 (or -557) to +90 of the human topo IIα gene promoter (referred to as the full-length promoter; pTIIa-617 or pTIIa-557). In some cases, HMGB1 was over-expressed from a plasmid encoding the protein fused with a seven-amino-acid FLAG tag at the N-terminus, to distinguish it on western blots from the endogenous HMGB1 (Figure 1A, inset). However, the luciferase reporter assays using the Flag-tagged and untagged HMGB1 gave indistinguishable results (not shown). As shown in Figure 1, over-expression of HMGB1 resulted in a ~10-20-fold increase of luciferase activity, depending on the individual transfection experiment rather than on the slight differences in the size of the topo IIα promoter constructs used (-617 or -557 bp). These findings demonstrated that HMGB1 could up-regulate the activity of the human topo IIα promoter, and that the activation of the topo IIα promoter increased with the amount of HMGB1 plasmid (Figure 2D; increased levels of ectopically expressed HMGB1 in nuclear lysates were confirmed by immunodetection using a specific anti-HMGB1 antibody, not shown). The known regulatory elements of the -557 topo IIα gene promoter (pTIIa-557) construct comprise five inverted CCAAT boxes (referred to as ICEs, Inverted CCAAT Elements), an activating transcription factor-binding site (ATF) and one GC box, GC1 (the second GC box, GC2, is present only within pTIIa-617), and no functional TATA box (26) (Figure 1A). To identify the sequences of the human topo IIα promoter involved in the HMGB1-mediated up-regulation of the promoter, Saos-2 cells were co-transfected with the HMGB1 expression plasmid and luciferase reporter constructs containing varying lengths of the topo IIα gene promoter.
Deletions of the sequences from bp -557 to -142 (encompassing ICE5, ICE4, ICE3 and the ATF) had very little, if any, effect on the ability of HMGB1 to up-regulate the activity of the promoter [albeit the stepwise deletion of the 5'-promoter sequences from bp -557 to -142 of the full promoter resulted in up to a ~20-fold decrease in luciferase activity in the absence of over-expressed HMGB1, in agreement with previously published data (24)]. However, truncation of the topo IIα promoter from bp -142 to -90, which removes the ICE2 (pTIIa-90 contains only ICE1), markedly decreased the ability of HMGB1 to up-regulate the topo IIα gene promoter (Figure 1A, pTIIa-142 or pTIIa-90). The slight activation by HMGB1 of the topo IIα promoter lacking all known regulatory sequences (pTIIa-32) could be explained by the reported effect of the protein on basal transcription (5,8,27). The above data suggested the importance of the ICE2 for the HMGB1-mediated transactivation of the topo IIα promoter. The latter conclusion seemed to be confirmed by mutation of the ICE2 (ATTGG mutated to GTGGA) within the -142 promoter construct, which resulted in the inability of HMGB1 to transactivate the -142 bp topo IIα promoter (pTIIa-142_ICE2mt, Figure 1B, right). However, while the ICE2 was crucial for the HMGB1-mediated transactivation of the -142 promoter (Figure 1A), it was much less important in the context of the full-length topo IIα promoter, as revealed by the luciferase gene reporter assay using the -617 promoter construct with the mutated ICE2 (pTIIa-617_ICE2mt, Figure 1B, left). These results questioned the central role of the ICE2 in the HMGB1-mediated up-regulation of the human topo IIα promoter and pointed to a possible importance of multiple ICEs for the HMGB1-mediated up-regulation of the topo IIα promoter (see Discussion section).

Figure 1. HMGB1 up-regulates the activity of the human topo IIα promoter. (A) Identification of elements within the human topo IIα gene promoter involved in the HMGB1-mediated up-regulation of the promoter. Saos-2 cells were co-transfected with plasmid constructs containing the -557 bp human topo IIα promoter (plasmid pTIIa-557) or one of the truncated topo IIα promoter constructs linked to the luciferase reporter gene (as detailed in the Materials and methods section), and a plasmid encoding HMGB1 (or empty vector). Inset: over-expression of Flag-tagged HMGB1. Lane 1, cells transfected with empty vector; lane 2, cells transfected with plasmid encoding Flag-HMGB1. Immunodetection was carried out using a specific α-HMGB1 antibody. (B) Effect of mutation of the ICE2 on the transactivation of the topo IIα promoter by HMGB1. Saos-2 cells were co-transfected with plasmids harboring the full-length topo IIα promoter (pTIIa-617 or pTIIa-617_ICE2mt) or the truncated topo IIα promoter (pTIIa-142 or pTIIa-142_ICE2mt) containing the wild-type (WT) or mutated ICE2 (ICE2mt), and either empty vector or a plasmid encoding HMGB1. Luciferase activity (referred to as transactivation) in panels (A) and (B) is presented relative to the luciferase activity of the topo IIα promoter in cellular lysates without HMGB1 over-expression (taken as 1). Each transfection experiment was repeated at least five times with triplicate samples. Values are presented as the mean ± 1 SD (error bars) of luciferase activities from triplicate samples in a representative experiment.
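The transactivation measure defined in the Figure 1 legend (luciferase signal relative to the same construct without HMGB1, mean ± 1 SD over triplicates) can be sketched as follows; the readings are invented for illustration:

```python
# Fold transactivation from triplicate luciferase readings (RLU):
# each +HMGB1 well is divided by the mean basal signal of the same
# promoter construct without HMGB1 (defined as 1), and the result is
# summarised as mean +/- 1 SD. Numbers are illustrative only.
from statistics import mean, stdev

basal      = [1200.0, 1350.0, 1280.0]      # reporter construct alone
with_hmgb1 = [24500.0, 26100.0, 23800.0]   # + HMGB1 expression plasmid

folds = [x / mean(basal) for x in with_hmgb1]
print(f"transactivation: {mean(folds):.1f} ± {stdev(folds):.1f} (mean ± 1 SD)")
```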
It is generally assumed that HMGB1 could facilitate the binding of a number of transcription factors (such as steroid hormone receptors, homeotic HOX proteins, the recombination-activating gene protein RAG1, the octamer transcription factors Oct-1/Oct-2, the human TATA-binding protein TBP, TFIID and proteins of the p53 family) to their cognate sites by binding to the TFs and delivering them to the corresponding DNA sequences (5,7,8). NF-Y (nuclear factor-Y) is a transcription factor comprising NF-YA, NF-YB and NF-YC subunits. NF-Y specifically recognizes a CCAAT box motif, which is found in the promoter and enhancer regions of many genes, including the human topo IIα gene (26,29). The NF-YA subunit associates only with the NF-YB:NF-YC heterodimer, creating a functional NF-Y (trimeric) CCAAT box DNA-binding complex. NF-Y binds to the human ICEs 1-4, but not to ICE5 (30). The effect of NF-Y on the cellular levels of human topo IIα (and consequently on the regulation of the topo IIα gene promoter) is elusive due to conflicting reports (23,31,32). Thus, a simple explanation for the enhancement of the topo IIα promoter activity by HMGB1 could be that HMGB1 promotes the binding of crucial activators, such as NF-Y, to their DNA-binding sites. An electrophoretic mobility shift assay (EMSA) was therefore used to determine whether HMGB1 could enhance the binding of NF-Y to short DNA duplexes containing the ICE1, ICE2 or ICE3 sequences derived from the human topo IIα promoter. Addition of HMGB1 to nuclear lysate from Saos-2 cells could enhance (~2-3-fold) the binding of NF-Y to the ICE2 (Figure 2A), and also to the ICE1 and ICE3 (data not shown), demonstrating the ability of HMGB1 to promote NF-Y binding to various ICEs. The ability of HMGB1 to promote in vitro binding of NF-Y to DNA duplexes containing inverted CCAAT elements (Figure 2A) could indicate that the protein could modulate NF-Y binding to the human topo IIα gene promoter in vivo. To find out whether HMGB1/2 could affect the binding of NF-Y to the topo IIα promoter, chromatin immunoprecipitation (ChIP) was used. For this purpose, control Saos-2 cells and cells with inhibited HMGB1/2 expression were fixed with formaldehyde, and the chromatin was sheared by enzymatic digestion and immunoprecipitated with anti-NF-YB or control antibodies. Purified immunoprecipitated DNA was subjected to semi-quantitative PCR with primers specific for the region -215 to -21 of the human topo IIα promoter, amplifying a 195 bp product encompassing three NF-Y-binding sites (ICEs 1-3). As shown in Figure 2C, the ChIP analysis revealed that inhibition of HMGB1/2 expression resulted in up to a ~3-fold reduction in NF-Y binding to the topo IIα gene promoter relative to the control cells. These data provided evidence that HMGB1/2 could modulate NF-Y binding to the human topo IIα promoter in vivo. To find out whether HMGB1 could interact with NF-Y, an in vitro 'pull-down' (PD) assay was used. GST or GST-HMGB1 was immobilized on glutathione Sepharose beads and incubated with nuclear extract from Saos-2 cells. After extensive washing, the resin-bound proteins were separated by electrophoresis, followed by western blotting and immunological detection with anti-NF-YB antibody. As shown in Figure 2D (top), a significant fraction of NF-YB was found to be associated with HMGB1 relative to the control GST-bound beads (equal amounts of GST or GST-HMGB1 immobilized on agarose beads were verified by gel electrophoresis, Figure 2D, bottom).
The interaction between HMGB1 and NF-Y was not facilitated via DNA, as similar results were obtained from the PD assay in the presence of the DNA intercalator ethidium bromide. The above results demonstrated the ability of HMGB1 to interact with NF-Y and provided a possible explanation for the observed HMGB1-mediated enhancement of NF-Y binding to ICEs (Figure 2A), as well as a basis for understanding the effect of the protein on the activity of the topo IIα gene promoter (Figure 1).

Intercalating residues of HMGB1 are required for DNA bending and up-regulation of the topo IIα gene promoter

Previous experiments with HMGB proteins from Saccharomyces cerevisiae (NHP6A) and Drosophila melanogaster (HMG-D), which are highly related to the human HMGB1 protein, revealed that mutation of their DNA-intercalating residues impaired their DNA-bending properties (33,34). Using a ligase-mediated circularization assay, we studied whether the intercalating residues (Phe38 of domain A, and Phe103/Ile122 of domain B, Figure 3A) are also required for DNA bending by human HMGB1. This assay measures the efficiency with which T4 DNA ligase forms minicircles from fragments of DNA that are shorter than ~150 bp. In the absence of internal curvature, the stiffness of a short DNA fragment (<150 bp) prevents intra-molecular alignment of its ends, so that minicircles are detected only in the presence of a protein that bends DNA. HMGB1 and the individual domains A and B (Figure 3A) were mutated by alanine mutagenesis, and the proteins were expressed in E. coli and purified to near homogeneity (Figure 3B). To understand the importance of the individual intercalating amino acids of HMGB1 for the ability of the protein to bend DNA, the single HMG-boxes, domains A and B, and their mutants were investigated first. In agreement with previous studies (9,10,18,35,36), domain B seemed to be more efficient than domain A (Figure 3C).

Figure 2. HMGB1 interacts with transcription factor NF-Y and modulates its binding to the human topo IIα promoter. (A) HMGB1 enhances the binding of NF-Y to the ICE2. A radioactively labeled 30-bp DNA duplex containing the ICE2 was mixed with nuclear extract from Saos-2 cells (lane 1) or nuclear extract containing increasing concentrations of HMGB1 (1, 2, 3, 4 and 6 µM, lanes 2-5, respectively), followed by separation of the unbound ICE2 DNA duplex (free probe) and the DNA-protein complexes on 5% non-denaturing polyacrylamide gels (EMSA). The presence of NF-Y within the retarded complex (arrow) was verified by super-shifting of the retarded complex (arrowhead) with specific polyclonal antibodies against NF-YB. The specificity of NF-Y binding from nuclear extracts to DNA duplexes containing the wild-type ICE2 was also demonstrated by competition experiments with unlabeled DNA duplexes containing the wild-type or mutated ICE2 sequence (not shown). (B) An HMGB1 mutant incapable of DNA bending cannot enhance the binding of NF-Y to ICEs. The EMSA experiment was carried out as in (A). Lane 1, no added HMGB1; lane 2, wild-type HMGB1 (2 µM); lane 3, HMGB1(F38A/F103A), 2 µM. The arrow indicates the mobility of the (NF-Y)-DNA complexes. (C) ChIP assays using anti-NF-YB antibody, control IgG antibodies or no antibody were performed with chromatin from Saos-2 cells (control or upon silencing of HMGB1/2 expression, see Figure 6) as detailed in the Materials and methods section. NF-Y binding to the topo IIα promoter was detected by gel staining (ethidium bromide) after PCR amplification using primers corresponding to the promoter as described in the Materials and methods section. For semi-quantitative PCR, the amplification reactions were carried out within the linear range of amplification (typically 30-34 cycles). The ChIP data were calculated as a ratio of the PCR signal from the ChIPed DNA to the input PCR signal; the final data were obtained from a comparison of both cell lines (control or HMGB1/2 sil) and expressed as fold change. Control, stably transfected Saos-2 cells (vector); HMGB1/2 sil, stably transfected Saos-2 cells with inhibited HMGB1/2 expression (shRNA-mediated gene silencing of HMGB1/2, Figure 6). The specific PCR product of 195 bp encompassing ICEs 1-3 is indicated by an arrow. M, DNA size marker; ICEs, inverted CCAAT elements. (D) Detection of NF-Y binding to HMGB1 or pRb in vitro. (Top) Equal amounts of GST, GST-HMGB1 or GST-pRb were immobilized on glutathione Sepharose beads and incubated with nuclear extract from Saos-2 cells. After extensive washing, the resin-bound proteins were eluted and separated by SDS-polyacrylamide gel electrophoresis, followed by western blotting and immunological detection with anti-NF-YB antibody as detailed in the Materials and methods section. EtBr, ethidium bromide. IN, 25% of the nuclear extract from Saos-2 cells used for the 'pull-down' assay (input). (Bottom) GST or GST-tagged proteins immobilized on agarose beads for the 'pull-down' assay (Coomassie blue R-250 staining).
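The semi-quantitative ChIP readout described in the Figure 2C legend (each PCR signal normalised to its input, then compared between the two cell lines) amounts to two divisions; a sketch with invented band intensities:

```python
# ChIP quantification as described in the Figure 2C legend: each PCR
# signal is first normalised to its input, and the fold change is the
# ratio of the two normalised signals. Intensities are invented.
def per_input(chip_signal: float, input_signal: float) -> float:
    return chip_signal / input_signal

control    = per_input(chip_signal=840.0, input_signal=1000.0)
hmgb12_sil = per_input(chip_signal=290.0, input_signal=1050.0)

print(f"NF-Y binding, control vs HMGB1/2 sil: {control / hmgb12_sil:.1f}-fold")
```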
A loss of the ability of HMGB1 to bend DNA was achieved only when both intercalating phenylalanine residues, Phe38 and Phe103, had been mutated to alanine (Figure 3C, double mutant F38/F103, upper panel). These results indicated that domain B could largely substitute for the DNA-bending potential of domain A within the full-length HMGB1, suggesting a major role of domain B in mediating DNA bending by HMGB1 [see also (37)]. In agreement with previous studies (18,36), the DNA-bending potential of the two HMG-box domains of HMGB1 was down-regulated by its acidic C-tail, as revealed by the enhanced formation of minicircles by HMGB1 lacking the acidic C-tail (referred to as HMGB1-ΔC), Figure 3C (upper panel). We found that single or multiple alanine mutagenesis of Phe38 and Phe103 had only a relatively small effect on the binding of the full-length HMGB1 protein to linear DNA [see also (12,34,38,39,56)]. In order to find out whether the ability of HMGB1 to bend DNA was essential for the observed up-regulation of the human topo IIα promoter, the impact of over-expression of either HMGB1 lacking the acidic C-tail (HMGB1-ΔC) or the double mutant HMGB1(F38A/F103A) was investigated. As shown in Figure 3D, over-expression of HMGB1-ΔC in Saos-2 cells resulted in a higher transactivation of the topo IIα promoter than that by the full-length HMGB1 (~30-fold and ~20-fold, respectively), suggesting that the acidic C-tail of HMGB1 inhibited the ability of the protein to activate the topo IIα promoter. The higher transactivation potential of HMGB1-ΔC was consistent with the reported enhanced DNA-bending and DNA-binding properties of the A+B di-domain (14,18,35,37). The inability of the HMGB1 double mutant to transactivate the topo IIα promoter (Figure 3D) may be related to the impaired DNA-bending properties of the mutant (Figure 3C, top; similar levels of ectopically expressed HMGB1 and mutants were verified by gel electrophoresis and immunodetection using specific antibodies, not shown).

Figure 3 (legend fragment). The DNA-protein complexes were ligated with T4 DNA ligase, and the deproteinised DNA samples were then separated on 5% non-denaturing polyacrylamide gels. (D) Transactivation of the topo IIα gene promoter by the double mutant of HMGB1 or by HMGB1-ΔC. Plasmids encoding HMGB1 or its mutants were co-transfected with the plasmid pTIIa-617 into Saos-2 cells, and the luciferase activity (transactivation) was measured as in Figure 1.
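The minicircle logic behind the circularization assay rests on a length-scale comparison that can be made explicit: with a B-DNA helical rise of ~0.34 nm/bp and a persistence length of ~50 nm (~147 bp), a fragment shorter than ~150 bp is stiff on the scale of its own contour length and cannot close without protein-induced bending. The constants in the sketch below are textbook estimates, not measurements from this study.

```python
# Why sub-150-bp fragments need a DNA-bending protein to circularize:
# compare the contour length of the fragment with the DNA persistence
# length (textbook values; not data from this study).
RISE_NM_PER_BP = 0.34          # helical rise of B-DNA
PERSISTENCE_LENGTH_NM = 50.0   # roughly 147 bp

for n_bp in (100, 150, 300):
    contour_nm = n_bp * RISE_NM_PER_BP
    ratio = contour_nm / PERSISTENCE_LENGTH_NM
    print(f"{n_bp:>3} bp: contour {contour_nm:5.1f} nm = {ratio:.2f} persistence lengths")
```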
The latter idea was in agreement with the EMSA experiments demonstrating that HMGB1 incapable of DNA bending could no longer enhance NF-Y binding to ICEs (Figure 2B).

Up-regulation of the topo IIα gene promoter by HMGB1 is inhibited by the retinoblastoma protein pRb

In order to find out whether HMGB1 modulates the activity of the human topo IIα gene promoter in cell lines other than Saos-2 (p53-/-/Rb-/-) cells, comparative transfection experiments were performed with non-small cell lung carcinoma H1299 (p53-/-/Rb+/+), human breast cancer MCF-7 (p53+/+/Rb+/+) or human bladder carcinoma 5637 (Rb-/-) cells. While over-expression of HMGB1 could significantly up-regulate the topo IIα promoter both in Saos-2 (Figures 1A and 4A) and 5637 cells (data not shown), over-expression of HMGB1 had relatively little (<2-3-fold) effect on the activity of the -617 bp human topo IIα gene promoter in MCF-7 or H1299 cells (Figure 4A). The above results led us to hypothesize that the distinct effects of HMGB1 on the activity of the topo IIα promoter in different cell lines could be explained by the presence of the retinoblastoma susceptibility protein pRb. pRb is a negative regulator of cellular proliferation and a prototypic tumor suppressor protein (40). pRb is also required for the efficient activation of cell-cycle checkpoints in response to a variety of DNA lesions, including those induced by topo IIα poisons (41). To find out whether the absence of functional pRb could explain why HMGB1 up-regulated the topo IIα promoter much more efficiently in pRb-negative than in pRb-positive cells, the impact of HMGB1 over-expression on the activity of the promoter was studied in luciferase gene reporter assays using two different approaches: pRb was (i) over-expressed in pRb-negative Saos-2 cells or (ii) silenced by specific siRNAs in pRb-positive H1299 cells. (i) pRb over-expression. pRb was ectopically expressed in Saos-2 cells and the luciferase activity was measured. As shown in Figure 4B, the activity of the topo IIα promoter was significantly inhibited (up to ~3-4-fold) when pRb alone was over-expressed in Saos-2 cells. When HMGB1 (plasmid-encoded) was over-expressed in Saos-2 cells transiently expressing pRb, very little, if any, transactivation of the topo IIα promoter was observed (Figure 4B). Similar results were obtained when HMGB1 was transiently over-expressed in stable Saos-2 cell derivatives in which the activity of the Rb gene was controlled by a tetracycline-regulated promoter (data not shown). These results indicated that ectopically expressed pRb could counteract the stimulatory effect of HMGB1 on the topo IIα promoter. (ii) Knock-down of pRb. Silencing of pRb expression in H1299 cells by specific siRNAs (Figure 4C) could significantly enhance (up to ~3-fold) the activity of the topo IIα promoter in a luciferase gene reporter assay (Figure 4D), in agreement with a previously published microarray study demonstrating an enhanced activity of the topo IIα gene upon silencing of pRb in H1299 cells (44). Co-transfection of the cells with silenced pRb expression with a plasmid encoding HMGB1 could increase the activity of the promoter in a luciferase gene reporter assay up to ~12-fold (Figure 4D).
This was in contrast to the gene reporter assay using either no siRNA or a control siRNA. While the control siRNA had no effect on the activity of the topo IIα promoter, co-transfection of these cells with a plasmid encoding HMGB1 could increase the activity of the promoter only ~2-fold (Figure 4D). The above results provided strong evidence that the up-regulation of the topo IIα promoter by HMGB1 was inhibited by pRb. The pRb protein is functionally inactivated in most human neoplasms, either by direct mutation/deletion (such as occurs in osteosarcoma, retinoblastoma and small-cell lung carcinoma) or indirectly through the altered expression/activity of upstream regulators (42). pRb is a 928-amino-acid protein with functionally distinct protein-binding domains. The A/B pocket (residues 379-792), comprising the structural domains A and B (Figure 4F, upper panel), is a hot-spot for mutation in human cancers, and is the binding site of several viral oncoproteins that inactivate pRb (42). We therefore tested whether a mutation of pRb within the A/B pocket (C706F) could abolish the inhibitory effect of pRb on the HMGB1-mediated up-regulation of the topo IIα promoter. As shown in Figure 4B, the transcriptional activity of the topo IIα promoter in Saos-2 cells (in the absence of over-expressed HMGB1) was affected very little (if at all) by the transient expression of the mutant pRb. Similarly, the mutant pRb could not inhibit the HMGB1-mediated up-regulation of the activity of the topo IIα promoter in Saos-2 cells (Figure 4B). A large number of proteins have been demonstrated to interact with pRb, the majority of which are involved in transcriptional repression, such as histone deacetylases, components of the mammalian SWI/SNF chromatin-remodeling complex, histone methyltransferases, heterochromatin proteins, DNA methyltransferases and Polycomb group proteins (42). Therefore, we used the PD assay to study whether pRb could interact with the factors affecting the activity of the human topo IIα promoter, HMGB1 or NF-Y. Previous PD experiments revealed that HMGB1 and recombinant pRb (the unphosphorylated form) physically interact in free solution (28). In order to clarify whether HMGB1 could interact with pRb in the cell, nuclear lysate from pRb-positive H1299 cells was immunoprecipitated with a specific anti-HMGB1 antibody. As shown in Figure 4E, a weak band of pRb co-immunoprecipitated with HMGB1 from the H1299 lysate was apparent, suggesting an interaction of HMGB1 and pRb in vivo (co-immunoprecipitation of pRb with HMGB1 was also evident in immunoprecipitation experiments using nuclear lysate from human erythroleukemia K562 cells, unpublished). pRb was not co-immunoprecipitated with pre-immune rabbit IgG (Figure 4E). The electrophoretic mobility of the co-immunoprecipitated pRb corresponded to the non-phosphorylated form of the pRb protein [Figure 4E; see also (43)], although we have not yet directly determined the phosphorylation state of pRb involved in the interaction with HMGB1. To find out whether the different effects of the wild-type and mutant pRb on the inhibition of the HMGB1-mediated transactivation of the topo IIα promoter (Figure 4B) could be explained by a different interaction of HMGB1 with the wild-type and mutant pRb, the PD assay was performed. Although HMGB1 could bind to both the wild-type and the mutant pRb
(C706F, within the A/B pocket domain) (Figure 4F), a higher fraction of HMGB1 (~2-3-fold) was associated with the wild-type pRb than with the mutant pRb (equal amounts of GST-pRb immobilized on the glutathione-Sepharose beads were confirmed by immunodetection using anti-pRb antibody; lanes 1 and 2 in Figure 4E).

Figure 4 (legend). (A) The plasmid pTIIa-617, bearing the human topo IIα gene promoter, was co-transfected with either vector (empty bars) or a plasmid encoding HMGB1 (black bars) into pRb-negative Saos-2 cells or pRb-positive H1299 or MCF-7 cells. (B) Up-regulation of the activity of the human topo IIα promoter by HMGB1 is inhibited by wild-type but not mutant pRb. Saos-2 cells were transiently transfected with the plasmid pTIIa-617 and plasmids encoding the wild-type retinoblastoma protein pRb, the mutant pRb (C706F) or human HMGB2. Luciferase activity (transactivation) in (A) and (B) was measured and presented as in Figure 1. (C) Knock-down of pRb in H1299 cells. Nuclear lysates from untransfected cells (lane 1), cells transfected with control siRNA (lane 2) or with Rb-specific siRNAs (lane 3) were resolved by SDS-polyacrylamide gel electrophoresis, and the proteins were then transferred onto the membrane and immunodetected using anti-pRb or anti-actin antibodies. (D) HMGB1 significantly stimulates the activity of the topo IIα gene promoter in H1299 cells upon knock-down of pRb. Control siRNA or Rb-specific siRNAs were transiently co-transfected with the plasmid pTIIa-617 and either empty pcDNA3 vector or a plasmid encoding HMGB1 into pRb-positive H1299 cells as detailed in the Materials and methods section. Luciferase activity (transactivation) was measured and presented as in Figure 1. (E) Interaction of HMGB1 with pRb in H1299 cells. Cellular lysates from pRb-positive H1299 cells were immunoprecipitated (IP) with monoclonal α-HMGB1 antibody or control (pre-immune) IgG, followed by separation of the precipitated proteins by SDS-polyacrylamide gel electrophoresis, western blotting and immunodetection with polyclonal anti-pRb antibody as detailed in the Materials and methods section. (F) Schematic structures of pRb and the pRb constructs (upper panel) used for the 'pull-down' assay. GST or GST-pRb (wild-type or mutant) was immobilized on glutathione Sepharose beads and incubated with purified HMGB1 protein. After extensive washing of the beads, the proteins associated with the beads were resolved by electrophoresis, followed by western blotting and immunological detection with α-HMGB1 antibody as detailed in the Materials and methods section. Equal amounts of GST-pRb immobilized on the beads were confirmed by immunodetection using anti-pRb (C-15) antibody (lanes 2 and 3). Input, 10% of the HMGB1 used for the pull-down assay.

Thus, the distinct effects of HMGB1 on the activity of the topo IIα promoter in pRb-negative or pRb-positive cells (Figure 4A) could not be explained solely by the different affinities of the wild-type and mutant pRb for HMGB1 (Figure 4F). In order to understand the mechanism of the pRb-mediated inhibition of the activity of the human topo IIα promoter, we asked whether pRb could interact with NF-Y in vitro. As shown in Figure 2D (top), incubation of nuclear lysate from Saos-2 cells with GST-pRb immobilized on glutathione-agarose beads resulted in association of NF-Y with pRb. Similarly to the binding of NF-Y to HMGB1, the interaction between NF-Y and pRb was not facilitated via DNA, as similar results were obtained by PD assays in the presence or absence of ethidium bromide.
The detection of the pRb-(NF-Y) interaction prompted us to investigate, using EMSA, whether pRb could affect NF-Y binding to the specific ICE sites in vitro. The results shown in Figure 5A (lanes 2-5) demonstrated that the binding of NF-Y to a short DNA duplex containing the ICE2 was visibly compromised by preincubation of the nuclear lysate from Saos-2 cells with pRb (similar results were obtained from EMSA experiments using the ICE1, not shown). In addition, the HMGB1-mediated enhancement of NF-Y binding to the ICE2 was also reduced by preincubation of HMGB1 with pRb (Figure 5A; compare lanes 6, 7 and 8). We verified that the observed reduction of NF-Y binding by pRb (Figure 5A) was not unspecific, as titration with an unrelated protein (GST) in amounts comparable to pRb had no effect (not shown). The above results suggested that pRb could counteract NF-Y binding to DNA duplexes containing inverted CCAAT elements. The HMGB-type family consists of three members, HMGB1, HMGB2 and HMGB3, with ~80% amino-acid identity among the three proteins. While HMGB1 is ubiquitously expressed in all mammalian cells (~10⁶ molecules per cell), the expression of the other two family members is more restricted: HMGB2 is widely expressed during embryonic development, and HMGB3 is expressed in significant amounts only during embryogenesis (5,8). As both HMGB1 and HMGB2 are relatively abundant in human cells (including Saos-2 cells), we studied whether HMGB2 could also up-regulate the topo IIα gene promoter. As shown in Figure 4B, over-expression of HMGB2 in Saos-2 cells resulted in an up to ~2-fold higher transactivation of the topo IIα gene promoter relative to the effect of HMGB1. The latter was not due to a different over-expression of HMGB1 and HMGB2, as similar levels of the two ectopically expressed proteins were immunodetected in the transfected Saos-2 cells (not shown). Thus, it is likely that both HMGB1 and HMGB2 could transactivate the topo IIα gene in the cell. The two isoforms of topoisomerase II, α and β, exhibit very similar catalytic activities but differ in their production during the cell cycle (1-4). While the activity of the human topo IIα gene promoter was low in resting cells and enhanced during proliferation (such as in tumors), the activity of the human topo IIβ gene promoter remained constant throughout the cell cycle (49). Despite the distinct regulation of the two topo II gene promoters (49), NF-Y could bind to the ICEs of both types of promoters [(49) and refs therein]. Therefore, we wondered whether HMGB1 and/or HMGB2 could also modulate the transcriptional activity of the topo IIβ promoter. As shown in Figure 5B, the activity of the topo IIβ promoter in Saos-2 cells was dose-dependently up-regulated by HMGB1. Similarly to the topo IIα promoter (Figure 4B), the transactivation of the topo IIβ promoter was more prominent upon over-expression of HMGB2 (Figure 5B). Thus, the activity of both the topo IIα and the topo IIβ promoter could be up-regulated by HMGB1/2.

HMGB1 and HMGB2 modulate cellular expression of topoisomerase IIα

The results of this article indicated that HMGB1 and HMGB2 could up-regulate the topo IIα promoter in the luciferase gene reporter assay (Figures 1 and 4), suggesting the possibility that HMGB1 and HMGB2 could affect the cellular expression of topo IIα. To find out whether the cellular levels of topo IIα were affected by HMGB1 and/or HMGB2 in Saos-2 or H1299 cells, stable cell lines were generated producing plasmid-encoded short hairpin RNAs (shRNAs) specific for either HMGB1 or HMGB1/HMGB2 mRNAs.
In order to verify the silencing of HMGB1 and/or HMGB2 expression, proteins from the nuclear lysates were resolved by electrophoresis, transferred onto the membrane and detected with specific antibodies recognizing HMGB1 or HMGB2. As shown in Figure 6, the expression of HMGB1 was inhibited by >90% in H1299 cells by shRNAs specific for HMGB1 mRNA (lane 3; H1299) or for HMGB1 and HMGB2 mRNAs (lane 4; H1299). The latter approach resulted in a simultaneous knock-down of HMGB1 and HMGB2 expression. Similar results were obtained in Saos-2 cells, although the silencing of HMGB1 expression was only ~60-70% (Figure 6, lane 3; Saos-2). Control silencing experiments revealed that the levels of HMGB1 and HMGB2 were unaffected in both H1299 and Saos-2 cells transfected with a plasmid producing an unrelated shRNA (Figure 6, lanes 2, as compared to lanes 1 for untransfected cells). The cellular levels of topo IIα in cells with silenced HMGB1 or HMGB1/2 expression were then determined by western blotting and immunodetection. No significant changes in the topo IIα protein levels were observed in pRb-positive H1299 cells upon silencing of HMGB1 or HMGB1/2 expression (Figure 6, lanes 3 and 4; H1299), albeit QRT-PCR analysis revealed a minor decrease in topo IIα mRNA levels in H1299 cells with silenced HMGB1/2 expression (not shown). On the other hand, the topo IIα protein levels in pRb-negative Saos-2 cells with silenced HMGB1/HMGB2 expression were decreased up to ~3-fold relative to the control cells (Figure 6, lane 4; Saos-2). This finding was also supported by QRT-PCR analysis revealing a ~3-fold decrease in topo IIα mRNA levels in Saos-2 cells with silenced HMGB1/2 expression (data not shown). The fact that only a very small (<20%) decrease in topo IIα protein levels was observed when solely HMGB1 expression was silenced in Saos-2 cells (Figure 6, lane 3; Saos-2) may be related to a possible functional redundancy of these two closely related HMGB-type proteins (see Discussion section). In conclusion, we found that the chromosomal proteins HMGB1 and HMGB2 could modulate the cellular expression of topo IIα in cells lacking the functional retinoblastoma gene susceptibility protein pRb by promoting the binding of NF-Y to the topo IIα gene promoter. Our findings are discussed in view of the reported over-expression of HMGB1/2 proteins in tumors [(17) and refs therein], as well as the fact that the Rb gene is deleted/mutated in >50% of human cancers (42).

DISCUSSION

Recently, we reported that the chromosomal protein HMGB1 could stimulate the catalytic activities of human topoisomerase IIα by enhancing DNA binding and cleavage by the enzyme (11). Here we report that HMGB1 and its close relative, HMGB2, could also up-regulate the activity of the human topo IIα gene promoter and the cellular expression of the enzyme. Our results suggest that the HMGB1 and HMGB2 proteins may serve as positive regulators of the cellular activity of the topo IIα gene (this article) as well as of that of the enzyme (11). HMGB1 and HMGB2 proteins have been implicated in the regulation of chromatin structure and of DNA metabolic processes such as transcription, replication, recombination and repair (5,8). The lack of HMGB1 in knockout mice results in death within a few hours after birth due to the inefficient activation of glucocorticoid receptor-responsive genes, but HMGB2 knockout mice are viable (15,47).
The HMGB2 protein could not substitute in mice for the loss of HMGB1 (15,47), suggesting that the HMGB1 and HMGB2 proteins may have distinct roles, albeit most of the reported DNA-binding studies indicated that the in vitro DNA-binding properties of HMGB1 and HMGB2 were indistinguishable (5,7,8). Murine and human cells were viable without functional HMGB1, indicating that HMGB2 could partially compensate for the loss of HMGB1 in the cell (48). A possible functional redundancy of HMGB1 and HMGB2 (at least in some biological processes) may also explain our finding that the double knock-down of HMGB1 and HMGB2 in pRb-negative cells was necessary in order to observe a significant decrease in the cellular levels of topoisomerase IIα. However, as we have not succeeded in silencing HMGB2 expression alone, without concomitant silencing of HMGB1 expression (using various shRNA constructs directed against different parts of the hHMGB2 mRNA, unpublished results), we could not determine the exact contribution of the individual HMGB-type proteins to the cellular expression of topo IIα. HMGB1 is an abundant and highly mobile nuclear protein (~1 × 10⁶ molecules per nucleus), but despite its abundance the protein may be limiting within cells, as evident from transient over-expression of HMGB1 resulting in the transactivation of numerous genes (5,48). Previous transfection experiments revealed that HMGB1, but not HMGB1 lacking the acidic C-tail (HMGB1-ΔC), could stimulate the Gal4-VP16-mediated reporter gene expression (52,53), or the p53-dependent transactivation of the p53-responsive promoter in H1299 cells (54). Transactivation of the human topo IIα gene promoter by HMGB1 did not require the acidic C-tail of HMGB1, suggesting that the C-tail did not function as an activator of transcription of the topo IIα promoter (this article). The higher DNA binding and/or bending/looping by HMGB1-ΔC relative to the full-length HMGB1 [see also (14,18,35)] may explain why the truncated HMGB1 protein activated the topo IIα gene promoter more efficiently than the full-length HMGB1. Thus, shielding of the positively charged A+B di-domain of HMGB1 by the acidic C-tail (13,14,55) may account for the lower ability of the full-length protein to up-regulate the topo IIα gene promoter.

Figure 6. Knock-down of HMGB1 and HMGB2 results in inhibition of topo IIα expression. Nuclear lysates from untransfected (lanes 1) and mock-transfected (lanes 2) cells, as well as from cells transfected with plasmids producing shRNAs specific for HMGB1 (lanes 3, construct #A) or HMGB1/HMGB2 (lanes 4, construct #B), were resolved on SDS-polyacrylamide gels and transferred onto the membranes by western blotting. Proteins were detected with specific antibodies as detailed in the Materials and methods section.

We demonstrated that the intercalating residues of human HMGB1 (Phe38 and Phe103), which were essential for the ability of HMGB1 to bend DNA [see also (12,33,34,56)], were required for the ability of HMGB1 to enhance the binding of NF-Y to ICEs and for the up-regulation of the topo IIα gene promoter (this article). Interestingly, the intercalating residues of HMGB1 proved to be dispensable for protein-protein interactions such as the binding of HMGB1 to topoisomerase IIα [(11) and unpublished results] or to the progesterone receptor (56), as well as for the specific binding of the progesterone receptor to its cognate sites (56).
The mechanism by which the HMGB1/2 proteins stimulate the activity of the topo IIα promoter most likely involves an enhancement of transcription factor binding, possibly by pre-bending of the promoter DNA sequences. Numerous transcription factors have been reported to interact with HMGB1 and/or HMGB2, including TBP, Oct and Sp1 (8,27). The binding of NF-Y to short DNA duplexes containing the ICE1-4 binding sites (this article), as well as that of Oct-1/2 (74) and Sp1 (20) to their cognate DNA sites, was enhanced by HMGB1. NF-Y, Sp1 and ICBP90 have previously been shown to be required for efficient transcription from the human topo IIα and topo IIβ promoters [(23,49,50,51,58) and refs therein]. The ability of HMGB1/2 to enhance NF-Y binding to the topo IIα promoter in vivo could be explained by the HMGB1-(NF-Y) interaction, and possibly also by HMGB1-mediated pre-bending of the DNA-binding sites for NF-Y. The fact that no ternary HMGB1-(NF-Y)-DNA complex was detected does not necessarily mean that such a complex does not exist. It is possible that HMGB1 could 'deliver' NF-Y to its DNA-binding site by forming a ternary complex that is transient and unstable. This idea is in agreement with previous reports demonstrating the ability of HMGB1 to enhance the binding of a plethora of sequence-specific proteins to their cognate DNA sites without the formation of ternary complexes, despite the detection of the corresponding protein-HMGB1 interactions in 'pull-down' assays (5,8,11). The existence of interactions of pRb with NF-Y (and also with HMGB1) may constitute a basis for understanding the pRb-mediated inhibition of the activity of the topo IIα promoter. The observed decrease in the binding of NF-Y to ICEs by pRb, as well as the inhibition of the ability of HMGB1 to promote the binding of NF-Y to ICEs (this report), suggested that pRb could down-regulate the activity of the topo IIα promoter by preventing NF-Y binding to the inverted CCAAT elements. The exact nature of the interactions involving NF-Y, HMGB1, pRb (phosphorylated or non-phosphorylated?) as well as other binding partners on the topo IIα promoter is unknown. In addition, the interaction of HMGB1 with NF-Y, as well as the stimulation of NF-Y binding to several ICEs in vitro, may also explain our finding that the sequential deletion of ICEs 5-3 from the topo IIα promoter had very little effect on the ability of HMGB1 to up-regulate the promoter. Thus, our results could indicate a possible functional redundancy of the ICEs for the activation of the topo IIα promoter by HMGB1. The results of this article demonstrated that the HMGB1 and HMGB2 proteins could significantly up-regulate the human topo IIα gene promoter only in cells lacking the functional retinoblastoma protein pRb. The exact reason for the inability of HMGB1/2 to significantly up-regulate the activity of the human topo IIα gene promoter in pRb-positive cells is unknown. However, the detectable (albeit slight) transactivation of the topo IIα promoter in pRb-positive cells could be related to a weakening of the inhibitory effect of pRb on the promoter. This could be due to HMGB1 binding to pRb, which could partially counteract the inhibitory effect of pRb on NF-Y binding to the ICEs of the topo IIα promoter (see also the preceding paragraph).
On the other hand, the fact that HMGB1 could significantly (>10-fold) enhance the activity of the topo IIα promoter in Saos-2 cells transiently expressing the mutant pRb (mutated within the A/B pocket) was most likely related to the inability of the mutant to inhibit the activity of the topo IIα promoter, rather than to the pRb(mut)-HMGB1 interaction [see also (43)]. The possibility that the failure of the pRb mutant to inhibit the activity of the topo IIα promoter was related to the inability of the mutant to interfere with NF-Y binding to ICEs remains to be investigated. Previous reports indicated that p53 could also inhibit the activity of the topo IIα gene promoter despite the lack of clear consensus p53-binding sites within the human topo IIα gene promoter (26). This inhibition was explained as a consequence of an interference of p53 with NF-Y binding to the regulatory sequences of the promoter (25,46). The HMGB1 and HMGB2 proteins have previously been reported to promote the binding of p53 (or its close relative, p73) to their specific DNA-binding sites by DNA bending and by specific p53-HMGB1/2 interactions (20,59). Whether the latter interactions could counteract or even enhance the reported p53-mediated inhibition of the topo IIα gene promoter activity (20,24,45,60) is unclear. Interestingly, the catalytic activity of topo IIα was stimulated by p53 (46), albeit by a mechanism different from that reported for HMGB1 (11). Contrary to the action of pRb or p53, the HMGB-type proteins could both transactivate the topo IIα promoter and stimulate the catalytic activity of the enzyme (11). A possible mechanism summarizing the interplay of pRb, p53 and HMGB1/2 on the activity of the topo IIα gene and topoisomerase IIα is outlined in Figure 7. The two isoforms of topoisomerase II, α and β, are encoded by distinct genes on chromosomes 17q21-22 and 3p24, respectively (61,62). While the activity of the human topo IIα gene promoter is low in resting cells and enhanced during proliferation (such as in tumors), the activity of the human topo IIβ gene promoter is constant throughout the cell cycle (49). The two isoforms differ in their production during the cell cycle (1-4), but they exhibit very similar catalytic activities, including the sensitivity to anticancer drugs specifically targeting topo II. The latter may be interesting in view of our finding that the activities of both the topo IIα and the topo IIβ gene promoter were up-regulated by HMGB1/2. Whether the HMGB1/2-mediated activation of the topo IIβ gene promoter is also affected by pRb awaits further investigation. Although HMGB1 is expressed throughout the cell cycle with no significant variations (5), the protein is clearly over-expressed in most human tumors, including breast carcinoma, melanoma, gastrointestinal stromal tumors, colon carcinomas and acute myeloblastic leukaemia [(5,8,17,63) and references therein]. HMGB1 over-expression has also been observed upon administration of hormones (estrogen and/or progesterone), resulting in an increased potency of the anticancer drug cisplatin and its analogue carboplatin (64). Recently, HMGB2 was also found to be over-expressed in cancer (http://expression.gnf.org/cgi-bin/index.cgi#Q; Gene Expression Atlas, 38065_at). Most human tumors have excess DNA copies of the TOP2A gene (encoding topoisomerase IIα) [(65,66) and refs therein], and TOP2A gene amplification may explain the significantly enhanced cellular levels of topo IIα in tumors (67,68).
Topo IIα over-expression has been reported to be associated with poor cancer-specific survival and the presence of metastases (66). Up-regulation of topo IIα could significantly affect the responsiveness of tumors to drugs specifically targeting the enzyme (topo II poisons) (69,70). The fact that the HMGB1/2-mediated up-regulation of the cellular levels of topo IIα was most prominent in Rb-minus cells (this article) raises the question of a possible relevance of simultaneous HMGB1/2 over-expression and Rb deletions in tumors with respect to the clinical prognosis of patients treated with topo II poisons. While over-expression of the HMGB1/2 proteins could enhance the cellular levels (and/or activity) of topo IIα, a reduced transcriptional activity of the HMGB1/2 genes could diminish the cellular levels (and/or activity) of the enzyme. Several polymorphisms and/or mutations were recently identified in the HMGB1 gene with a potential regulatory impact on HMGB1 transcription (71). In this respect, B-cell chronic lymphocytic leukemia (B-CLL), a malignant disease with a highly variable clinical course, could be of interest due to deletions of the 13q14 locus (72). About 30-50% of patients with 13q14 deletions bear Rb deletions (some of them even biallelic, unpublished results). Interestingly, the HMGB1 gene is localized in the 13q12 locus (5,73). Further research is necessary to fully understand the involvement of the HMGB-type proteins and their possible interplay with the tumor suppressor proteins pRb and p53 in the modulation of the cellular activity of topo IIα.

Figure 7. Modulation of the activity of the human topo IIα gene and of topoisomerase IIα by pRb and HMGB1/2 (a hypothesis). The HMGB1 and HMGB2 proteins up-regulate the expression of the human topo IIα gene in pRb-negative cells by enhancing the binding of transcription factor NF-Y to its specific DNA-binding sites (ICEs) within the topo IIα gene promoter. Binding of NF-Y to the ICEs is facilitated by pre-bending of the DNA sequences by HMGB1/2. pRb can inhibit the activity of the topo IIα gene, as well as the ability of HMGB1 to up-regulate the gene, possibly by reducing the binding of NF-Y to ICEs (this article). HMGB1/2 promote the cellular expression of topoisomerase IIα (this article), and the proteins also have the potential to enhance the catalytic activity of the enzyme, as previously demonstrated in vitro [(11) and unpublished results]. p53 inhibits the activity of the topo IIα gene by compromising NF-Y binding (24,45). Unlike the inhibitory effect of pRb on the catalytic activity of topo IIα (22), the enzyme is stimulated by p53 (46).

ACKNOWLEDGEMENTS

... is greatly appreciated. We also thank Lenka Juráčková for excellent technical assistance, Jitka Malčíková for the evaluation of the QRT-PCR results, and François Strauss (Université Pierre et Marie Curie, Paris) for critical reading of the manuscript and many helpful suggestions.
Brain Mapping of Developmental Coordination Disorder

This chapter discusses the brain mapping of developmental coordination disorder (DCD). DCD is a neurological disorder characterised by impaired motor coordination and impaired performance of daily activities that require motor skills. In the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV) [1], DCD is included in the Learning Disorders and the Motor Skills Disorders sections [1]. DCD is one of the most common disorders in childhood, and it affects 5% to 6% of school-age children.

Introduction

This chapter discusses the brain mapping of developmental coordination disorder (DCD). DCD is a neurological disorder characterised by impaired motor coordination and impaired performance of daily activities that require motor skills. In the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV) [1], DCD is included in the Learning Disorders and the Motor Skills Disorders sections [1]. DCD is one of the most common disorders in childhood, and it affects 5% to 6% of school-age children.

DCD is a heterogeneous disorder, and its manifestations are varied and often complex. A meta-analysis of the DCD literature published between 1974 and 1996 showed that the greatest deficiency in these patients was in visual-spatial processing [2]. The latest meta-analysis, of 128 studies, suggested that children with DCD show underlying problems in the visual-motor translation (namely inverse modelling) of movements that are directed within and outside peripersonal space, in adaptive postural control, and in the use of predictive control (namely forward modelling), which impacts their ability to adjust movement to changing constraints in real time [3]. The underlying cognitive mechanisms are still a matter of discussion.

Previous clinical and experimental studies have indicated that the motor skill difficulties in DCD children may be related to dysfunction in the parietal lobe [4], the cerebellum (CB) [5], the basal ganglia (BG) [6], the hippocampus [7] and the corpus callosum [8]. However, because the motor system is highly complex, this is not a given conclusion.

Neuroimaging, including functional magnetic resonance imaging (fMRI), will create a new standard in the understanding of the complex cognitive functions of a child's brain. Therefore, it is useful to review the data from current DCD neuroimaging studies as the next critical step in enhancing our understanding of DCD. Clarifying DCD pathogenesis will be beneficial to clinicians as well as to the children suffering from DCD.

Neuroimaging studies of DCD

We searched the Medline database with the terms 'neuroimaging' and 'DCD' for original research articles written in English. There were few DCD neuroimaging studies, and only 6 neuroimaging studies that involved the direct identification of the neural substrates responsible for DCD were available (Table 1). [No. 1] The first of these studies [9] reported that DCD children exhibited abnormal brain hemispheric specialisation during development when performing a go/no-go task. Connectivity analyses in the middle frontal cortex-anterior cingulate cortex-inferior parietal cortex (IPC) network indicated that children with DCD are less able than healthy children to easily or promptly switch between go and no-go motor responses. This was the first fMRI study to clarify the attentional brain network of DCD children.
[No. 2] In 2009, Kashiwagi et al. [10] (our group) showed poor performance and less activation in the left superior parietal lobe (SPL), the left inferior parietal lobe (IPL) and the left postcentral gyrus in DCD children during visuomotor tasks. This was the first fMRI study to elucidate the neural underpinnings of DCD in children by using a visuomotor task. Furthermore, a connection between the brain activity in the left IPL and the task performance that represented clumsiness was suggested. [No. 3] In 2010, Zwicker et al. [11] demonstrated that DCD children activate different brain regions compared to control children when performing the same trail-tracing task. They found that the correlation of the activation of the right middle frontal gyrus with the number of traces indicated cognitive effort in the children with DCD. [No. 4] In 2011, Zwicker et al. [12] found that DCD children demonstrated decreased activation in cerebellar-parietal and cerebellar-prefrontal networks as well as in brain regions associated with visuospatial learning. This was the first study in DCD children to examine changes in the patterns of brain activation associated with skilled motor practice. [No. 5] In 2010, Marien et al. [13] reported that the CB is crucially implicated in the pathophysiological mechanisms of DCD, reflecting a disruption of the cerebello-cerebral network involved in executing planned actions, visuospatial cognition and affective regulation. This was the first single-photon emission computed tomography study of children with DCD. [No. 6] In 2012, Zwicker et al. [14] showed that the mean diffusivity of motor and sensory pathways is lower in DCD children. In addition, differences in the intrinsic characteristics of axons or in the extra-axonal/extracellular space may underlie some of the deficits observed in DCD children. This was the first DTI study in children with DCD.

Different patterns of activation of cerebral areas in DCD patients compared to controls in fMRI motor control tasks

In order to elucidate the main mechanisms underlying the impaired motor skills in DCD patients, we have to examine brain activities related to motor performance during motor control tasks. There were 3 fMRI studies (No. 2, 3 and 4) on motor control tasks in DCD patients; one study included a motor learning task. The cerebral areas listed below showed significant differences in activation between DCD children and control children during the motor control and motor learning tasks, together with the functions of those areas.

Study design and conditions

The experiment was designed in a block manner and consisted of the following 3 conditions: 1) Tracking condition (TC): tracking the moving blue target by manipulating the joystick; 2) Watching condition (WC): watching the moving red target and the white cursor without hand manipulation; and 3) Resting condition (RC): looking at a fixation cross. Each condition lasted for 24 s and was repeated 6 times in a pseudo-randomised order (Figure 1). All of the participants were trained through 40 trials of tracking before scanning. The participants achieved their best performance after several trials. Task performance was represented by the distance (pixels) between the centre of the target and the cursor. We recorded 6 sets of data on the distance and the velocity changes for each participant, and the effects of the group and the participants (within group) on these data were analysed by a two-factor nested design analysis of variance. Furthermore, the effects of the trial numbers and the participants on the task performance during the final 6 training trials and 6 scanning trials were analysed with a factorial two-way analysis of variance.
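A minimal sketch, assuming pandas and statsmodels are available, of the two quantitative steps just described: the pixel-distance performance measure and the two-factor nested ANOVA (participants nested within group). All column names, subject labels and numbers are invented for illustration; this is not the study's analysis code.

```python
import math

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def mean_tracking_error(target_xy, cursor_xy):
    """Mean target-to-cursor Euclidean distance over matched samples (pixels)."""
    return sum(math.dist(t, c) for t, c in zip(target_xy, cursor_xy)) / len(target_xy)

# (i) Task performance for one illustrative trial.
target = [(100, 100), (120, 100), (140, 100)]
cursor = [(103, 104), (131, 100), (140, 92)]
print(f"trial error: {mean_tracking_error(target, cursor):.1f} px")  # 8.0 px

# (ii) Toy per-trial errors: 2 subjects per group, 3 trials each.
# Subject labels are reused within each group so that C(group):C(subject)
# codes "subject nested within group" without empty cells.
df = pd.DataFrame({
    "distance": [12.1, 13.4, 11.8, 14.0, 13.1, 12.6,
                 25.2, 27.9, 24.6, 26.3, 28.1, 25.5],
    "group":   ["ctrl"] * 6 + ["dcd"] * 6,
    "subject": ["s1"] * 3 + ["s2"] * 3 + ["s1"] * 3 + ["s2"] * 3,
})

# C(group) / C(subject) expands to group + subject-within-group. Note that
# in a proper nested design the group effect is tested against the
# subject-within-group mean square, not the residual, so anova_lm's
# default F-tests would need that adjustment for a real analysis.
model = smf.ols("distance ~ C(group) / C(subject)", data=df).fit()
print(anova_lm(model))
```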
Behavioural data Figure 2a shows the behavioural results for a DCD child and a control child. The DCD child showed larger errors at the return point, and particularly at the beginning point, than the control child. Imaging data In the comparison of the watching condition versus the resting condition (WC - RC), neither the DCD-greater-than-control nor the control-greater-than-DCD comparison revealed significant differences in the activation maps between the groups. In the comparison of the tracking condition versus the watching condition [(TC - RC) - (WC - RC)], greater activation was not observed in the DCD-greater-than-control comparison. Conversely, the control-greater-than-DCD comparison showed differences in activation in the left hemisphere. Different brain activation in the comparison of the tracking condition versus the watching condition [(TC - RC) - (WC - RC)] in the visually guided tracking task between DCD patients and controls (DCD < control only). No. 4: Zwicker et al. study. Different brain activation in the retention condition versus the early condition in the trail-tracing task between DCD patients and controls (DCD < control only): Right cerebellar crus I: working memory and executive functions [40]. Left cerebellar lobule VI: part of the sensorimotor network of the CB [40], spatial processing [41], performance of a variety of tasks, including serial reaction time tasks [42], motor sequence learning [43], reaching tasks [44] and planned, discretely aimed arm movements [45], as well as the magnitude of motor correction during visuomotor learning [46]. Left cerebellar lobule IX: unclear [40]. What is DCD?
Historical perspectives At the beginning of the 20th century, an awareness of different levels of motor performance was clearly described in studies that classified the motor abilities of children as very clever, clever, medium, awkward or very awkward [55]. As early as 1926, Lippitt was concerned specifically with poor muscular coordination in children [56]. Orton's (1937) discussion of developmental apraxia, or abnormal clumsiness, was strongly influenced by ideas about adult apraxia and damage to the dominant hemisphere [57]. Since the early 1960s, many terms have been used to describe children whose motor difficulties interfere with daily living, including developmental apraxia and agnosia, minimal cerebral dysfunction (Wigglesworth, 1963) [58], minimal brain dysfunction (Clements, 1966) [59], minimal cerebral palsy (Kong, 1963) [60] and developmental dyspraxia [61]. At a 1994 consensus meeting in London, Ontario (Polatajko et al., 1995) [62], a multidisciplinary group of internationally recognised researchers who work with children with motor clumsiness agreed to use the term developmental coordination disorder as described by the American Psychiatric Association (APA) in the DSM-IIIR (APA, 1987) and revised in the DSM-IV (APA, 1994). What is clumsiness? What is dexterity? Clumsiness is defined by Morris and Whiting as maladaptive motor behaviour in relation to expected or required movement performance [63]. The antonym of clumsiness is dexterity. Dexterity is the ability to find a motor solution for any external situation, that is, to solve any emerging motor problem correctly (adequately and accurately), quickly (with respect to both decision making and achieving a correct result), rationally (expediently and economically) and resourcefully (quick-wittedly and with initiative). In many movements and actions there are no absolutely unpredictable events, but these movements nevertheless require quick and accurate adaptation to external events that cannot be predicted with certainty. This accurate movement adaptation is important for dexterity. The heart of the problem is to find, quickly and correctly, a solution under the conditions of an unexpectedly changed environment. Dexterity apparently lies not in the motor action itself but is revealed by its interaction with changing external conditions, including uncontrolled and unpredicted influences from the environment. An essential feature of dexterity is that it always refers to the external world. Moreover, dexterity is a complex activity: real-life movements include an element of adaptation to various, although perhaps minor, unexpected events [64]. Quick and correct motion is fundamental to dexterous performance. Quick motion means the rapid initiation of action and fleetness of the performance itself. Accurate motion implies spatially and temporally accurate performance. As we move more rapidly, we become more inaccurate with respect to the goal we are trying to achieve. The adage 'haste makes waste' has been a long-standing viewpoint on motor skills.
Identifying optimal measurements of skill learning is not trivial [65]. Previous studies have typically defined skill acquisition in terms of a reduction in movement execution time or reaction time, an increase in accuracy or a decrease in movement variability. Yet these measurements are often interdependent, in that faster movements can be performed at the cost of reduced accuracy and vice versa, a phenomenon often referred to as the speed-accuracy trade-off. The principles of speed-accuracy trade-offs, known as Fitts' law, are specific to the goal and nature of the movement task [66]. One solution to this issue is the assessment of changes in the speed-accuracy trade-off function. Therefore, we should assess task performance with both speed and accuracy. The visually guided tracking task that we adopted in our fMRI study has been used experimentally to evaluate motor skills. We assessed task performance as the change in the velocity of the cursor for speed and the distance between the target and the cursor for accuracy.
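As an illustration of the trade-off, Fitts' law predicts movement time from the distance to the target and the target's width. The sketch below states the classic formulation generically; it is not taken from the study itself, and the constants `a` and `b` are empirical values that must be fitted for a given task.

```python
import math

def fitts_movement_time(a: float, b: float, distance: float, width: float) -> float:
    """Fitts' law: MT = a + b * log2(2D / W).

    Predicted movement time MT rises as the target gets farther away (D)
    or smaller (W), capturing why faster movements tend to be less accurate."""
    index_of_difficulty = math.log2(2 * distance / width)
    return a + b * index_of_difficulty
```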
What is motor learning? Children with DCD have difficulties with motor performance and motor learning [67]. Most clinicians and researchers agree that difficulty with motor learning is a key feature of DCD. Motor learning depends on maturation, experience and active learning. It has been described as a set of processes that are associated with practice or experience and that lead to relatively permanent changes in the capability for producing skilled actions [68]. Motor skill learning means, in other words, dexterity learning. Therefore, as mentioned above, accurate movement adaptation is an important factor in motor skill learning. Three main theories of motor learning apply. Fitts and Posner (1967) distinguished the following 3 phases of motor learning: cognitive, associative and autonomous [69]. Hikosaka and colleagues proposed a model of motor skill learning. According to this model, 2 parallel loop circuits operate in the learning of the spatial and motor features of sequences. Whereas the learning of spatial coordinates is supported by the frontoparietal associative BG-CB circuit, the learning of motor coordinates is supported by the primary motor cortex-sensorimotor BG-CB circuit. Transformations between the 2 coordinate systems rely on the contribution of the supplementary motor area (SMA), the pre-supplementary motor area (preSMA) and the premotor cortices. Importantly, it has been suggested that the learning of spatial coordinates is faster yet requires additional attentional and executive resources, putatively provided by prefrontal cortical regions [70] (Figure 4. (a)). Similarly, on the basis of brain imaging studies, Doyon and Ungerleider (Doyon and Ungerleider's model of motor skill learning) [71] proposed that cerebral plasticity is important within the cortico-striatal and cortico-cerebellar systems during the course of learning a new sequence of movements (motor sequence learning) or adapting to environmental perturbations (motor adaptation). This model proposes that, depending upon the nature of the cognitive processes required during learning, both motor sequence and motor adaptation tasks recruit similar cerebral structures early in the learning phase: the striatum, CB and motor cortical regions, in addition to prefrontal, parietal and limbic areas. Dynamic interactions between these structures are likely to be crucial in establishing the motor routines necessary for learning skilled motor behaviour. A shift of the motor representation from the associative to the sensorimotor striatal territory can be seen during sequence learning, whereas additional representation of the skill can be observed in the cerebellar nuclei after practice in a motor adaptation task. When consolidation has occurred, the subject has achieved asymptotic performance, and performance has become automatic; the neural representation of the new motor skill at that stage is believed to be distributed in a network of structures involving the cortico-striatal or cortico-cerebellar circuit, depending on the type of motor learning acquired. At this stage, the model suggests that the striatum is no longer necessary for the retention and execution of the acquired skill in motor adaptation; regions representing the skill at this stage include the CB and related cortical regions. In contrast, a reverse pattern of plasticity is thought to occur in motor sequence learning, such that the CB is no longer essential with extended practice, and long-lasting retention of the skill is believed at this stage to involve representational changes in the striatum and the associated motor cortical regions (Figure 4. (b)). Both models share the view that motor skill learning involves interactions between distinct cortical and subcortical circuits that are crucial for the unique cognitive and control demands associated with each stage of skill acquisition [65]. Which brain areas are associated with DCD? The parietal lobe The parietal lobe plays a critical role in numerous cognitive functions, particularly in the sensory control of action [72]. As is well known, lesions in the left posterior parietal cortex (PPC) are associated with apraxia, a higher-order motor disorder, whereas lesions in the right PPC are associated with unilateral neglect, an attentional disorder [73]. The results of a meta-analysis of the information processing deficits associated with DCD showed that DCD children have significantly poorer visual-spatial processing than healthy controls [2,3]. This evidence suggests that the parietal lobe may be implicated in DCD because of its primary role in the processing of visual-spatial information [74]. In addition, DCD children are less competent in their ability to recognise emotion [75], which has been linked to parietal lobe involvement [76]. Some clinical studies have supported the notion that the parietal lobe is associated with the mechanisms underlying the impaired motor skills of DCD children. Wilson et al. [77] conducted a study on procedural learning in DCD children and stated that the neurocognitive underpinnings of the disorder may be located in the parietal lobe and not in the BG. Another study involving mental rotation tasks indicated that DCD children might have dysfunction in the parietal lobe, which is involved in the internal representation of movement [78]. In a recent study, Hyde et al.
found that children with DCD show a response pattern similar to that of patients with lesions of the PPC on a number of paradigms that assess aspects of internal modelling. This has led to the hypothesis that DCD may be attributable to dysfunction at the level of the PPC [79]. Furthermore, a study of imagined motor sequences revealed that the performance of real and imagined tasks is dissociated in DCD children; this finding indicates that a disruption in the motor networks of the parietal lobe is associated with the generation of internal representations of motor acts [80]. In addition, this group found that the motor imagery ability of DCD children varied according to their level of motor impairment [81], and motor imagery training ameliorated the clumsiness of DCD children [82]. The recommendations on the definition, diagnosis and intervention of DCD by the European Academy for Childhood Disability [3] refer only to the study by Katschmarsky [4], which considered parietal dysfunction an underlying organic defect in DCD children. The cerebellum The CB is related to motor skill learning. Given the CB's role in motor coordination and postural control, it may be involved in the neuropathology of DCD [74]. Geuze reported that the major characteristics of poor control in DCD are inconsistent timing of muscle activation sequences, co-contraction, a lack of automation and slowness of response. Converging evidence indicates that cerebellar dysfunction contributes to the motor problems of children with DCD [83]. Motor adaptation, which is also thought to reflect cerebellar function [71], has been demonstrated in children with DCD [84]. Waelvelde reported that the parameterisation of movement execution in the Rhythmic Movement Test by children with DCD was significantly less accurate, both in time and in space, than the performance of same-aged typically developing children. The data of that study support the notion that some children with DCD manifest impairments in the generation of internal representations of motor actions and support the hypothesis that there is some form of cerebellar dysfunction in some children with DCD [85]. The basal ganglia The BG are involved in motor control and motor skill learning. Clumsiness is a term associated in childhood with problems in the learning and execution of skilful movements, the neuronal basis of which is, however, poorly understood. Groenewegen reported that, insofar as deficient motor programming is involved, the BG probably play a role [86]. Wilson et al., however, did not identify any evidence that the BG are implicated in DCD [77]. The hippocampus Hippocampal, cortico-cerebellar and cortico-striatal structures are crucial for building the motor memory trace [71]. Neural structures such as the hippocampus, parietal cortex and CB have been proposed to contribute to the process of learning new motor sequences. Gheysen et al. found that the sequence learning problems of DCD children might be located at the stage of motor planning rather than at sequence acquisition [87]. That the hippocampus and CB could be involved in the neuropathology of DCD has frequently been proposed, given their function in motor coordination and adaptation [71,88].
The corpus callosum Sigmundsson reported that only DCD children showed significant performance differences in favour of the preferred hand under visual/proprioceptive or proprioceptive conditions. This finding was thought to suggest that the developmental lag exhibited by DCD children might have pathological overtones, possibly related to the development of the corpus callosum [89]. Recent neuroimaging studies The parietal lobe and CB are key brain regions that have been highlighted in recent neuroimaging studies of the visuomotor performance of children with DCD. The functions of these 2 regions are known to involve the motor adaptation component of motor learning in the 2 models of motor skill learning described above. In addition, the parietal lobe and striatum are known to be involved in the motor sequence learning component of motor learning. Accordingly, the parietal lobe is a region associated with sensory input, motor output, motor adaptation and motor sequence learning. In our results, parietal dysfunction reflected the difference in brain activity between DCD and control children during the phase of automation. The task in our study was easy to master, and, therefore, the performances of DCD patients and controls had already reached their plateau before the scanning trials. Thus, this study did not involve motor learning effects. In our fMRI task, the speed of the target changed sinusoidally during its 12-s round trip. Consequently, we studied both motor sequence learning and motor adaptation in our fMRI task. We reported that DCD children showed poor performance and less activation in the left PPC and postcentral gyrus during the visuomotor task. Thus, a connection was suggested between brain activity in the left PPC and clumsiness. In the results from other studies, dysfunction of the CB may reflect the different brain activities between DCD children and controls in the consolidation condition versus the cognitive processes of the early-slow learning phases. In that study, tracing accuracy in control children improved from early practice to consolidation and was accompanied by increased activation in several brain regions. In contrast, the DCD children did not show any improvement in tracing accuracy. The authors noted that further work with a larger sample is needed to confirm the hypothesis that these areas of brain activation may contribute to improved motor performance. With this study design, the results mainly reflected motor adaptation. Current conclusion: Why are DCD children clumsy? DCD is a disorder of impaired performance of daily activities that require motor skill. The movement parameters of daily activities appear to be encoded by delayed recall and require easy motor skills. Even though DCD children can learn easy motor skills, it remains a pressing question why they usually require more practice than healthy children and why their quality of movement may still be compromised.
From the viewpoint of the recent models of motor skill learning, previous studies and the recent neuroimaging studies, DCD children have some difficulties with the cognitive processes of the fast learning phase, consolidation in the slow learning phase and automation in the retention phase during simple and easy motor skill learning (motor sequence learning and motor adaptation). Accordingly, it is not always easy for DCD children to perfectly acquire even simple motor skills. In fact, the real-life movements required for daily simple and easy activities involve an element of adaptation to various, although perhaps minor, unexpected events. Therefore, we assume that DCD children are required to adapt to daily simple and easy activities as if they were unexpected, new motor learning every day, because it is hard for DCD children to perfectly consolidate motor skills. In addition, DCD children show that the more a task demands the integration and adaptation of different information, the more vulnerable their performance is. Accordingly, we consider that motor adaptation is more important than motor sequence learning for DCD children. Given that the main clinical deficit in DCD children concerns motor adaptation, dysfunction in the parietal lobe and CB contributes to the mechanism underlying DCD. In addition, considering that DCD children have problems with sensory input and motor output, we conclude that the parietal lobe is the main neural substrate responsible for DCD. 4.3. Future studies of DCD (mirror neuron system, functional connectivity approaches, default mode network, intervention, and motor imagery training) The mirror neuron system hypothesis DCD includes impairments in motor skills, motor learning and imitation. A better understanding of the neural correlates of the motor and imitation impairments of DCD children holds the potential to inform the development of treatment approaches that can address these impairments. In recent years, the discovery of a frontoparietal circuit known as the mirror neuron system (MNS) has enabled researchers to better understand imitation, general motor functions and aspects of social cognition. Given its involvement in imitation and other motor functions, it has been proposed that dysfunction in the MNS may underlie the characteristic impairments of DCD [90]. Functional connectivity approaches Most past studies of brain function have built on the concept of the localisation of function, in that different brain regions support different forms of information processing. Yet no brain region exists in isolation. Information flows between regions through the action potentials conducted by axons, which are bundled into large fibre tracts. For more than a century, neuroanatomists have mapped the anatomical connections between brain regions in an attempt to understand the structural connectivity of the brain. While much remains to be discovered, the study of the anatomical connections between brain regions has provided a cornerstone for neuroscience research.
Despite the value of this anatomical research, knowledge of the structural connections between brain regions can provide only a limited picture of information flow in the brain. Descriptions of functional connectivity, that is, of how the activity of one brain region influences activity in another brain region, are also needed. Many researchers interested in functional connectivity have adopted fMRI techniques because of their utility for measuring changes in activation throughout the entire brain. This approach is useful for the brain mapping of DCD [91].
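As a minimal sketch of what a functional connectivity analysis computes — not the pipeline of any particular DCD study — the pairwise correlation of regional BOLD time courses can be expressed as follows; the region-by-time input array is a hypothetical, already-preprocessed extract.

```python
import numpy as np

def functional_connectivity(roi_timeseries: np.ndarray) -> np.ndarray:
    """Pairwise Pearson correlations between regional BOLD time courses.

    roi_timeseries: array of shape (n_regions, n_timepoints).
    Returns an (n_regions, n_regions) matrix of Fisher z-transformed
    correlations, a common form for subsequent group statistics."""
    r = np.corrcoef(roi_timeseries)
    np.fill_diagonal(r, 0.0)  # ignore self-correlations before the transform
    return np.arctanh(r)
```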
Default mode network Functional brain imaging studies with fMRI in normal human subjects have consistently revealed the expected task-induced increases in regional brain activity during goal-directed behaviours. These changes are detected when comparisons are made between a task state, which is designed to place demands on the brain, and a resting state with a set of demands that are uniquely different from those of the task state. Functional imaging studies should therefore consider the need to obtain information about the baseline. Researchers have also frequently encountered task-induced decreases in regional brain activity, even when the control state consists of the subject lying quietly with their eyes closed or passively viewing a stimulus. Whereas cortical increases in activity have been shown to be task specific and therefore to vary in location depending on the task demands, many decreases appear to be largely task independent and to vary little in their location across a wide range of tasks. The consistency with which certain areas of the brain participate in these decreases raises the question of whether there might be an organised mode of brain function that is present as a baseline or default state [92]. Spatial patterns of spontaneous fluctuations in blood oxygenation level-dependent signals reflect the underlying neural architecture. The study of brain networks based on these self-organised patterns is termed resting-state fMRI. The notion of a default mode of brain function (DMN) has taken on particular relevance in human neuroimaging studies in relation to a network of lateral parietal and midline cortical regions that show prominent activity fluctuations during the resting state [93]. The DMN is a prominent large-scale brain network that includes the ventral medial prefrontal cortex, the posterior cingulate/retrosplenial cortex, the IPL, the lateral temporal cortex, the dorsal medial prefrontal cortex and the hippocampal formation [94]. The parietal lobe is thus also an important area of the DMN. The DMN is unique in terms of its high resting metabolism, its deactivation profile during cognitively demanding tasks and its increased activity during the resting state and high-level social cognitive tasks. There is growing scientific interest in understanding the DMN underlying the resting state and higher-level cognition in humans. A recent study found that a goodness-of-fit analysis applied at the individual subject level suggested that activity in the default-mode network might ultimately prove to be a sensitive and specific biomarker for incipient Alzheimer's disease [95]. The functional and structural maturation of networks comprising discrete brain regions is an important aspect of brain development. The putative functions of the DMN, as well as the maturation of cognitive control mechanisms, develop relatively late in children, and they are often compromised in neurodevelopmental disorders such as autism spectrum disorders and attention-deficit/hyperactivity disorder [96]. The relationship between DMN structure and function in DCD children is not known. Examining the developmental trajectory of the DMN is important not only for understanding how the structures of the brain change during development and impact the development of key functional brain circuits, but also for understanding the ontogeny of the cognitive processes subserved by the DMN [97]. Such multimodal imaging analyses will be important for a better understanding of how local and large-scale anatomical changes shape and constrain typical and atypical functional development. Future research should systematically explore the developmental trajectory of the DMN in a normal population and compare it with the maturation of the DMN in DCD children. Intervention Can dexterity be individually developed? Is it an exercisable capacity? The answer is positive and multifaceted. It is obvious that the natural, inborn, constitutional prerequisites for dexterity are, and will be, as different in different persons as their other psychophysical abilities. The attainable individual peaks of development, the degrees of difficulty and the necessary amount of time for achieving a certain result will inevitably show great individual variation. It is much more important to state that all natural prerequisites for dexterity can be developed: both aspects of the structural complex that results in dexterity can be exercised and developed. In a systematic review of interventions for DCD children, Hillier concluded that, in general, an intervention for DCD is better than no intervention [98]. Independently, the guideline group performed a systematic literature search of studies published from 1995 to 2010. There is sufficient evidence that physiotherapy and/or occupational therapy interventions are better than no intervention for DCD children [3]. There are many different treatment approaches for DCD. The approaches to intervention fall into the following 2 categories: process-oriented (bottom-up) and task-oriented (top-down) [99]. Process-oriented approaches include sensory integration therapy, kinaesthetic training and perceptual motor therapy. Task-oriented approaches include Cognitive Orientation to daily Occupational Performance, neuromotor task training and motor imagery training [3]. In addition, studies have shown that process-oriented approaches may sometimes be effective but are less so than the task-oriented approaches, which are based on motor learning theories [100]. Motor imagery training Motor imagery (MI) training is a cognitive approach that was developed by Wilson [101]. It uses the internal modelling of movements to help the child predict the consequences of actions in the absence of overt movement. MI is a new intervention method for DCD children. The literature has already described MI training as a method in stroke rehabilitation [102,103]. MI training has been investigated once in a randomised controlled trial, and it showed a positive effect when combined with active training [81].
In an fMRI study that investigated whether the neural substrates mediating MI differed among participants showing high or poor MI ability, intergroup comparisons revealed that good imagers exhibited more activation in the parietal and ventrolateral premotor regions, which are known to play a critical role in the generation of mental images [104]. Our data also indicated that dysfunction in the parietal lobe, such as that in motor imagery, might be a mechanism underlying the motor skill deficits of DCD children. Thus, on the basis of our data, MI training may be a helpful strategy for DCD children. Conclusion From clinical and neuroimaging studies and models of motor skill learning, we conclude that parietal lobe dysfunction is the main mechanism underlying DCD. In addition, the parietal lobe is a key area for the MNS and MI training. However, the parietal lobe is not the only neural correlate of DCD. Dysfunctions in the CB, striatum and hippocampus are also related to the neurobiology underlying DCD. In order to further elucidate the pathogenesis of DCD and interventions for it, additional neuroimaging studies, including DMN and DTI studies, are needed to link the neural networks and the functional connectivity of brain regions during motor performance. Figure 1. Our study (No. 2) design and conditions. The experiment was designed in a block manner and consisted of 3 conditions. Each condition lasted 24 s and was repeated 6 times in a pseudo-randomised order. Figure 2. (a) The behavioural results of a child with DCD and a control child. The blue line shows the trajectory of the target, the green line shows the trajectory of the cursor and the red line shows the distance between the target and the cursor. The DCD child showed larger errors at the return point, and particularly at the beginning point, than the control child. The distance between the target and the cursor and the change in the velocity of the cursor were significantly greater in the DCD group than in the control group (mean distance, 22.8 vs. 19.5 pixels, P = 0.001; mean velocity change, 398.5 vs. 369.9 pixels/s/s, P = 0.013). The number of trials did not significantly affect task performance in either group over the final 6 training trials and 6 scanning trials [training trials: DCD group, F(5,55) = 0.41, P = 0.839 and F(5,55) = 1.20, P = 0.322; control group, F(5,55) = 0.49, P = 0.784] [scanning trials: DCD group, F(5,55) = 0.41, P = 0.839; control group, F(5,55) = 0.49, P = 0.780]. (b) Mean task performances for the DCD and control groups during the 6 scanning trials. The vertical bars indicate the standard errors of the means for each data point.
Figure 3. (a) Brain activity differences between the DCD and control groups. In the comparison of the tracking condition versus the watching condition, the control-greater-than-DCD comparison showed differences in left hemisphere activation in the left SPL and IPL and the left postcentral gyrus (P < 0.001 at the voxel level and P < 0.05 with a correction for multiple comparisons at the cluster level). (b) Mean task performances and magnetic resonance signal changes for the DCD and control groups in the IPC. The vertical bars indicate the standard errors of the means, and the horizontal bars indicate the 90% confidence intervals for each data point. Table 2. Cluster sizes, Z-values and coordinates (No. 3: Zwicker et al. study; different brain activation in the trail-tracing task between DCD patients and controls). BA, Brodmann area; L, left; R, right.
2019-01-02T18:23:22.099Z
2013-06-19T00:00:00.000
{ "year": 2013, "sha1": "749d0ab3b261cf625cddc9f2f4811484dfe13f90", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5772/56496", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "749d0ab3b261cf625cddc9f2f4811484dfe13f90", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253419367
pes2o/s2orc
v3-fos-license
Temporal Eye–Hand Coordination During Visually Guided Reaching in 7- to 12-Year-Old Children With Strabismus Purpose We recently found slow visually guided reaching in strabismic children, especially in the final approach. Here, we expand on those data by reporting saccade kinematics and temporal eye–hand coordination during visually guided reaching in children treated for strabismus compared with controls. Methods Thirty children diagnosed with esotropia, a form of strabismus, 7 to 12 years of age and 32 age-similar control children were enrolled. Eye movements and index finger movements were recorded. While viewing binocularly, children reached out and touched a small dot that appeared randomly in one of four locations along the horizontal meridian (±5° or ±10°). Saccade kinematic measures (latency, accuracy and precision, peak velocity, and frequency of corrective and reach-related saccades) and temporal eye–hand coordination measures (saccade-to-reach planning interval, saccade-to-reach peak velocity interval) were compared. Factors associated with impaired performance were also evaluated. Results During visually guided reaching, strabismic children had longer primary saccade latency (strabismic, 195 ± 29 ms vs. control, 175 ± 23 ms; P = 0.004), a 25% decrease in primary saccade precision (0.15 ± 0.06 vs. 0.12 ± 0.03; P = 0.007), a 45% decrease in final saccade precision (0.16 ± 0.06 vs. 0.11 ± 0.03; P < 0.001), and more reach-related saccades (16 ± 13% of trials vs. 8 ± 6% of trials; P = 0.001) compared with a control group. No measurable stereoacuity was related to poor saccade kinematics. Conclusions Strabismus impacts saccade kinematics during visually guided reaching in children, with poor binocularity playing a role in performance. Coupled with previous data showing slow reaching in the final approach, the current saccade data suggest that children treated for strabismus have not yet adapted or formed an efficient compensatory strategy during visually guided reaching. Strabismus affects 2% to 4% of children and results in discordant binocular experience when the visual and ocular motor systems are still developing. 1,2 Even after surgical or optical intervention to align the eyes, esotropic strabismus (nasalward eye turn) is associated with visual deficits, including amblyopia, binocular dysfunction, and ocular motor deficits that persist into adulthood. [3][4][5][6][7][8] Ocular motor deficits typical of strabismus include fixation instability, 6,9,10 decreased vergence, 7,11 and abnormal saccade initiation and execution. 8,12,13 Most ocular motor studies have focused on adults with strabismus, and little is known about ocular motor development in children with treated strabismus. Sensory and ocular motor impairments in strabismus may interfere with other developing systems, such as the motor system, and with the communication between the eyes and the hands, namely, visuomotor integration. Yet, no studies have examined ocular motor impairments in strabismic children in relation to eye-hand coordination. Eye-hand coordination in three-dimensional space is essential for efficient object manipulation, requiring depth perception cues to localize the object, plan the movements, and guide the arm toward the object. 14,15 Normal binocular vision during childhood provides important sensory input for optimal development of eye-hand coordination.
[16][17][18] The use of binocular cues is immature during childhood, 19,20 and could thus be disrupted by discordant binocular experience early in life from strabismus. Children with strabismus and amblyopia have impaired fine motor skills that rely on eye-hand coordination, such as placing coins into a box, threading beads, and transferring answers to a multiple choice form. [21][22][23][24] Poor performance was associated with binocular dysfunction (decreased/no measurable stereoacuity, suppression), regardless of whether amblyopia was present, indicating that normal stereoacuity and fusion are essential to optimal task performance. 21,23,25,26 We previously reported that children with treated strabismus are slower at reach execution, especially in the final approach, with greater end point error than their peers with normal vision when asked to touch a dot on a screen. 27 These findings suggest an inefficient use of visual feedback during online control of reaching in the final approach. During eye-hand coordination tasks, the eyes move first to fixate the target, providing high-resolution information about its physical properties and location, which can facilitate planning and execution of the reach. [28][29][30] Given the sensory and ocular motor dysfunction typical of strabismus, information gathered after the saccade regarding a target's physical properties and location may be suboptimal and could impact the control of the reach in the final approach. Here, we use a protocol previously established in adults 12,31,32 to examine visually guided reaching in children 7 to 12 years of age with a history of strabismus. We previously published reach kinematic data from this study, as described elsewhere in this article. 27 As a next step, in the current study we analyze the eye movement data and evaluate temporal eye-hand coordination to determine the extent to which strabismus impacts visuomotor integration, and we explore clinical and sensory factors associated with deficits. We hypothesize that children with strabismus will have slower saccades, with more corrections made during the reach. Further, we predict that poorer control will be associated with impaired sensory binocular function (i.e., decreased or no measurable stereoacuity, suppression). The current analyses provide further insight into the role of vision and binocular function in the development of visuomotor integration, which may guide interventions to ameliorate or prevent eye-hand coordination impairments in children with strabismus. Participants Children 7 to 12 years of age diagnosed with esotropic strabismus (herein called strabismus) alone or strabismus and anisometropia were referred to the Retina Foundation of the Southwest by Dallas-Fort Worth pediatric ophthalmologists. Strabismic children were aligned, with surgery or spectacle correction, to within 12 prism diopters of orthotropia at near at the time of testing. Age-similar control children with age-normal visual acuity and stereoacuity and no history of vision disorders were also enrolled. Testing was completed with the child's habitual spectacle correction. Diagnosis, current alignment, and prior treatment were extracted from medical records obtained from the child's referring ophthalmologist. All children spoke English as their primary language. Children who were preterm (<37 weeks gestational age) or had coexisting ocular or systemic disease, congenital infections or malformations, or (neuro)developmental delay were excluded from the study.
Only children with arm lengths (shoulder to fingertip) of 50 cm or greater were enrolled. Ethics The research protocol observed the tenets of the Declaration of Helsinki, was approved by the Institutional Review Board of the University of Texas Southwestern Medical Center, and conformed to the requirements of the United States Health Insurance Portability and Privacy Act. Informed consent was obtained from a parent or legal guardian, and assent was obtained from children 10 years or more of age, before testing and after explanation of the study. Procedure Vision Assessment. A vision assessment was conducted before visually guided reaching: (1) Crowded monocular best-corrected visual acuity with the Electronic Early Treatment Diabetic Retinopathy Study protocol, scored in logMAR. 33 Amblyopia was defined as an interocular difference of 0.2 or more logMAR, with best-corrected visual acuity in the fellow eye of 0.1 logMAR or less (20/25 or better). (2) Stereoacuity with the Randot Preschool Stereoacuity and Stereo Butterfly Tests, 34 converted to log arcsec (ranging from 1.3 to 3.3 log arcsec). No measurable stereoacuity was arbitrarily assigned a value of 4 log arcsec. (3) Extent of suppression, quantified with the Worth four-dot fusion test at seven different distances, measured as the farthest distance at which four dots are reported and converted to the size of the suppression scotoma in log degrees. 35,36
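For clarity, the log arcsec scale used here is simply the base-10 logarithm of the stereoacuity threshold in arcsec; the sketch below is an illustration of that convention, with the 4.0 ceiling for no measurable stereoacuity taken from the study's stated rule.

```python
import math

def stereo_log_arcsec(threshold_arcsec):
    """Convert a stereoacuity threshold to log arcsec.

    E.g. 20 arcsec -> 1.3 and 2000 arcsec -> 3.3, matching the
    1.3-3.3 range quoted above; no measurable stereoacuity (None)
    is assigned the arbitrary value of 4 log arcsec."""
    if threshold_arcsec is None:
        return 4.0
    return math.log10(threshold_arcsec)
```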
Visually Guided Reaching. A detailed description of the setup and testing protocol can be found in our recent article that reported the reach kinematics data from this study. 27 Briefly, children wore their habitual optical correction with both eyes open, used their self-reported dominant hand, and sat at a table with their head stabilized at a 35-cm viewing distance. The initial hand position required the child to use their index finger and thumb to hold a stick attached to the table at body midline, 5 cm from the eyes (Fig. 1). Reach kinematics were recorded with the Leap Motion Controller system (software version 4.0; Leap Motion Inc., San Francisco, CA, USA) placed 10 cm in front of the initial hand position. Eye movements were simultaneously recorded with a 500-Hz high-speed video binocular eye tracker (EyeLink 1000; SR Research, Ontario, Canada) placed behind and above the display monitor, 45 cm from the child's eyes. Piloting showed this eye tracker position was best to avoid occlusion of the eye tracker by the hand or display monitor. (Figure 1. The child was instructed to reach out and touch the dot with their index finger as quickly and accurately as possible, and then return to the stick. The EyeLink 1000 recorded eye movements and the Leap Motion Controller system (LMC) recorded hand movements.) Separate five-point horizontal calibrations were performed for the index finger (touch each dot as accurately as possible) and the eyes (look at each dot for 4 seconds) during binocular viewing using a 0.3° white dot presented sequentially from left to right at −10°, −5°, 0°, +5°, and +10°. In the experimental trials, the child was instructed to fixate a white cross (1.4°) with a red dot in the middle, centered on the screen, with both eyes open. Once the cross disappeared, a 0.3° white dot appeared randomly at one of four locations along the horizontal meridian (±5° or ±10° from fixation). The child was instructed to reach out and touch the dot with the tip of their index finger as quickly and accurately as possible. A total of 40 trials were completed per child, with the first four counting as practice trials (36 experimental trials). Testing time was approximately 15 minutes. Data Processing Saccade Kinematics. Eye position data for each eye were filtered with a low-pass, second-order Butterworth filter with a cutoff frequency of 80 Hz. The filtered data were used to obtain eye velocity with a two-point differentiation method. A custom MATLAB script (MathWorks Inc., Natick, MA, USA) identified primary saccades using a velocity threshold of 30 deg/s. Each trial was inspected visually to confirm that saccades were correctly identified by the custom script and to ensure that both eyes moved together. A primary saccade was the first saccade that occurred within 80 to 1000 ms after target onset, in the correct direction, with a gain of 30% or more of the expected amplitude. Corrective saccades were those occurring within 50 to 250 ms after the primary saccade. Reach-related saccades were those occurring more than 250 ms after the primary saccade ended and during the reach. This latency distinction between corrective and reach-related saccades is based on research showing that corrective saccades typically occur with a latency of 250 ms. 37,38 To minimize the risk of categorizing a microsaccade as a corrective or reach-related saccade, we only included saccades of 0.4° or more. Trials were excluded if data were missing (i.e., blink, lost tracking of the eye) or noisy during the period from 250 ms before target onset to the end of the primary saccade.
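The detection step just described was implemented in a custom MATLAB script that is not reproduced in the paper; the following Python sketch shows the same sequence of operations (80-Hz second-order Butterworth low-pass, two-point differentiation, 30 deg/s threshold), under the assumption of zero-phase filtering, which the paper does not specify.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500.0            # EyeLink sampling rate (Hz)
CUTOFF = 80.0         # low-pass cutoff (Hz)
VEL_THRESHOLD = 30.0  # saccade detection threshold (deg/s)

def saccade_mask(position_deg: np.ndarray) -> np.ndarray:
    """Flag saccade samples in one eye's horizontal position trace.

    Filters the position signal, differentiates it to velocity with a
    two-point (central difference) method, and thresholds the speed."""
    b, a = butter(2, CUTOFF, btype="low", fs=FS)
    pos = filtfilt(b, a, position_deg)             # zero-phase assumption
    dt = 1.0 / FS
    vel = np.empty_like(pos)
    vel[1:-1] = (pos[2:] - pos[:-2]) / (2.0 * dt)  # two-point differentiation
    vel[0], vel[-1] = vel[1], vel[-2]              # pad the endpoints
    return np.abs(vel) > VEL_THRESHOLD
```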
Mean saccade kinematic measures included (1) primary saccade latency: time from target onset to saccade initiation; (2) primary saccade gain: ratio of saccade amplitude to target amplitude, a measure of accuracy; (3) primary saccade precision: variability (i.e., standard deviation) of primary saccade gain; (4) primary saccade peak velocity (PV): maximum eye velocity attained during the saccade; (5) final saccade gain: ratio of the final saccade amplitude, which is the sum of the primary, corrective, and reach-related saccade amplitudes, to target amplitude; (6) final saccade precision: variability of final saccade gain; (7) frequency of corrective saccades: percentage of trials that included a corrective saccade; and (8) frequency of reach-related saccades: percentage of trials that included a reach-related saccade. Temporal Eye-Hand Coordination. Details on reach kinematics data processing can be found in our previously published article. 27 Using reach kinematics measures in combination with primary saccade latency, we calculated two temporal eye-hand coordination measures: (1) saccade-to-reach planning interval: interval between the end of the primary saccade and reach initiation, which reflects the time available for planning the reaching response after the primary saccade was complete; and (2) saccade-to-reach PV interval: interval between the end of the primary saccade and when PV was attained, which reflects the amount of time from when the eyes were in the close vicinity of the target to the end of the initial stage of reach execution. Statistical Analyses Primary Analyses. Independent t tests were used to compare strabismic children with control children on all saccade kinematics and temporal eye-hand coordination measures. Effect size was calculated using Cohen's d. Secondary Analyses. Kruskal-Wallis one-way ANOVAs were conducted to determine clinical and sensory factors related to performance: prior surgery (yes, no); amblyopia present (yes, no); stereoacuity measurable (present, not present); extent of suppression scotoma (bifoveal-macular fusion, −0.15 to 0.45 log deg; peripheral-no fusion, 0.60 to 1.2 log deg). Significant ANOVAs were followed with Mann-Whitney U post hoc tests. All tests were corrected for multiple comparisons, and P values were adjusted using Holm's sequential Bonferroni procedure, which corrects for type I error as effectively as the traditional Bonferroni method while retaining more statistical power. 39 Children with fewer than 14 useable saccade trials (at least 7 useable trials per side, left/right) were excluded from further analysis.
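Holm's sequential Bonferroni procedure referred to above is a standard step-down correction; a minimal self-contained sketch (not the authors' code) is:

```python
import numpy as np

def holm_adjust(pvalues):
    """Holm step-down adjusted p-values.

    Sort p-values ascending, multiply the i-th smallest (zero-based) by
    (m - i), enforce monotonicity, cap at 1, and return the adjusted
    values in the original order."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    stepdown = (m - np.arange(m)) * p[order]  # (m - i + 1) * p_(i), 1-based
    adjusted = np.minimum(np.maximum.accumulate(stepdown), 1.0)
    out = np.empty(m)
    out[order] = adjusted
    return out

# Example: holm_adjust([0.004, 0.03, 0.02]) -> array([0.012, 0.04, 0.04])
```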
RESULTS Reach kinematic data from 36 strabismic children and 35 control children on this task have been published. 27 Of the children tested, eye movement data were available from 30 strabismic children (female = 20; mean age, 9.7 ± 1.8 years) and 32 control children (female = 18; 9.6 ± 1.8 years). The remaining 6 strabismic children and 3 control children were not included because they had fewer than 14 useable saccade trials owing to artefacts, blinks, or poor calibration. Children with strabismus did not differ from controls in age (P = 0.79) or arm length (P = 0.32). (See Table 1 for group characteristics. Table 1 notes: values are mean ± standard deviation or number (%), unless otherwise indicated; for nonamblyopic children, either the previously amblyopic eye or the right eye (if the child was never amblyopic) is listed for AE BCVA; for normal control children, the right eye is listed for AE BCVA.) Saccade Kinematics No interocular differences (strabismus, nonpreferred vs. preferred eyes; control, right vs. left eyes) were found for either group. Therefore, only the preferred eye (left eye for controls) was included in the analysis. See Figure 2 for group comparisons of saccade kinematic measures, and Figure 3 for example eye traces from a typical child with strabismus and a control. Corrective and Reach-Related Saccades No group difference was found for the frequency of corrective saccades (strabismus, 37 ± 16% vs. control, 38 ± 18%; t60 = 0.32; P = 0.75; d = 0.08). However, strabismic children had more reach-related saccades than controls (16 ± 13% vs. 8 ± 6% of trials; P = 0.001). Between the ages of 7 and 12 years, a transition occurs from beginning to use information derived from visual feedback to the acquisition of more integrated feedforward-feedback control. 40,41 For those measures that were impaired in strabismic children compared with controls (saccade latency, saccade precision, reach-related saccades), we compared 7- to 9-year-old children with 10- to 12-year-old children within the strabismic group to determine whether any improvement occurs with age. The only measure that improved with age was primary saccade latency (7−9 years, 205 ± 31 ms vs. 10−12 years, 180 ± 16 ms; P = 0.015; d = 1.0). In contrast, final saccade precision was worse in the older age group (7−9 years, 0.14 ± 0.05 vs. 10−12 years, 0.18; P = 0.015; d = 1.0). (Table 2 notes: values are mean ± standard deviation; * significantly different from controls; † significantly different between categories for strabismic children; ‡ for nonamblyopic children, the affected eye was either the at-risk or previously amblyopic eye, or the right eye if the child was never amblyopic.) DISCUSSION Children treated for strabismus have prolonged saccade onset latency during visually guided reaching while viewing binocularly, consistent with previous studies in strabismic adults. 8,12 Longer latencies may point to an immaturity in controlling visual fixation (i.e., disengaging fixation) that occurs before saccade onset. 42 Fixation instability, which is a hallmark of strabismus, may impact the timing of saccade initiation. 6,9,10 Because saccade latency reflects the time it takes to program the saccade before initiation, prolonged saccade latency could also reflect a delay in sensorimotor transformation. In other words, there may be a delay in processing the visual information about the location and distance of the target, converting that information into a planned motor command (i.e., the saccade), and then executing that motor command. Spatial distortions and positional uncertainty are present in strabismus 43,44 and could impact this sensorimotor transformation during visually guided reaching. Strabismic children initiated reach-related saccades twice as frequently as controls (16% vs. 8% of trials), consistent with strabismic adults (11%-15% of trials). 32 In contrast to strabismic adults, who make more corrective saccades, 12 no difference in the frequency of corrective saccades (i.e., those occurring before reach initiation) was found. For strabismic children, the majority of corrective saccades (92%) occurred before or during the acceleration (initial) phase of the reach. Corrective saccades are common in normal vision and may be prepared at the same time as the primary saccade. 38 Strabismic children may be overshooting or undershooting the target, with reach-related saccades being generated to correct the positional error that remained after the primary saccade. This is supported by our finding of a 25% to 45% decrease in saccade precision, despite a mean saccade gain comparable with controls. The majority of reach-related saccades (82%) in the strabismic group occurred during the deceleration (final) phase of the reach. However, the initial variability in saccades was not rectified by these reach-related saccades, as evidenced by the lack of difference between primary (0.15) and final (0.16) saccade precision, which may contribute to the lower touch accuracy found in this group. 27 Again, spatial distortions and positional uncertainty 43,44 in encoding the visual information could impact the precision of saccades. Coupled with slower reaching in the final approach, 27 an increase in the incidence of reach-related saccades, especially in the final approach to the target, suggests a reliance on visual feedback that is less efficient during the reach. It is also possible that the increased incidence of reach-related saccades is due to fixation instability, 6,9,10 which would increase the variability of the primary saccade. No group differences in temporal eye-hand coordination measures were found (saccade-to-reach, saccade-to-reach PV intervals). These data are in contrast with those from strabismic adults, who show a longer saccade-to-reach PV interval compared with controls, 32 particularly if their binocular vision was deficient, pointing to a deficit in the planning and initial execution of the reach.
Because the saccade-to-reach planning interval and the saccade-to-reach PV interval both exclude the final approach of the reach, where strabismic children are slowest, the lack of impairment on our temporal eye-hand coordination measures is not surprising and points to inefficient use of visual feedback as the culprit for slower reaching. It may then appear that the delay in saccade initiation (but normal saccade velocity and accuracy) and the slow reach in the final but not the initial approach 27 point to problems with saccades and reaches individually rather than to a problem with visuomotor integration. Alternatively, despite normal saccade velocity and accuracy, saccade precision was decreased and the incidence of reach-related saccades was increased in strabismic children, suggesting that visuomotor integration is indeed impacted. A reach-related saccade is an extra step that needs to be planned and executed, suggesting that additional information is required after the primary saccade to reach the target properly, and thus changing the coordination pattern. Coupled with our previously reported reach kinematic data, 27 the current findings suggest that children aged 7 to 12 years treated for strabismus have not yet adapted or formed an efficient compensatory strategy for visually guided reaching while viewing binocularly. Strabismic children take longer to initiate a saccade to a target before reaching, take longer to reach to the target owing to more time spent in the final approach, 27 and produce more reach-related saccades. Strabismic adults also take longer to saccade to the target and produce more reach-related saccades, but, unlike strabismic children, they spend more time in the initial approach and produce more corrective saccades before the reach. 12,31,32 Therefore, the strategy for visually guided reaching in strabismic individuals changes from relying more on visual feedback for online control during childhood to relying more on the visuomotor plan in adulthood. At 7 years of age, children are just learning to use visual feedback for online corrections 40,41; the use of this feedback is not yet mastered and may be less efficient in strabismic children. This is evidenced by our finding that final saccade precision was worse by 29% in older strabismic children compared with younger strabismic children, despite a quicker saccade latency. This finding may reflect a speed-accuracy trade-off. Even with quicker saccades, strabismic children aged 10 to 12 years were still slower than controls aged 10 to 12 years (161 ± 15 ms; P = 0.002). Our findings point to a change in compensatory strategies that develops with age. It is unknown at what age this switch occurs because there are no data on teenagers with strabismus. Nonamblyopic children had prolonged saccade latency and decreased primary and final saccade precision, whereas those with amblyopia exhibited only a decreased final saccade precision. Nonamblyopic strabismic adults also show prolonged saccade latency (191 ± 29 ms), whereas amblyopic strabismic adults do not (177 ± 39 ms). 12 This difference may reflect the fact that infantile esotropia (onset before 12 months of age) is accompanied by poorer binocularity but typically does not result in amblyopia. 45 In this study, seven of the strabismic children had an early onset of strabismus. However, the variance of saccade amplitude precision in the amblyopic group was large, despite a mean similar to that of the nonamblyopic group, and may account for the lack of significance.
This may also hold true for extent of suppression; both categories yielded similar mean saccade latencies and precision, but only one category was significantly different from controls (see Table 2). The disconjugacy of saccades in strabismus significantly decreases, but saccade accuracy remains the same, after strabismus surgery.46 In our study, all strabismic children were aligned within 12 prism diopters; thus, the disconjugacy of saccades would have been minimal. Certainly, we found no interocular difference in saccade kinematics, suggesting that saccade disconjugacies are not the cause of the increased latency or the decreased precision. A longer saccade latency, decreased saccade precision, and more reach-related saccades were related to having no measurable stereoacuity and peripheral-no fusion. Poor binocular status is associated with poor ocular motor function in strabismus, including decreased vergence7,11 and abnormal saccade initiation and execution.8,12,13 During visuomotor tasks, binocular cues provide vital information about an object's distance, location, and three-dimensional properties.14 Good binocularity is important for eye-hand coordination,16,21,25,26,47-49 and the use of binocular cues may be disrupted by strabismus early in life. This finding is supported not only by the better performance of strabismic children with better binocularity in our study, but also by end point inaccuracies during reaching and grasping in the strabismic children in our study and in children with binocular dysfunctions in a previous study.16 Previous studies also show better motor performance in those with recovered binocularity,16,50 suggesting that binocularity contributes to optimum planning and execution of visually guided reaching. Our study had potential limitations. It is possible that microsaccades were incorrectly categorized as corrective or reach-related saccades. To minimize this risk, we used an inclusion criterion of 0.4° or more amplitude in both eyes. It is challenging to tease apart the individual contributions of clinical and sensory factors because they often coexist,4 especially with small sample sizes in two of the sensory categories (peripheral-no fusion, n = 9; stereoacuity present, n = 9). However, our data, along with previous studies on eye-hand coordination in strabismus, point to the role that binocular dysfunction plays in impaired visuomotor ability. Although we were unable to control for experience with eye-hand coordination, our task was a simple reaching task with which all children will have had experience, regardless of enrollment in physical recreational activities.

CONCLUSIONS

Strabismus impacts saccade kinematics during visually guided reaching in children, with poor binocularity playing a role in performance. There seems to be a reorganization of motor control with age: a switch from a disruption in execution in childhood to a change in planning in adulthood impacts visually guided reaching in individuals with strabismus, suggestive of a compensatory adaptation of reaching. Understanding this switch and the processes underlying impairments in eye-hand coordination may lead to interventions targeted at preventing or ameliorating slow reaching or slow eye movement latency in strabismic children.
Existence of maximal hypersurfaces in some spherically symmetric spacetimes

We prove that the maximal development of any spherically symmetric spacetime with collisionless matter (obeying the Vlasov equation) or a massless scalar field (obeying the massless wave equation) and possessing a constant mean curvature $S^1 \times S^2$ Cauchy surface also contains a maximal Cauchy surface. Combining this with previous results establishes that the spacetime can be foliated by constant mean curvature Cauchy surfaces with the mean curvature taking on all real values, thereby showing that these spacetimes satisfy the closed-universe recollapse conjecture. A key element of the proof, of interest in itself, is a bound for the volume of any Cauchy surface $\Sigma$ in any spacetime satisfying the timelike convergence condition in terms of the volume and mean curvature of a fixed Cauchy surface $\Sigma_0$ and the maximal distance between $\Sigma$ and $\Sigma_0$. In particular, this shows that any globally hyperbolic spacetime having a finite lifetime and obeying the timelike-convergence condition cannot attain an arbitrarily large spatial volume.

PACS numbers: 04.20.-q, 04.20.Dw

I. INTRODUCTION

Given an initial data set for the gravitational field and any matter fields present, what can be said of the spacetime evolved from this initial data? In the asymptotically flat case, one would like to know such things as how much gravitational energy is radiated to null infinity, the final asymptotic state of the system, whether black holes are formed, the nature of any singularities produced, and whether cosmic censorship is violated. For example, it is known that the maximal development of sufficiently weak vacuum initial data is an asymptotically flat spacetime that is free of singularities and black holes [1]. In this case the gravitational waves are so weak that they cannot coalesce into a black hole; instead they scatter to infinity. Further, it is known that an initial data set containing a future trapped surface or a future trapped region must be singular, provided the null-convergence condition holds [2,3]. In these cases, the gravitational field is already sufficiently strong that collapse is inevitable. In the cosmological case (spacetimes with compact Cauchy surfaces), the questions one asks are a bit different as one expects these spacetimes to be quite singular.
In fact, it is known that spacetimes with compact Cauchy surfaces are singular, provided a genericity condition and the timelike-convergence condition hold [2,3]. So, here one would like to know such things as the nature of the singularities, if the spacetime has a finite lifetime (in the sense that there is a global upper bound on the lengths of all causal curves therein), whether it expands to a maximal hypersurface and then recollapses or is always expanding (contracting), and whether cosmic censorship is violated. For example, it is known that if the initial data surface is contracting to the future (past), then any development satisfying the timelike-convergence condition must end within a finite time to the future (past) [2,3]. Can more be said about the behavior of the cosmological spacetimes? The closed-universe recollapse conjecture asserts that the spacetime associated with the maximal development of an initial data set with compact initial data surface expands from an initial singularity to a maximal hypersurface and then recollapses to a final singularity (all within a finite time), provided that the spatial topology does not obstruct the existence of a maximal Cauchy surface (e.g., S^3 or S^1 × S^2) and provided the matter satisfies certain energy and regularity conditions [4,5,6]. It has also been conjectured that such spacetimes admit a unique foliation by constant mean curvature (CMC) Cauchy surfaces with the mean curvatures taking on all real values. (See, e.g., conjecture 2.3 of [7] and the weaker conjecture C2 of [8].) Just what energy conditions the matter must satisfy is an open problem. However, in the study of the weak form of this conjecture (which merely asserts that the spacetime has a finite lifetime), the dominant energy and non-negative pressures conditions together have proven sufficient for the cases studied [9,10]. More subtle is the problem of what regularity conditions the matter needs to satisfy. The difficulty here is that the maximal development of an Einstein-matter initial data set may not contain a maximal hypersurface because of the development of a singularity in the matter fields, such as a shell-crossing singularity in a dust-filled spacetime, before the spacetime has a chance to develop a maximal hypersurface. While not certain, it is thought that those matter fields that do not develop singularities when evolved in fixed smooth background spacetimes will not lead to the obstruction of a maximal hypersurface. Here, we study the maximal development of spherically symmetric constant mean curvature initial data sets with S^1 × S^2 Cauchy surfaces and matter consisting of either collisionless particles of unit mass (whose evolution is described by the Vlasov equation) or a massless scalar field (whose evolution is described by the massless wave equation). It has already been established that if the mean curvature is zero on the initial data surface, i.e., it is a maximal hypersurface, then its maximal evolution admits a foliation by CMC Cauchy surfaces with the mean curvature taking on all real values [11]. Further, it is known that if the mean curvature is negative (positive) then the initial data can be evolved at least to the extent that the spacetime can be foliated by CMC spatial hypersurfaces taking on all negative (positive) values [11]. Left unresolved was whether the maximal evolution in the latter two cases actually contains a maximal spatial hypersurface and, hence, can be foliated by CMC hypersurfaces taking on all real values.
The nonexistence of a maximal spatial hypersurface would be reasonable if such spacetimes could expand (contract) indefinitely; however, it is known that these spacetimes have finite lifetimes [9,10]. Therefore, it would seem that their maximal development should contain a maximal Cauchy surface. We show that it does. Theorem 1. The maximal development of any spherically symmetric spacetime with collisionless matter (obeying the Vlasov equation) or a massless scalar field (obeying the massless wave equation) that possesses a CMC S^1 × S^2 Cauchy surface Σ admits a unique foliation by CMC Cauchy surfaces with the mean curvature taking on all real values. In particular, it contains a maximal Cauchy surface and its singularities are crushing singularities. By the maximal development of a globally hyperbolic spacetime, we mean the maximal development of an initial data set induced on a Cauchy surface in the spacetime. This is well-defined as the maximal developments associated with any two Cauchy surfaces are necessarily isometric [12]. Further, recall that a spacetime with compact Cauchy surfaces is said to have a future (past) crushing singularity if the spacetime can be foliated by Cauchy surfaces such that the mean curvature of these surfaces tends to infinity (negative infinity) uniformly to the future (past). That the future and past singularities associated with the spacetimes of theorem 1 are crushing is then a simple consequence of the existence of a CMC foliation taking on all real values. As a consequence of theorem 1, the maximal development of the spacetimes studied is rather simple. They expand from an initial crushing singularity to a maximal hypersurface and then recollapse to a final crushing singularity, all in a finite time. That is, they satisfy the closed-universe recollapse conjecture in its strongest sense as well as the closed-universe foliation conjecture. While the maximal development of the spacetimes in theorem 1 is about as complete as one could expect given the existence of a complete CMC foliation, these spacetimes may still be extendible (though there is no globally hyperbolic extension). In other words, theorem 1 does not eliminate the possibility that these spacetimes violate cosmic censorship. In fact, cosmic censorship is violated in the vacuum case. This is easily seen by realizing that the maximal development in this case is either of the two regions where r < 2M of an extended Schwarzschild spacetime of mass M (r is the areal radius), modified by identifications so that the Cauchy surface topology is S^1 × S^2. Although the "singularity" corresponding to r → 2M is a crushing singularity, this is actually a Cauchy horizon. Is this vacuum case exceptional? It is worth noting that if a crushing singularity corresponds to r → 0, then the singularity must in fact be a curvature singularity. This follows easily from the fact that R_abcd R^abcd ≥ (4m/r^3)^2 for any spherically symmetric spacetime satisfying the null-convergence condition, and the fact that the mass function m is bounded away from zero by a positive constant in our case [10]. If we could show that r must go to zero (uniformly) at the extremes of our foliation, then the spacetime would indeed be inextendible, thereby satisfying the cosmic censorship hypothesis. Establishing such a result appears to be difficult and the vacuum case shows that such a result will not always hold (though this case may be exceptional).
Using a different approach, Rein has shown that for an open set of initial data, there is a crushing singularity in which r → 0 uniformly, and which, therefore, is a curvature singularity [13]. While this is encouraging, the extent to which the spacetimes of theorem 1 satisfy cosmic censorship remains to be seen. The proof of theorem 1 involves a combination of three ideas. First, it is known that spherically symmetric spacetimes with S^1 × S^2 or S^3 Cauchy surfaces and satisfying the dominant energy and non-negative pressures (or merely "radial" non-negative pressure) conditions have finite lifetimes [9,10]. Second, using a general theorem (which is independent of symmetry assumptions) established in Sec. III, it follows that the spatial volumes of Cauchy surfaces in the spacetime are bounded above, which allows us to bound various fields describing the spacetime geometry. Third, introducing a new time function to avoid the problems associated with "degenerate" maximal hypersurfaces (i.e., surfaces where the mean curvature cannot be used as a good coordinate), the theorem then follows using the methods developed in [11]. Furthermore, it is worth noting that our method uses only a few properties of the matter fields themselves. Namely, we use the fact that they satisfy the dominant energy and "radial" non-negative pressures conditions and, roughly speaking, the fact that the matter fields are nonsingular as long as the spacetime metric is nonsingular. This latter property has not been given a precise formulation, as it seems difficult to do so, and serves merely as a heuristic principle: the arguments for collisionless matter and the massless scalar field in [11] provide an example of what it means in practice. In theorem 1 we have restricted ourselves to spacetimes with S^1 × S^2 Cauchy surfaces and have not considered similar spacetimes with S^3 Cauchy surfaces. The problem with the S^3 case is that there exist two timelike curves on which the symmetry orbits degenerate to points. When we then pass to the quotient of our spacetime by the symmetry group, the field equations on the quotient spacetime are singular on boundary points corresponding to the degenerate orbits. Experience has shown that this degeneracy can have nontrivial consequences on the evolution of the spacetime. For example, in the study of the spherically symmetric asymptotically flat solutions of the Einstein-Vlasov equations, it has been shown that if a solution of these equations develops a singularity, then the first singularity (as measured in a particular time coordinate) is at the center [14]. However, currently it is not known how to decide when a central singularity must occur. In the case of asymptotically flat spherically symmetric solutions of the Einstein equations coupled to a massless scalar field, Christodoulou has shown that naked singularities do form in the center of symmetry for certain initial data (and that they can form nowhere else) [15]. Note that the degeneracy of the orbits in these spacetimes is of the same type that occurs in the spherically symmetric spacetimes with S^3 Cauchy surfaces. Similar problems occur in the study of the vacuum spacetimes with U(1) × U(1) symmetry and having S^3 or S^1 × S^2 Cauchy surfaces. Here the dimension of the orbits is non-constant and, consequently, this case is much harder to analyze than the T^3 case, which has orbits of constant dimension [16].
The spherically symmetric spacetimes with S^1 × S^2 Cauchy surfaces, having no degenerate orbits, avoid these complications. It would, of course, be preferable to strengthen theorem 1 by removing the requirement that there exist a CMC Cauchy surface in the spacetime. While such a result seems plausible, the methods currently used are not adequate to cover this more general case. Strengthening our results in this direction is a subject for future research. Our conventions are those of [3], with the notable exception that the trace H of the extrinsic curvature K_ab of a spatial hypersurface measures the convergence of the hypersurface to the future. Thus, surfaces with negative H are expanding to the future, while those with positive H are contracting to the future.

II. PROOF OF THEOREM 1

Fix a spacetime (M, g) satisfying the conditions of theorem 1. Both classes of spacetimes considered here (the Einstein-Vlasov and massless scalar field spacetimes) satisfy the dominant energy condition (the Einstein tensor G_ab satisfies G_ab v^a w^b ≥ 0 for all future-directed timelike vectors v^a and w^b) as well as the timelike-convergence condition (the Ricci tensor satisfies R_ab t^a t^b ≥ 0 for all timelike t^a). While the Einstein-Vlasov spacetimes also satisfy the non-negative pressures condition (G_ab x^a x^b ≥ 0 for all spacelike x^a), in general the massless scalar field spacetimes do not. However, they do satisfy the weaker "radial" non-negative pressures condition (G_ab x^a x^b ≥ 0 for all spatial vectors x^a perpendicular to the spheres of symmetry). It was shown in [9,10] that the spherically symmetric spacetimes with S^3 or S^1 × S^2 Cauchy surfaces satisfying the dominant energy and the non-negative pressures conditions (or merely the "radial" non-negative pressures condition) have a finite lifetime, i.e., the supremum of the lengths of all causal curves is finite. Therefore, our spacetime (M, g) has a finite lifetime. It then follows immediately from lemma 2 (established in Sec. III) that the volumes of all spatial Cauchy surfaces in (M, g) are bounded above. Denote the mean curvature of the Cauchy surface Σ by t_0. This initial data surface must be spherically symmetric. In the case t_0 ≠ 0, this follows from the uniqueness theorem for such hypersurfaces (see, e.g., theorem 1 of [4]) since if a rotation did not leave Σ invariant, we would have a distinct CMC Cauchy surface with identical (nonzero) constant mean curvature. The case where t_0 = 0 then follows from the fact that there is a neighborhood N of Σ in M such that N can be foliated by CMC hypersurfaces, each having a different CMC, and the fact that those with non-zero CMC must be spherically symmetric. As the theorem has already been proven in the case where t_0 = 0 (Σ is a maximal hypersurface) [11], we shall take t_0 to be negative (Σ is expanding to the future). The case where the mean curvature is initially positive follows by a time-reversed argument. As was shown in [11], in a neighborhood of the hypersurface Σ, the spacetime can be foliated by CMC Cauchy surfaces. Define the scalar field t at any point to be the value of the mean curvature of the CMC hypersurface passing through that point, i.e., so level surfaces of t are CMC hypersurfaces and, in particular, the surface t = t_0 is Σ. A further scalar field x can then be introduced so that the spacetime metric g is given by

g = −α^2 dt^2 + A^2 (dx + β dt)^2 + (aA)^2 Ω,   (2.1)

where Ω is the natural unit-metric associated with the spheres of symmetry.
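The precise form of (2.1) has been reconstructed here from the surrounding definitions (in particular, it makes r = aA the areal radius, as used below); as a consistency check, the induced metric on a level surface of t and the reduced Laplacian of a spherically symmetric function are

\[
h = A^2\,dx^2 + (aA)^2\,\Omega, \qquad \sqrt{h} = a^2 A^3 ,
\]
\[
\Delta\alpha \;=\; \frac{1}{a^2 A^3}\,\partial_x\!\big(a^2 A^3 \cdot A^{-2}\,\alpha'\big) \;=\; A^{-3}\,(A\alpha')' \qquad (\text{since } a = a(t)),
\]

which is exactly the operator appearing in the lapse equation (2.2) below.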
The functions α, β, and A depend only on t and x (being spherically symmetric) and are periodic in x. The function a depends only on t. The fields can be chosen so that ∫ β(t, x) dx = 0 for each t, where the integral is taken over one period of a surface of constant t. It was shown in [11] that the initial data induced on Σ can be evolved so that t covers the interval (−∞, 0) and that, if it can be evolved to the closed interval (−∞, 0], i.e., a maximal hypersurface is attained, the spacetime can be extended and foliated by CMC spatial hypersurfaces taking on all real values. Therefore, our task is to establish the existence of a maximal hypersurface. To accomplish this, we establish the existence of upper bounds on a, A, and their inverses on the interval [t_0, 0). We then introduce a new time function τ = f ∘ t by introducing a function f that allows us to avoid the problem associated with t being a bad coordinate on maximal hypersurfaces. Once this has been accomplished, theorem 1 will follow from an argument similar to that used in [11]. First, we establish upper bounds on the area radius r = aA, the mass function m = (r/2)(1 − ∇^a r ∇_a r), the volume V(t) of level surfaces of t, and their inverses. That r and m^{−1} are bounded above follows from the results of [10]. (Note, m is positive.) Further, the technique introduced in [17] was used in [11] to show that m/r is bounded above on [t_0, 0). Therefore, m and r^{−1} are also bounded above on [t_0, 0). (That is, the mass m cannot become arbitrarily large and r cannot become arbitrarily small in this portion of the spacetime. This is nontrivial as both m and r^{−1} can become arbitrarily large on unbounded intervals, e.g., near an initial or final singularity.) As we have already established that the volumes of all spatial Cauchy surfaces are bounded above, V(t) is bounded above. Using the fact that ∂_t V(t) is positive on [t_0, 0), as these hypersurfaces are everywhere expanding, shows that V is bounded from below by a positive constant, and hence V^{−1} is bounded above on [t_0, 0). Next, that a, A, and their inverses are bounded above on [t_0, 0) now follows easily from the fact that r = aA and our upper bounds for V, r, and their inverses. Next, we bound α′ using the lapse equation

A^{−3} (A α′)′ = (K_ab K^ab + R_ab n^a n^b) α − 1,   (2.2)

where K_ab is the extrinsic curvature of the CMC hypersurface, n^a is a unit timelike normal to the CMC hypersurface, and a prime denotes a derivative by ∂_x. (This is equation (2.4) in [11].) Using the fact that K_ab K^ab is manifestly non-negative and R_ab n^a n^b ≥ 0 by the timelike convergence condition, it follows that (Aα′)′ ≥ −A^3. Using the fact that A is bounded above and integrating in a CMC hypersurface, we find that (Aα′)|_p − (Aα′)|_q ≥ −C_1 for some positive constant C_1 and any two points p and q in the hypersurface. Choosing q where α is extremal on the surface (so α′(q) = 0) and using the fact that A^{−1} is bounded above shows that α′ is bounded from below. Choosing p where α is extremal on the surface (so α′(p) = 0) and using the fact that A^{−1} is bounded above shows that α′ is bounded from above. Therefore, there exists a constant C_2 such that |α′| ≤ C_2. Thus, even if α is unbounded, it must diverge in a way that is uniform in space: for any two points p and q in a CMC hypersurface, |α(p) − α(q)| = |∫_q^p α′ dx| ≤ ∫_q^p |α′| dx ≤ πC_2. If we knew that α were bounded above on [t_0, 0), we could then proceed to argue as in [11].
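To make the constants C_1 and C_2 in the preceding integration argument explicit (a worked version, assuming the x-period is normalized to 2π, as in Lemma 1 below):

\[
(A\alpha')\big|_p - (A\alpha')\big|_q \;=\; \int_q^p (A\alpha')'\,dx \;\ge\; -\int_q^p A^3\,dx \;\ge\; -2\pi\,(\sup A)^3 \;=:\; -C_1 ,
\]

so choosing q at an extremum of α gives Aα'|_p ≥ −C_1, i.e. α' ≥ −C_1 sup(A^{−1}), while choosing p at the extremum instead gives α' ≤ C_1 sup(A^{−1}). Hence |α'| ≤ C_2 with C_2 = C_1 sup(A^{−1}).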
While such a bound can be established rather easily for fields satisfying the dominant energy and non-negative pressures conditions, such an argument fails for the massless scalar field. The difficulty in establishing an upper bound on α is linked to the possibility that dt may be zero on a maximal hypersurface, and thus t being a bad coordinate. Note that this can only occur if K_ab = 0 everywhere on Σ (i.e., Σ is momentarily static) and R_ab n^a n^b = 0 everywhere on Σ. If the non-negative energy condition (G_ab t^a t^b ≥ 0 for all timelike t^a) and non-negative sum-pressures condition [G_ab (t^a t^b + g^ab) ≥ 0 for all unit-timelike t^a] are satisfied, then R_ab n^a n^b = 0 implies that G_ab n^a n^b = 0 and, hence, by the Hamiltonian constraint equation, the Ricci scalar curvature of the metric induced on Σ must be zero. However, it is easy to show that there are no such spherically symmetric geometries on S^1 × S^2. Thus, the Einstein-Vlasov spacetimes do not admit such surfaces. However, it can be shown that there are massless scalar field spacetimes with such "degenerate" maximal hypersurfaces. To avoid this difficulty, we change our time function to one that is guaranteed to be well-behaved even on a maximal hypersurface with dt = 0. Fix any inextendible timelike curve γ that is everywhere orthogonal to the CMC hypersurfaces. The length of the segment of γ between any two CMC hypersurfaces t = t_1 and t = t_2 is then simply ∫_{t_1}^{t_2} α(γ(u)) du. Using the fact that there is a finite upper bound on the lengths of all timelike curves in our spacetime, the integral

∫_{t_1}^{0} α(γ(u)) du = lim_{t_2→0} ∫_{t_1}^{t_2} α(γ(u)) du   (2.4)

must exist, i.e., α(γ(t)) is integrable on any interval of the form [t_1, 0). Fix some value x_0 of x and consider the function α(t, x_0). Since α′ is bounded, there is a constant C such that α(t, x_0) ≤ α(γ(t)) + C. It follows that α(t, x_0) is also integrable on any interval of the form [t_1, 0). Using this fact, define the function f on (−∞, 0) by setting

f(λ) = −∫_{λ}^{0} [1 + α(u, x_0)] du.   (2.5)

Noting that f′(λ) = 1 + α(λ, x_0) and lim_{λ→0} f(λ) = 0, we see that f is an orientation-preserving diffeomorphism from (−∞, 0) to (−∞, 0). Hence,

τ = f ∘ t   (2.6)

is a new time function on our spacetime. Note that ∂τ/∂t = 1 + α(t, x_0). The level surfaces of τ clearly coincide with those of t and so are CMC hypersurfaces. As a consequence, the field equations for the geometry and the matter written in terms of τ look very similar to those written in terms of t. Using τ in place of t, the metric has the same form as before,

g = −α̃^2 dτ^2 + A^2 (dx + β̃ dτ)^2 + (aA)^2 Ω,   (2.7)

where the new lapse function α̃ is given by

α̃ = α ∂t/∂τ = α / [1 + α(t, x_0)],   (2.8)

and similarly for the new shift β̃ = β ∂t/∂τ. In terms of our new coordinates (τ replacing t) and new variables (α̃ and β̃ replacing α and β, respectively), the field equations are the same as in [11] with ∂_τ replacing ∂_t, α̃ replacing α, β̃ replacing β, and ∂t/∂τ replacing the right-hand side of equation (2.3). Explicit occurrences of t in the equations are left unchanged, t being simply considered as a function of τ, determined implicitly by equation (2.6). Using equation (2.8), it is straightforward to show that ∂t/∂τ = 1 − α̃(τ, x_0). With this, the lapse equation can be written as

A^{−3} (A α̃′)′ = (K_ab K^ab + R_ab n^a n^b) α̃ − 1 + α̃(τ, x_0).   (2.9)

Using the fact that α′ is bounded, as argued above, it follows that α(t, x) ≤ α(t, x_0) + C, where C is a constant. Therefore, by equation (2.8), α̃ is bounded above.
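For concreteness, the boundedness of α̃ claimed here follows by a one-line computation from the reconstructed formula (2.8):

\[
\tilde\alpha(\tau,x) \;=\; \frac{\alpha(t,x)}{1+\alpha(t,x_0)} \;\le\; \frac{\alpha(t,x_0)+C}{1+\alpha(t,x_0)} \;\le\; 1 + C ,
\]

using α(t, x) ≤ α(t, x_0) + C and α > 0; likewise ∂t/∂τ = [1 + α(t, x_0)]^{−1} = 1 − α̃(τ, x_0) lies in (0, 1].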
It is now possible to apply the same type of arguments to the system corresponding to the time coordinate τ as were applied in [11] to the system corresponding to the time coordinate t to show that all the basic geometric and matter quantities in the equations written with respect to τ are bounded and that the same is true for their spatial derivatives of any order. Bounding time derivatives of all these quantities requires some more effort. All but one of the steps in the inductive argument used to bound time derivatives in [11] apply without change. (Note that in [11], derivatives with respect to t were bounded, whereas here, derivatives with respect to τ are bounded.) The argument that does not carry over is that which was used to bound time derivatives of α and α′. To see why, consider the equation obtained by differentiating equation (2.9) k times with respect to τ,

(A (D_τ^k α̃)′)′ = A^3 (K_ab K^ab + R_ab n^a n^b) D_τ^k α̃ + A^3 (D_τ^k α̃)(τ, x_0) + B_k,   (2.10)

where D_τ^k = ∂_τ^k denotes the k-th partial derivative with respect to τ. Here B_k is an expression which is already known to be bounded when we are at the step in the inductive argument to bound D_τ^k α̃ and D_τ^k α̃′. In lemma 3.4 of [11], D_t^k α was bounded by using the fact that t was bounded away from zero. The analogous procedure is clearly not possible in the present situation, where t is tending to zero. This kind of argument was also used in [11] to bound time derivatives of higher order spatial derivatives of α, but that is unnecessary, since such bounds can be obtained directly by differentiating the lapse equation once the time derivatives of α and α′ have been bounded. The same argument applies here, so all we need to do is to prove the boundedness of D_τ^k α̃ and D_τ^k α̃′ using equation (2.10) under the hypothesis that B_k is bounded. This follows by simply noting that equation (2.10) has the same form for each value of k and the following lemma.

Lemma 1. Consider the differential equation

(a u′)′ = b u + c + d u(x_0),   (2.11)

where a, b, c, d, and u are 2π-periodic functions on the real line and x_0 is a point therein. Suppose that a > 0, b ≥ 0, d ≥ 0, and that d is not identically zero. Then |u| and |u′| are bounded by constants depending only on the quantities K_1 = sup(a^{−1}), K_2 = ∫_0^{2π} (b + |c|) dx, and K_3 = ∫_0^{2π} d dx.

First, suppose that u(x_0) > 2πK_1K_2; we claim that u must then be everywhere positive. To see this, suppose otherwise and let x_1 be a point where u achieves its maximum, so u(x_1) ≥ u(x_0) > 2πK_1K_2, and let x_2 be that number such that u > 0 on [x_1, x_2) and u(x_2) = 0 (so x_1 < x_2 < x_1 + 2π). Then, on the interval [x_1, x_2], we have (au′)′ ≥ c, from which it follows that u′ ≥ −K_1K_2 on [x_1, x_2]. Integrating this and using the fact that u(x_2) = 0, we find that u(x_1) ≤ 2πK_1K_2, contradicting the fact that u(x_1) > 2πK_1K_2. Therefore, as u is everywhere positive, it follows that (au′)′ ≥ c. Integrating this inequality starting (or ending) at a point where u′ = 0 shows that |u′| ≤ K_1K_2. Integrating equation (2.11) from 0 to 2π and using the fact that u is positive gives u(x_0) ≤ K_2K_3^{−1}. Using this and the fact that |u′| ≤ K_1K_2 shows that |u| ≤ K_2K_3^{−1} + 2πK_1K_2. Second, if u(x_0) < −2πK_1K_2, a similar argument shows that u is everywhere negative and we again obtain the same bounds on |u′| and |u|. Third, suppose that |u(x_0)| ≤ 2πK_1K_2. If max(u) > 2πK_1K_2(1 + 2πK_1K_3), using the inequality (au′)′ ≥ c + du(x_0), we can argue much as before to see that u is everywhere positive and again obtain the same bounds on |u′| and |u|. Similarly, if min(u) < −2πK_1K_2(1 + 2πK_1K_3), it follows that u is everywhere negative and again we recover the same bounds on |u′| and |u|.
Next, if |u| ≤ 2πK_1K_2(1 + 2πK_1K_3) everywhere, |u| is already bounded, and to bound |u′|, we note that we have bounds for all terms on the right-hand side of equation (2.11), so it suffices to integrate it starting from a point where u′ is zero to bound |u′|. At this stage, we have indicated how all geometric and matter quantities, expressed in terms of the new time coordinate τ, can be bounded, together with all their derivatives. In particular, this means that all these quantities are uniformly continuous on any interval of the form [τ_1, 0), where τ_1 is finite. It follows that all these quantities have smooth extensions to the interval [τ_1, 0]. Restricting them to the hypersurface τ = 0 gives an initial data set for the Einstein-matter equations with zero mean curvature. By the standard uniqueness theorems for the Cauchy problem, the spacetime which, in the old coordinates, was defined on the interval (−∞, 0) is isometric to a subset of the maximal development of this new initial data set. It follows that the original spacetime has an extension which contains a maximal hypersurface. Lastly, that the foliation is unique now follows from the fact that compact CMC Cauchy surfaces with non-zero mean curvature are unique [4], and that the spacetime is indeed maximal follows from the fact that any spacetime admitting a complete foliation by compact CMC Cauchy surfaces is maximal [7].

III. A BOUND FOR THE VOLUME OF SPACE

It is well known that as we transport an "infinitesimal" spacelike surface S along the geodesics normal to itself, the ratio ν of its volume to its original volume is governed by the Raychaudhuri equation

d^2(ν^{1/3})/dt^2 = −(1/3) (R_ab t^a t^b + σ_ab σ^ab) ν^{1/3},   (3.1)

where t is the proper time measured along the geodesics normal to S (with unit tangent t^a), R_ab is the Ricci tensor, and σ_ab is the shear tensor associated with the geodesic flow [2,3,18]. (This equation is usually written in terms of the divergence of the geodesic flow θ = ν^{−1} dν/dt.) On the surface S, ν satisfies the initial condition ν = 1 and dν/dt = −H(p), where H(p) is the trace of the extrinsic curvature of S at the point p where the geodesic intersects S. Therefore, if the spacetime satisfies the timelike-convergence condition (R_ab t^a t^b ≥ 0 for all timelike t^a), it follows that, as long as ν remains non-negative,

d^2(ν^{1/3})/dt^2 ≤ 0,   (3.2)

from which we find that

ν(t) ≤ [1 + |H(p)| |t| / 3]^3.   (3.3)

This equation bounds the growth of the volume of a local spatial region in the spacetime. Using this result, it is not difficult to show that, in a spacetime satisfying the timelike-convergence condition, if we fix a Cauchy surface Σ_0 and construct from it a second Cauchy surface Σ by transporting Σ_0 to the future along the flow determined by the geodesics normal to Σ_0, as long as these flow lines do not self-intersect (which will be true if Σ is sufficiently close to Σ_0), then

vol(Σ) ≤ [1 + T sup_{Σ_0} |H| / 3]^3 vol(Σ_0),   (3.4)

where vol(S) denotes the three-volume of a Cauchy surface S and T is the "distance" between the two surfaces measured by the lengths of the geodesics normal to Σ_0 (which will be independent of which geodesic is chosen by the construction of Σ). Therefore, we have a bound on the volume of Σ in terms of the volume of Σ_0, the extrinsic curvature of Σ_0, and the distance between Σ_0 and Σ. Does a similar result hold for more general Cauchy surfaces Σ? For instance, a more general hypersurface Σ may not be everywhere normal to the geodesics from Σ_0, some geodesics normal to Σ_0 may intersect one another between Σ_0 and Σ, and parts of Σ may lie to the future of Σ_0 while other parts may lie to the past.
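Before turning to the general case, it may help to spell out the elementary computation behind (3.2)-(3.4) (a worked version, under the reconstruction of (3.1) given above, writing w = ν^{1/3}):

\[
\theta = \nu^{-1}\,\frac{d\nu}{dt} = \frac{3\dot w}{w}, \qquad
\ddot w = -\tfrac{1}{3}\big(R_{ab}t^a t^b + \sigma_{ab}\sigma^{ab}\big)\,w \;\le\; 0 \quad \text{while } w \ge 0 .
\]

Since w(0) = 1 and ẇ(0) = −H(p)/3, concavity gives w(t) ≤ 1 − H(p)t/3 ≤ 1 + |H(p)||t|/3, hence ν(t) = w(t)^3 obeys (3.3); integrating over Σ_0, with |t| ≤ T along each normal geodesic, yields (3.4).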
Can the simple bound given by equation (3.4) be modified to cover these cases? That it can is the subject of the following lemma.

Lemma 2. Fix an orientable globally hyperbolic spacetime (M, g_ab) satisfying the timelike-convergence condition (R_ab t^a t^b ≥ 0 for all timelike t^a) and a smooth spacelike Cauchy surface Σ_0 therein. Then, for any smooth spacelike Cauchy surface Σ,

vol(Σ) ≤ [1 + ∆(Σ_0, Σ) sup_{Σ_0} |H| / 3]^3 vol(Σ_0),   (3.5)

where vol(S) denotes the three-volume of a Cauchy surface S, H is the trace of the extrinsic curvature of Σ_0 (using the convention that H measures the convergence of the future-directed timelike normals to a spacelike surface), and ∆(Σ_0, Σ) is the least upper bound to the lengths of causal curves connecting Σ_0 to Σ (either future or past directed). Further, the same bound holds for any Cauchy surface Σ.

Note that for p, q ∈ M, ∆(p, q) is not quite the distance function d(p, q) as used in [2], as d(p, q) = 0 if q ∈ J^−(p). Instead, ∆(p, q) does not distinguish between future and past: ∆(p, q) = ∆(q, p) = d(p, q) + d(q, p). From lemma 2, we see that if a spacetime satisfies the timelike-convergence condition, possesses compact Cauchy surfaces, and has a finite lifetime (in the sense that d(p, q) [equivalently ∆(p, q)] is bounded above by a constant independent of p and q), then the volume of a Cauchy surface therein cannot be arbitrarily large. Further, we see that if the spacetime admits a maximal Cauchy surface Σ_0 (H = 0 thereon), we reproduce the result that there is no other Cauchy surface having volume larger than Σ_0 (though there may be surfaces of equal volume) [4]. In the following, df denotes the derivative map associated with a differentiable map f between manifolds. When viewed as a pull-back, we denote df by f^*, and, when viewed as a push-forward, we denote df by f_*. For a map f : A → B, f[A] denotes the image of A in B. Lastly, A \ B denotes the set of elements in A that are not in B.

A. Proof of lemma 2

To begin the proof of lemma 2, for each point p ∈ Σ_0, let γ_p denote the unique inextendible geodesic containing p and intersecting Σ_0 orthogonally. Parameterize γ_p by t so that the tangent vector to γ_p is future-directed unit-timelike and γ_p(0) = p. Then, define the map f : Σ_0 → Σ by

f(p) = γ_p ∩ Σ.   (3.7)

Note that for each p ∈ Σ_0, f is well defined since γ_p intersects Σ at precisely one point, as Σ is a spacelike Cauchy surface for the spacetime. Next, let K be the subset of Σ_0 defined by the property that p ∈ K if and only if the geodesic γ_p does not possess a point conjugate to Σ_0 between Σ_0 and Σ (although it may have such a conjugate point on Σ). Note that this is precisely the condition that for each p ∈ K the solution ν to equation (3.1) along γ_p, satisfying the initial conditions ν = 1 and dν/dt = −H(p) at p, be strictly positive on the portion of γ_p between p and f(p). It follows that K is closed. Furthermore, f maps K onto Σ. To see this, recall that for any point q ∈ Σ there exists a timelike curve µ connecting q to Σ_0 having a length no less than any other such curve. Furthermore, such a curve µ must intersect Σ_0 normally, is geodetic, and has no point conjugate to Σ_0 between Σ_0 and q. (See Theorem 9.3.5 of [3].) Therefore, the point p = µ ∩ Σ_0 is in K and µ ⊂ γ_p, so f(p) = γ_p ∩ Σ = µ ∩ Σ = q. Therefore, f maps K onto Σ. However, in general, f will not be one-to-one between K and Σ. Let C denote the set of critical points of the map f on Σ_0. That is, p ∈ C if and only if its derivative map f_* : (TΣ_0)_p → (TΣ)_{f(p)} is not onto.
Then, by Sard's theorem [19], f[C] (the critical values of f), and hence f[K ∩ C], are sets of measure zero on Σ. Now, note that Σ can be expressed as the union of f[K \ C] and a set having measure zero. To see this, we write

Σ = f[K] = f[K \ C] ∪ (f[K ∩ C] \ f[K \ C]).   (3.8)

The last two sets are manifestly disjoint and the latter is a set of measure zero (as it is a subset of a set of measure zero). Therefore, we need only concern ourselves with the behavior of f on the set of regular points of f within K. This is useful since, by the inverse function theorem [19], f is a local diffeomorphism between K \ C and f[K \ C]. As we shall see, for all p ∈ K \ C, the point f(p) is not conjugate to Σ_0 on γ_p, from which it follows that K \ C is an open subset of Σ_0. Denote volume elements associated with the induced metrics on Σ_0 and Σ by e_abc and ǫ_abc, respectively, chosen so that e_abc and ǫ_abc correspond to the same spatial orientation class (which can be done as the spacetime is both time-orientable and orientable). Then the Jacobian of the map f is that unique scalar field J on Σ_0 such that

(f^* ǫ)_abc = J e_abc.   (3.9)

Note that J is zero on C and positive on K \ C. With these definitions, we have

vol(Σ) = ∫_{f[K\C]} ǫ ≤ ∫_{K\C} f^* ǫ ≤ (sup_{K\C} J) ∫_{K\C} e ≤ (sup_{K\C} J) vol(Σ_0).   (3.10)

The first step follows from the facts that Σ = f[K] and f[K ∩ C] is a set of measure zero. That we have an inequality in the second step follows from the fact that although f is a local diffeomorphism, it may not be one-to-one between K \ C and f[K \ C]. The third step follows from the definition of J given by equation (3.9) and the fact that J is bounded above by its supremum. Lastly, the fourth step follows from the fact that K \ C is a subset of Σ_0. So, to prove lemma 2, we need to show that, on the set K \ C, J is bounded above by the relevant expressions in lemma 2. To that end, define φ : Σ_0 × R → M by setting φ(p, t) = γ_p(t). Of course, if γ_p is not future and past complete, this will not be defined for all t. Next, define T : Σ_0 → R by setting T(p) to be that number such that γ_p(T(p)) = f(p), i.e., T(p) is the "time" along the geodesic γ_p at which γ_p intersects Σ. Note that if f(p) lies to the future of Σ_0, then T(p) is positive, while if f(p) lies to the past of Σ_0, then T(p) is negative. Fix a point p ∈ K \ C and define the map g : Σ_0 → M by setting g(q) = φ(q, T(p)). Should γ_q(T(p)) not be defined, then g is not defined for that point of Σ_0. However, it will always be defined for some neighborhood of p as g(p) = f(p). Notice that g simply "translates" points on Σ_0 along the geodesics normal to Σ_0 a fixed distance T(p) (independent of point), i.e., it is a translation along the normal geodesic "flow". Therefore, the derivative map of g at a point is precisely the geodesic deviation map. In particular, dg is injective (one-to-one) from (TΣ_0)_p to (TM)_{f(p)} if and only if f(p) is not conjugate to Σ_0 on γ_p (by the definition of such a conjugate point). Noting that f can be written as f(q) = φ(q, T(q)), we see that the derivative maps of f and g at p [both of which are maps from (TΣ_0)_p to (TM)_{f(p)}] are related by

(df)^a_b = (dg)^a_b + t^a (dT)_b,   (3.11)

where t^a is the unit future-directed tangent vector to γ_p at f(p). From this we see that df is injective [from (TΣ_0)_p to (TM)_{f(p)}] if and only if dg is injective. Therefore, on K \ C, not only is df injective, but dg is also injective, and hence f(p) is not conjugate to Σ_0 on γ_p. Define ê_abc at f(p) by parallel transporting e_abc at p along γ_p. Then,

(f^* ê)_abc = (g^* ê)_abc = ν(T(p)) e_abc.   (3.12)
The first equality in (3.12) follows from (3.11) and the fact that t^a ê_abc = 0. The second equality follows by recognizing that the coefficient of the right-most term is precisely the ratio of the volume of an "infinitesimal" region in Σ_0 to its original volume as it is transported along the geodesic flow normal to Σ_0. As the transport is done from p to f(p), the coefficient is ν(T(p)), where ν is the solution of equation (3.1) satisfying the stated initial conditions. (In other words, ν(t) is the Jacobian of the geodesic deviation map.) Denote the future-directed normal to Σ at f(p) by n^a. Then, there exists a unit-spacelike vector x^a ∈ (TΣ)_{f(p)} such that t^a = γ(n^a + βx^a), where γ = −t^a n_a and β = (1 − γ^{−2})^{1/2}. Then, for one of the two volume elements ǫ_abcd on M associated with the spacetime metric, we have ǫ_abc = n^m ǫ_mabc and ê_abc = t^m ǫ_mabc, which gives the following relation between these two tensors at f(p):

ê_abc = γ ǫ_abc + γβ x^m ǫ_mabc.   (3.13)
Cedecea lapagei, An Extremely Rare Uropathogen: A Case Report and Review of the Literature

Background: Gram-negative enterobacteria are the most common cause of urinary tract infections. Cedecea is a new separate genus in the family Enterobacteriaceae, and it is a very rare pathogen that was primarily found in the respiratory tract. Cedecea lapagei is a very rare pathogen of urinary tract infections. To the best of our knowledge, this is the first case report in the world reported in English literature. Case presentation: A 55-year-old man with chronic renal failure, poorly controlled diabetes mellitus, and hypertension presented with acute exacerbations of renal failure and irritative voiding symptoms. After stabilization and empirical antibiotic therapy with ceftriaxone, the patient's condition was not improved and deteriorated progressively. After the request of urine culture, the culture isolated an extremely rare uropathogen recently recognized by the Centers for Disease Control and Prevention (CDC): Cedecea lapagei. Cedecea lapagei identification had been done using eosin methylene blue agar (EMB). Gram-negative, lipase-positive, bacillus-shaped, motile, non-spore-forming, and non-encapsulated enterobacteria were isolated, with a final result of >100,000 colony-forming units per ml of Cedecea lapagei. Mueller-Hinton agar had been used to perform antimicrobial sensitivity and resistance testing. The pathogen was extensively resistant to the extended-spectrum beta-lactamase antibiotics and extended-spectrum beta-lactam inhibitors, while carbapenems, fluoroquinolones, aminoglycosides, and trimethoprim-sulfamethoxazole showed a higher sensitivity rate. Conclusion: The treatment of Cedecea lapagei infections represents a challenging issue due to its multi-drug resistant and extensive drug resistance patterns to a variety of antimicrobial classes, such as extended-spectrum beta-lactamases, cephalosporins, and beta-lactam inhibitors. Antimicrobial treatment should be aligned with the culture findings once available.

Introduction

Urinary tract infections (UTIs) are recognized to be the most common community- and hospital-acquired bacterial infections. Chronic renal failure patients with uncontrolled diabetes mellitus are vulnerable to recurrent urinary tract infections and urosepsis caused by the usual and rare opportunistic uropathogens. Gram-negative enterobacteria are the most common cause of urinary tract infections. Cedecea is a new separate genus in the family Enterobacteriaceae, and it is a very rare pathogen that was primarily found in the respiratory tract. Its name is an abbreviation for the Centers for Disease Control (CDC) Laboratories, where the initial group of isolates, "Enteric Group 15", was identified in 1981. They are Gram-negative, lipase-positive, and non-spore-forming bacilli. The Cedecea genus was isolated from human clinical specimens including sputum, blood, ulcer, and urine (1,2). Cedecea lapagei is a rare bacterial infection in humans and has emerging antimicrobial resistance. The first case of Cedecea lapagei was reported in 2006, and most of the Cedecea species have been isolated from the respiratory tract. A literature search revealed no reports of prior isolation of Cedecea lapagei from urine culture, and this is the first case of Cedecea lapagei as a uropathogen in the world reported in English literature.
We report an extremely rare case of clinically significant urinary tract infection caused by Cedecea lapagei in a 55-year-old dialysis patient with chronic renal failure; 2 years of follow-up for the patient did not show any recurrence of the isolate.

Case Report

A 55-year-old man with chronic renal failure, uncontrolled diabetes mellitus, and hypertension presented with acute exacerbations of renal failure and irritative voiding symptoms. Laboratory investigations revealed creatinine (13.43 mg/dl), urea (177 mg/dl), low hemoglobin (6.9 mg/dl), marked leukocytosis (WBC: 11,15), high blood sugar (436 mg/dl), hyperkalemia, and metabolic acidosis. Ultrasound of the abdomen showed grade 2 parenchymal disease, and other organs were unremarkable. The patient was admitted to the intensive care unit and underwent several dialysis sessions, blood transfusions, prompt blood sugar and blood pressure control, and adequate fluid resuscitation. Empirical antibiotic therapy with ceftriaxone was initiated, but unfortunately, the patient's condition was not improved and deteriorated progressively day by day according to the general condition of the patient and laboratory investigations. A clean-catch midstream urine sample was obtained from the patient, and the urine culture isolated an extremely rare uropathogen recently recognized by the Centers for Disease Control and Prevention (CDC): Cedecea lapagei. Cedecea lapagei identification had been done using eosin methylene blue agar (EMB). Gram-negative, lipase-positive, bacillus-shaped, motile, non-spore-forming, and non-encapsulated enterobacteria were isolated, with a final result of >100,000 colony-forming units per ml of Cedecea lapagei. Mueller-Hinton agar had been used to perform antimicrobial sensitivity and resistance testing. The antimicrobial sensitivity and resistance pattern of the pathogen was determined under standard protocols. The antibiotic susceptibility of uropathogens was studied against imipenem 10 mcg, ertapenem 10 mcg, amikacin 30 mcg, cefazolin 30 ug, ceftazidime 30 ug, trimethoprim/sulfamethoxazole 1.25/23.75 mcg, and ciprofloxacin 5 mcg. The pathogen was extensively resistant to the extended-spectrum beta-lactamase antibiotics and extended-spectrum beta-lactam inhibitors (cephalosporins including ceftriaxone, cephazolin, ceftazidime, and cefixime, as well as ampicillin and amoxicillin-clavulanic acid). The pathogen showed a higher sensitivity pattern against carbapenems (imipenem and ertapenem), fluoroquinolones (ciprofloxacin, levofloxacin), aminoglycosides (amikacin and gentamicin), and trimethoprim-sulfamethoxazole. Levofloxacin 500 mg flacon once daily was initiated after culture results became available. Table 1 demonstrates the antimicrobial profile of the microorganism. The condition of the patient improved, and the repeat urine culture did not show any bacterial growth. The patient was discharged home with routine dialysis, levofloxacin tablets, antihypertensive medications, and diabetic medications. The patient did not show any recurrence of this unusual uropathogen during two years of follow-up with routine dialysis.

Discussion

E. coli is the most common cause of bacterial urinary tract infections, in both community- and hospital-acquired UTIs and in both genders and all age groups, followed by Klebsiella pneumoniae.
Furthermore, rare opportunistic microorganisms, including Enterobacter cloacae, Enterococcus faecium, Streptococcus species, Citrobacter freundii, Staphylococcus haemolyticus, Candida, and other rare pathogens, are prevalent in immunocompromised patients, as the current case demonstrated in an immunocompromised patient with a very unusual urinary tract infection caused by Cedecea lapagei (3). In the medical literature, there are very few cases caused by different species of the Cedecea genus, such as pneumonia, soft tissue infections, and sepsis. Perkins SR and colleagues reported that most of the Cedecea species have been isolated from the respiratory tract, and Cedecea davisae was the most commonly reported species among all of the cases (4,5). There are no previous reports of isolation of Cedecea lapagei from urine culture in the literature, and this case of Cedecea lapagei as a uropathogen is documented in the world for the first time. This case report described an extremely rare case of clinically significant urinary tract infection caused by Cedecea lapagei. The treatment of Cedecea species infections represents a challenging issue due to their multi-drug resistant and extensive drug resistance patterns to a variety of antimicrobial classes, such as extended-spectrum beta-lactamases, cephalosporins, and beta-lactam inhibitors, as noticed in the present case (6). The patient responded well to levofloxacin after drug adjustment due to the preexisting azotemia. The antimicrobial choices for such chronic renal failure patients are debated and should be adjusted according to renal function and the efficacy of the drug, while minimizing the worsening of preexisting antimicrobial resistance.

Conclusion

The current case recognized that Cedecea lapagei was sensitive to a variety of antimicrobial classes including carbapenems, but the antimicrobial sensitivity and resistance pattern differs from case to case. Antimicrobial treatment should be aligned with the culture findings once available. Full attention should be given to immunocompromised patients not responding to the initial empirical therapy.

Declarations

Disclosure of potential conflicts of interest: The authors declare no conflict of interest for the study. Funding resources: This study received no financial support. Ethics committee approval and informed consent: Ethical approval is not required for the publication of case reports in our institution, and the patient provided written informed consent. Availability of supporting data: Not applicable.
Transmission Analysis of COVID-19 Outbreaks Associated with Places of Worship, Arkansas, May 2020–December 2020

The purpose of this study was to describe a statewide COVID-19 transmission involving places of worship (POWs) during the early phase of the pandemic. During the period of May 2020–December 2020, this analysis evaluated COVID-19 cases in Arkansas reported in REDCap for overall cases associated with POWs, cluster detection, and network analysis of one POW utilizing Microbetrace. A total of 9904 COVID-19 cases reported attending an in-person POW service during the early phase of the pandemic, with 353 probable POW-associated clusters identified. Network analysis for 'POW A' showed at least 60 COVID-19 cases were traced to at least 4 different settings. The pandemic gave an opportunity to observe and stress the importance of public health and POWs working closely together with a shared goal of facilitating worship in a manner that optimizes congregational and community safety during a public health emergency.

Introduction

On March 13, 2020, the United States declared the novel SARS-CoV-2 virus a national emergency (American Journal of Managed Care, 2020). Based on scientific understanding of disease transmission and in an effort to mitigate COVID-19 spread, public health and medical professionals communicated preventative measures including stay-at-home practices, hand washing, social distancing, and usage of face masks (Schuchat, 2020). COVID-19 cases confirmed by PCR or antigen testing methods at any laboratory were entered into a surveillance system, utilized by ADH, known as Research Electronic Data Capture (REDCap). ADH and case investigators (CIs) used a standardized questionnaire during interviews of COVID-19 cases to collect pertinent epidemiologic and clinical information, including their 14-day, in-person activities/attendance since COVID-19 symptoms started or since their positive COVID-19 result, whichever came first. The questionnaire collectively asked about any community exposures to retail store settings (grocery stores, department stores), dining/restaurants, bars, indoor fitness centers/athletic training facilities, outdoor athletic facilities/pools, casinos, barbershops/salons/beauty shops, health and wellness facilities (dental office, medical office, massage & spa), daycare(s), outdoor venues (concerts, fairs, national/state parks), hotel/motel, private/public educational settings and extracurricular activities (PreK-12, higher education), group/institutional settings (assisted living facilities, nursing homes, rehabilitation centers, prison/jail), occupational settings (healthcare setting, poultry facility), travel history, place(s) of worship, and an open section for other settings not listed. Close contacts of patients with laboratory-confirmed cases of COVID-19 were also interviewed and enrolled in an active symptom monitoring system.

Cluster Investigation

Clusters were defined as five or more positive cases who reported attending a POW within the 14 days prior to their illness and had a positive test result (Furuse et al., 2020). The POWs team continuously monitored the dataset to identify potential clusters associated with POWs throughout the state. Once clusters were identified, ADH staff would attempt to contact the leadership at the impacted POW to provide education.
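To make the cluster definition operational, the following is a minimal sketch (not the actual ADH pipeline; the column names case_id, pow_name, and report_date and the example data are hypothetical) of how five-or-more-case POW clusters could be flagged from a REDCap-style line list:

import pandas as pd

# Hypothetical line list: one row per confirmed case reporting POW attendance.
cases = pd.DataFrame({
    "case_id": range(1, 11),
    "pow_name": ["POW A"] * 6 + ["POW B"] * 4,
    "report_date": pd.to_datetime([
        "2020-09-20", "2020-09-22", "2020-09-25", "2020-09-27", "2020-09-28",
        "2020-10-01", "2020-10-02", "2020-10-03", "2020-10-04", "2020-10-05",
    ]),
})

MIN_CLUSTER_SIZE = 5  # "five or more positive cases" per the study definition

def flag_pow_clusters(df):
    # Count cases per place of worship and keep groups meeting the threshold.
    counts = (
        df.groupby("pow_name")
          .agg(n_cases=("case_id", "size"),
               first_case=("report_date", "min"),
               last_case=("report_date", "max"))
          .reset_index()
    )
    return counts[counts["n_cases"] >= MIN_CLUSTER_SIZE]

print(flag_pow_clusters(cases))  # POW A qualifies (6 cases); POW B (4 cases) does not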
Network Analysis

After evaluating POW cluster data, one POW cluster emerged for network analysis. This POW cluster was chosen based on its large reported number of positive cases having attended the POW, community cases, and the availability of corresponding detailed CI information that showed the majority of cases attending the same service, with infection arising days after. A network analysis was conducted to demonstrate the spread of COVID-19 through a POW and the community in which the POW was located. To better understand this transmission, this study used the network analysis tool Microbetrace. This instrument has been used in many other COVID-19 studies and is retooling molecular epidemiology for rapid public health response (Campbell et al., 2021).

POW Cases in Arkansas

From May 2020 to December 2020, there were 9904 COVID-19 positive cases that either reported attending at least one event held at a POW within two weeks of receiving a confirmed positive COVID-19 test or were cases mentioned in CI notes of the positive cases (Table 1). There were more females (55.67%) than males, and nearly a third of the cases were between the ages of 45 and 64 years. Most cases were among whites (85.51%) and of non-Hispanic ethnicity (90.80%). These cases resulted in 530 hospital admissions, with 135 people admitted to the intensive care unit. Of the 9904 cases, 79 people died due to complications from COVID-19.

Clusters

Clusters associated with POWs were identified in 63 of the 75 counties in Arkansas (84%). From May 2020 through December 2020, 353 probable clusters associated with POWs were identified in Arkansas. There were four POWs that had more than one cluster during this period. Additionally, there were 30 POWs that had a cluster of more than 20 cases.

POW A and POW B

POW A held regular, in-person services in late September 2020 and, over a period of 3 weeks, continuous spread occurred throughout Sunday services. This resulted in a total of 21 primary cases, shown in the network analysis (Fig. 1). According to information collected by CIs and contact tracers, at least 15 2nd degree exposure cases had a household association or attended the same work location as a primary case (Fig. 1). The network analysis found possible spread between 2 POWs (Figs. 1 and 2). The probable transmission may have occurred when a primary case, who attended POW A, went to their place of work during their infectious period. Based on contact tracing, this case attended the same place of work as two 2nd degree exposure cases. These 2nd degree exposure cases subsequently both attended POW B during their infectious period. POW B was later determined to be a COVID-19 POW cluster, with 18 members testing positive. These cases were identified as 3rd degree exposure cases. Two 3rd degree exposure cases from POW B reported attending the same place of work during their infectious period, resulting in a possible COVID-19 exposure to a 4th degree exposure case. These three cases then attended a community event, which was reported to have at least 100 people in attendance (Fig. 1). Furthermore, separate probable community spread may have also developed from POW B, identified as 4th degree exposure cases (Fig. 1). This community spread includes a family gathering held by two 4th degree exposure cases where attendance by members from outside of the state would later result in a COVID-19 case status. In total, 60 cases were identified as probable associations within this network analysis (Fig. 1, Table 2).
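To illustrate the kind of analysis described above, here is a minimal sketch of a directed transmission network built with networkx (used here as a stand-in for the Microbetrace web tool actually employed in the study; the node names and edges are hypothetical, loosely mirroring the POW A to workplace to POW B chain):

import networkx as nx

# Hypothetical edges: (source, target) pairs inferred from case-investigation notes.
edges = [
    ("POW A primary case", "workplace contact 1"),
    ("POW A primary case", "workplace contact 2"),
    ("workplace contact 1", "POW B case 1"),
    ("workplace contact 2", "POW B case 2"),
    ("POW B case 1", "community event attendee"),
]

G = nx.DiGraph()
G.add_edges_from(edges)

# Exposure "degree" relative to the index case = shortest-path distance in the graph.
depths = nx.single_source_shortest_path_length(G, "POW A primary case")
for case, depth in sorted(depths.items(), key=lambda kv: kv[1]):
    print(depth, case)  # depth 0 = primary case, depth 1 = 2nd degree exposure, ...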
Discussion

Throughout our study analysis, a total of 9904 COVID-19 cases reported attending an in-person POW service during May 1, 2020-December 31, 2020, with 353 probable POW-associated clusters identified. Historically, network technique tools have been useful in studying the diffusion of infectious diseases. Considering the limited knowledge of COVID-19 at the time, the network analysis tool used, MicrobeTrace, was purposeful in studying COVID-19 spread: at least 60 cases from POW A were traced to at least 4 different settings, including a secondary POW (POW B) (Campbell et al., 2021; Laumann et al., 1989; Maheshwari & Albert, 2020). During the time period when the cases and clusters occurred in POW A and POW B, COVID-19 was widespread in Arkansas, with approximately 6800 active cases across the state. While it may be difficult to definitively identify the source of infection for any given case, a thorough examination of the CI notes for cases who identified as having attended these POWs allowed the researchers to highlight two particular patterns. First, at least 75% of the cases from both POWs reported to the CI that their most recent date of attendance fell within a 14-day period at each POW. Second, in both POW A and POW B, the confirmed primary positive cases all tested positive within one to two weeks of each other, with positive test results starting toward the end of the 14-day cluster period at each POW. While there were some instances of multiple positive cases from households within the POWs, it was clear that these clusters occurred across many households, further suggesting that transmission was not limited to household spread.

It has been reported that people are less likely to wear masks while attending religious services (DeFranza et al., 2021). During the COVID-19 pandemic, evidence has steadily mounted for infected aerosols as the primary mode of SARS-CoV-2 viral transmission (Echternach et al., 2020; Katelaris et al., 2021). It is also well-documented that a significant proportion of infected individuals are asymptomatic but can still be highly contagious (Moghadas et al., 2020). Although recognition of the variation in the size and quantity of respiratory droplets with different expiratory activities (e.g., quiet breathing, heavy breathing, speaking, shouting, singing, coughing and sneezing) is not new, we have come to a much clearer understanding that the smallest respiratory droplets (aerosols) can travel much farther than six feet (Katelaris et al., 2021; Morawska et al., 2009). Recent studies have demonstrated the effectiveness of masks in reducing the dispersion of respiratory particles during singing, even with loud singing by professional singers (DeFranza et al., 2021). Alsved et al. (2020) demonstrated that wearing an ordinary surgical mask while singing reduced the amount of measured respiratory particles to levels comparable to that of unmasked normal speech. From our preliminary investigation, we observed that POWs that practiced universal masking throughout worship services (including during congregational singing) had a much lower incidence of clusters compared to POWs where masks were either not worn or were removed for singing.

It is possible that Arkansas had an underreported and undertested caseload of COVID-19, similar to a 2020 study examining US COVID-19 cases and testing.
This may be due to overwhelmed medical facilities, limited access to a nearby testing center, insufficient medical cost coverage, and/or reportability issues between pathology laboratories and the health department during the early stages of the pandemic (Lau et al., 2020). Still, this study is unique in its evaluation of POW-associated cases and its focus on one POW and its resultant community exposures. Strengths of this descriptive study include the ability to describe cases associated with POW and community spread in the network analysis, and the use of MicrobeTrace to evaluate the probable association between POW cases and community spread.

While this study was able to provide a general overview of POW-associated COVID-19 cases, several limitations relevant to bias were identified. One limitation throughout the data collection process could be recall bias among participants. Recall bias could affect the accuracy or completeness of the recollections retrieved by the CIs. Interviews were done a week or more after a positive result, and cases were asked to recall their activity over the 14 days since their first date of symptoms or first positive COVID-19 result. In addition, there were also cases who withheld or omitted any information about their POW. This would lead to underreporting of exposure among cases and therefore may have led to underestimating the effects. Another limitation was the inability to interview all cases in the REDCap system. Due to the surges of cases in the state within the study period, not all cases could be reached by a CI. Since not every case was interviewed, there were cases for which community exposures and association with a POW remained unknown. This study was also susceptible to selection bias relevant to the target population. If individuals felt their religious freedom was restricted, and/or if they mistrusted science or their respective government officials regarding the existence of the pandemic, they may have reacted against directives and guidance early in the pandemic (DeFranza et al., 2021). Therefore, it is possible our study did not capture cases who experienced mild COVID-19 symptoms and did not take a COVID-19 test. Additionally, this study is vulnerable to confounding bias. This study does not pinpoint POWs as the source of infection, because exposure may have derived from other social activities reported on the CI questionnaire. If individuals were willing to attend in-person services at a POW, they may have also attended other in-person activities and events outside of their POW (DeFranza et al., 2021).

Beyond the surveillance of POW-associated COVID-19 cases analyzed for this study, when a cluster was identified, education was provided to POW leaders directly from the ADH COVID-19 Guidance for POWs. Part of the education included information on congregational singing with a mask on, along with following other safety guidelines to help stop the spread of COVID-19. This allowed POW leaders and public health officials to work closely together. Pastors, faith leaders and congregants were among the boots-on-the-ground people at COVID-19 community testing sites, along with many POW-hosted testing sites, working alongside medical professionals and health department staff to serve their respective communities. They also prepared meals for the community and healthcare workers and sewed thousands of facemasks.
As a resource, pastors and faith leaders have first-hand knowledge of what affects their communities and are trusted messengers, underlining the importance of a reciprocal relationship between local health agencies and faith leaders. For instance, faith communities can turn to public health leaders when they need their help, and in turn, public health leaders can turn to the faith community when they need theirs. Because of these relationships, when COVID-19 struck, public health agencies and POWs were able to collaborate in more ways during this crisis. This relationship needs to extend beyond the pandemic to other areas of public health and national emergencies.
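The exposure-degree bookkeeping described in the network analysis section above can be reproduced with standard graph tooling. The following Python sketch is illustrative only and is not the MicrobeTrace workflow itself: the case identifiers, edges, and settings are hypothetical stand-ins for the kind of contact-tracing records the CIs collected.

import networkx as nx

# Hypothetical contact-tracing edges: (infector, exposed case, shared setting).
edges = [
    ("POW_A_case_01", "household_case_01", "household"),
    ("POW_A_case_01", "coworker_case_01", "workplace"),
    ("coworker_case_01", "POW_B_case_01", "POW B service"),
    ("POW_B_case_01", "coworker_case_02", "workplace"),
    ("coworker_case_02", "community_case_01", "community event"),
]

# Each documented contact becomes a directed edge in the transmission network.
G = nx.DiGraph()
for src, dst, setting in edges:
    G.add_edge(src, dst, setting=setting)

# Degree of exposure = 1 + shortest-path distance from the primary case.
depth = nx.single_source_shortest_path_length(G, "POW_A_case_01")
for case, d in sorted(depth.items(), key=lambda kv: kv[1]):
    print(f"{case}: exposure degree {d + 1}")

print("cases linked to the cluster:", G.number_of_nodes())

Representing each documented contact as a directed edge makes the 2nd-, 3rd- and 4th-degree exposure labels used above fall out directly as shortest-path distances from the primary POW A cases.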
Predictors of antiretroviral therapy interruptions and factors influencing return to care at the Nkolndongo Health District, Cameroon

Background: Antiretroviral therapy is a lifelong commitment that requires consistent intake of tablets to optimize health outcomes and to attain and maintain viral suppression. Objective: We aimed to elicit predictors of treatment interruption amongst PLHIV and identify motivating factors influencing return to care. Method: We conducted a cross-sectional study using a mixed-method approach in four hospitals in Yaoundé. Sociodemographic and clinical data were collected from ART registers. Using purposeful sampling, thirteen participants were enrolled for interviews. Quantitative data were analyzed using Epi Info, and Atlas-TI was used for qualitative analysis. Ethical clearance was approved by the CBCHS-IRB. Results: A total of 271 participants' records were assessed. The mean age was 33 years (SD ± 11 years). The private facilities CASS and CMNB registered 53 (19.6%) and 14 (5.2%) participants respectively, while CMA Nkomo and IPC had 114 (42.1%) and 90 (33.2%) participants. Most participants (75.3%) were females [OR 1.14; CI 0.78-1.66] compared with males. 78% had no viral load test results. Transport cost and stigmatization constituted the most prominent predictors of treatment interruption (47.5% and 10.5%, respectively). Belief in the discovery of an imminent HIV cure and the desire to raise offspring motivated 30% and 61%, respectively, to resume treatment. Conclusion: Structural barriers, such as exposed health facilities and dispensing ARVs in open spaces, stigmatize clients and increase the odds of attrition. Attrition of patients on ART can be minimized through the implementation of client-centered approaches such as multiplying proxy ART pick-up points and devolving stable clients to community ARV models.

Introduction

Consistency on antiretroviral therapy (ART) remains the most effective intervention in the global HIV response and has proven effective for people living with HIV (PLHIV). An ART treatment interruption is a patient-initiated episode of more than 30 days of stopping ART by a patient who subsequently resumed treatment. When the rate of adherence to medication is as high as 95%, the viral suppression rate approaches 78%; however, when the rate of adherence is reduced to 80%, there is a dramatic reduction in the viral suppression rate, which can be as low as 20%, thereby increasing the odds of HIV transmissibility 1. The adherence rate to medication should be maintained at 95% or above to optimize treatment outcomes and attain viral suppression. Since the introduction of ART in 1996, substantial improvement has been noticed in the path of HIV disease progression. According to UNAIDS, there were 38.0 million PLHIV worldwide by 2019. Global trends showed about 25.4 million were accessing antiretroviral therapy, up from 6.4 million in 2009 2. The incidence of HIV and its related deaths were 1.7 million and 770,000 respectively 3. Attaining and maintaining epidemic control requires acting on retention to achieve the best health outcomes associated with medication intake. Prior studies recommend an adherence of 95% or more to optimize the health benefits associated with daily medication intake 4. ART consistency maintains the virus in a latent state and interrupts viral replication. Daily prescription for timely intake is important, and psychosocial and counselling support should be provided by care providers to support consistency on ART.
Monitoring and tracking of clients are required to provide support and to report adequately to update the national health information system. By 2018 the retention rate globally stood at 62% and moved to 73% by 2019, with significant variations per region and Africa harboring two-thirds of the global burden 5,6. ART, being a lifelong commitment, requires consistency in treatment for all PLHIV to guarantee the continued effect of the medication and to reduce immune activation 7. These drugs act within a time-lapse and, therefore, regular and timely intake is needed to maintain the virus in a latent state 8. Viral suppression not only leads to improved clinical outcomes for the individual but also reduces the risk of drug resistance and of HIV transmission 9 to sexual or biological contacts most at risk. This underscores the need to adhere diligently, follow the treatment schedule, and take prescribed ART respecting the appropriate times, doses, and frequencies 10, to inhibit viral replication 11. In contrast, the absence of treatment, treatment inconsistency, and treatment interruption lead to quick viral rebound, replication, increased odds of drug resistance, and an increased risk of opportunistic infections 12. Complete adherence to ART can prevent more than 96% of mother-to-child HIV transmission 13,14, with a concomitant decrease in morbidity and mortality 15. As interventions, establishing ART proxy pickup units at the peripheral level and task-shifting 16 contribute significantly to reducing problems associated with congestion in facilities, proximity barriers, and travel costs. However, stigmatization remains a challenge and contributes to attrition 17. In the early years of the HIV/AIDS epidemic, the social consequences of stigma and discrimination towards people with HIV were identified as part of the "third phase of the epidemic", and addressing these consequences was considered as "central to the global AIDS challenge as the disease itself"; to date, stigmatization still prevents PLHIV from accessing treatment due to misconceptions in the community and fear of criticism 18.

Cameroon harbored approximately 540,000 PLHIV by 2018, with 23,000 new infections and 18,000 AIDS-related deaths 19. According to UNAIDS, by 2019 only 67% of all PLHIV on ART were retained on treatment, showing a gap in retention. Some reasons explaining this gap include behaviors associated with stigmatization, such as traveling long distances for a refill 2,20, and weight gain after commencement of treatment being interpreted as a sign of cure 21,22. Other factors, such as a deteriorating health state, unstable housing conditions, and frequent relocation, increase attrition rates 23. Intrinsic factors, such as sex and age, influence retention. Service delivery patterns favor higher utilization by women, with reduced uptake of health services targeting men 24. Depression and other mental health problems are common comorbidities for PLHIV and are a cause of treatment interruption 22. On the other hand, health care provider and health system associated factors, including poor patient-provider rapport 25, shortage of staff, inadequate space at ART clinics 21, concerns about confidentiality, inadequate counseling before initiation, and drug stockouts 26, contribute to treatment interruption and attrition. Community-related factors, such as switching to traditional medicine 22,25, the use of herbal preparations, and fear of disclosure of status to a partner, friends, siblings, or offspring 20, are cited as negative influences on retention in care 21.
Family pressures and religious beliefs also contribute to attrition. Stigmatization 25 and discrimination in access to public services hinder adherence. In this context, an ART treatment interruption is a patient-initiated episode of more than 30 days of stopping ART by a patient who subsequently resumed treatment, while we defined retention in care as a patient who is still on ART (assessed at intervals longer than 12 months post-initiation) and has not died, transferred out, stopped treatment, or been lost to follow-up (LTFU). We aimed to identify predictors of treatment interruption amongst PLHIV on ART and to understand the motivating factors that influence return to care in selected hospitals in Yaoundé. This study arose in response to the growing rate of clients reported to have interrupted treatment, and seeks to understand the reasons for this trend in order to guide policy makers in designing interventions aimed at raising retention and approaching epidemic control.

Research design and approach

Study design and setting

We conducted a mixed-method study using a cross-sectional design. This design enabled us to capture quantitative data; we later used purposeful sampling to select participants for in-depth interviews from the quantitative population. These approaches enabled us to explore the lived experience of participants and understand the meaning they attribute to the phenomenon under investigation. Included in the study were clients who initiated treatment in the four participating facilities from January 1st, 2016 to December 31st, 2018 but interrupted ART from September to December 2018, to enable us to understand the reasons associated with attrition and to generate evidence for policy-guided interventions. Furthermore, in-depth interviews were also conducted.

Study population

The study was conducted in the Nkolndongo health district, in Yaoundé. Participants were purposefully selected from four ART treatment units providing global HIV management to PLHIV. We included in the study all PLHIV who had interrupted treatment from September to December 2018 and returned by March 2019; only those who did not give their consent to participate were excluded. To show variability, the health facility (HF) characteristics were disaggregated into private HF (CASS Nkolndongo and Medical Center Nicolas Barre (CMNB)) and public HF (CMA Nkomo and Infirmerie Prison Centrale (IPC)) settings. Exhaustive sampling targeting PLHIV 20 years and above was carried out. Subsequently, a consecutive sampling technique was used to enroll participants in two different groups: 1) early treatment interruption respondents (less than 12 months after ART initiation); and 2) late treatment interruption respondents (more than 12 months after ART initiation).

Sampling and sample size

No sample size was calculated for the quantitative assessment because we included all clients reported to have interrupted treatment during the investigation period. We purposefully sampled clients from the quantitative population for in-depth interviews (IDI), and after 3 interviews per facility, we attained saturation of responses.

Study procedures

We resorted to two of the most used data collection methods.

Qualitative data

We conducted 13 interviews with non-structured probes to guide respondents in providing their lived experiences. We briefly voiced a discussion topic and then listened keenly to the response.
The interviews were recorded using an audio tape recorder, later transcribed verbatim using Atlas-TI, translated from French to English by the first author, who is fluent in both languages, and checked by the other authors. Related codes from themes were grouped and the responses categorized. Some topics covered included: "What made you interrupt from care?", "Can you relate what you or other people think prompts some people to stick to, stop or restart treatment?", "From your experience or other people you may know, relate how the health services offered influence your/others' decision to interrupt ART?", "What personal/structural/community factors prompted you to return and stay in care?" and "In your opinion, what do you think motivated those who interrupted treatment to restart?"

Quantitative data

We designed a data entry matrix using the software Epi Info 7. We extracted demographic and clinical data on clients' age, sex, date of initiation on ARVs, duration on treatment, time of ARV interruption, and follow-up outcomes from registers and patients' files, and followed up the patients through phone calls and, if necessary, home visits to discuss their opinion on factors prompting treatment interruption and factors influencing return to care. A comparable group of individuals who had not interrupted treatment, with the same characteristics of age, sex and cohort, was extracted for comparison.

Outcome variable

Treatment interruption and resumption of care were our outcomes of interest.

Data analysis

Quantitative data were entered and analyzed using Epi Info 7.2. Simple frequency tables were made to view the data and perform cleansing prior to analysis. Univariate and bivariate analyses were performed. A chi-square test was performed to assess how significantly the various factors influenced retention, with presentation of the p-value at a 95% confidence interval. Viral load coverage was assessed with interest in viral suppression. A comparative group of clients who had not interrupted treatment, with similar characteristics such as time of initiation and consistency on ARVs, was extracted to compare our variable of interest and assess measures of association.

Qualitative data

Content data analysis was done, with the first step consisting of quality control of transcripts. All validated transcripts were reviewed, and codes identified. Initial codes were identified and generated based on predefined themes and codes informed by the interview guide. New codes were identified after review of the transcripts. Both old and new codes were classified under main themes and sub-themes. The validated codes were entered into Atlas-TI.

Ethical considerations

Ethical clearance was obtained from the Cameroon Baptist Convention Health Services Institutional Review Board (Ethics Clearance No. IRB2019-14). Written permission to start the study was also received from the District Medical Officer, Nkolndongo. All the ethical principles of informed consent, autonomy, and beneficence, as well as confidentiality, were observed.

Results

After follow-up of those who had interrupted treatment, 95 (34.9%) had returned to care; transport cost and distance were reported as the major predictors, at 67 (47.5%) and 35 (24.8%) respectively, while stigmatization led to 28 (19.7%) of treatment interruptions. Most of the participants, 212 (78.2%), had not been tested for viral load since initiation. The details of the socio-demographic and clinical characteristics of participants are summarized in Table 2.
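As a rough illustration of the bivariate analysis described above (a chi-square test plus odds ratios such as the OR 1.14, CI 0.78-1.66 reported for sex), the following Python sketch computes both for a single 2x2 table. The counts are hypothetical placeholders; in the actual study the equivalent computation was performed in Epi Info.

import math
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = female/male, columns = interrupted / retained.
a, b = 120, 84   # female: interrupted, retained
c, d = 40, 32    # male:   interrupted, retained

# Chi-square test of independence between sex and treatment interruption.
chi2, p, dof, _expected = chi2_contingency([[a, b], [c, d]])
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")

# Odds ratio with a 95% Wald confidence interval on the log scale.
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")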
Association between socio-demographic characteristics and treatment interruption

Residing outside the Centre region increased the risk of interrupting care three-fold. PLHIV on treatment for over one year were more likely to interrupt treatment, as depicted in Table 3.

Health system and care provider associated factors

Establishing trust and a good care provider-client rapport is recommended to optimize retention in care. Care providers need to support the client and link them to the most convenient facility closest to their residence. This reduces the financial burden, breaks the distance barrier and ensures smooth care provider-patient rapport. A patient stated: "I was doing a little business that enabled me to pay the transport cost to the hospital. I relocated and later had to spend 10,000 frs ($17) to come to the hospital from Nanga (more than 80 kilometers from Yaoundé); this made me interrupt treatment, and I didn't want to get a transfer because I was not certain of care provider support in the new facility." (woman, 30 years old, interrupted treatment and restarted)

The attitude of care providers and the organizational setting influenced clients' outcomes. Respondents expressed satisfaction regarding the type of care offered and the level of confidentiality; yet some patients experienced ill events and related: "I was challenged with the fact that the facility is too open and close to the roadside" (IPC, man, 57 years, interrupted treatment 2 months and restarted). "Some health providers speak in the quarters about clients' status they recognize from the care unit. This morning, my neighbor, a nurse, was telling me about the status of a woman we know in the neighborhood; this frustrated me, and I wondered if others don't speak about me too!" (CMNB, woman, 34 years, interrupted treatment 4 months and restarted). "I am sure they keep our privacy, and for this reason, I would not want to transfer to another location, as I can't attest I will get the same treatment that way." (CASS, woman, 34 years, interrupted treatment 2 months and restarted)

Community-related factors and perceptions

The external influence of healers and spiritualists affects patterns concerning treatment outcomes. Most participants disclosed their status only to their partners or siblings; hence there was little influence from the community, but one related: "Yes! Once in the university teaching hospital (CHU), a lady I met there told me she had a product that could cure me. She told me to call her; I believed her, I did, and she asked me to pay 600,000 frs ($1000) for the first dose, and later I could pay for the second. She told me her treatment would cure me. This made me stop treatment, hoping I would get this money and start her therapy, but I later did not have the money." (CASS, woman, 34 years, interrupted treatment 6 months and restarted)

Motivation to return to ART

Disclosing the client's status to a partner or close relatives and friends created a free environment for the patient to take daily doses comfortably as well as respect monthly ART pickup appointments.

Discussion

Retention remains a major challenge 29 and stands as a major pillar in the global response to the HIV/AIDS pandemic. In a study by Bulsara et al., demographic and economic factors increased the odds of ART attrition 23. Our findings revealed a similar observation, with economic factors influencing attrition patterns. Clients unable to afford the transport cost to the health facility were reluctant to pick up monthly ART doses; as a result, they interrupted care.
Similar findings were seen in the qualitative analysis, where PLHIV reported having interrupted care because they could not afford the transport cost to the ART pick-up health facility. The predominance of women (75.3%) compared with men (24.6%) was due to the feminization of health services, such as antenatal clinics, and low male uptake of health services. Similar trends were observed in Ethiopia, whereby 64% of females compared with 34% of males experienced interruption 20,25; this disparity was observed because women utilize health services most, with more service packages targeting them, in contrast with men, who showed reduced utilization of health services. Clients in private HFs experienced more support and follow-up, coupled with an accommodating environment 30. They had more devoted staff, who worked extended hours. This was different in the public facilities, where clients were received during limited hours, contributing to increased attrition. Mukumbang et al. also observed that private facilities offered better infrastructure and an environment promoting privacy, safety, security, and confidentiality compared with public facilities 20. This correlated with the qualitative findings, where clients affirmed comfort and confidentiality and hesitated to be transferred; one client hence interrupted treatment when she could not travel to the treatment center. Infirmerie Prison Centrale had a high population of prisoners who interrupted care shortly after release from prison. This was due to inadequate counseling before their release, as equally observed elsewhere 24. Most interview respondents reported attrition when depressed and stressed psychologically. In the qualitative analysis, clients reported that support from care providers and frequent calls greatly strengthened the client-care provider rapport, and this influenced their will to stick to ART and motivated resumption of care and treatment. The sharing of HIV status enabled PLHIV to gain support from partners or children. The latter reminded them of daily dose intake and monthly pickup appointments. Similar observations were made by Alan et al., where disclosure of status improved social support 28 and retention. Drug side effects influence the client's decision to interrupt treatment. About 5% of clients interrupted treatment due to adverse drug effects. Clients in the qualitative assessment also revealed that drug effects prompted them to interrupt treatment, though they used fruits such as pineapple and oranges to attenuate these effects. Other studies mentioned these effects as compromising quality of life. Clients believe in a forthcoming discovery of an HIV cure 32,33; stories of patients cured using stem cell transplants raised hope in PLHIV, as especially expressed in the qualitative assessment, and positively influenced retention in care, where participants revealed being in expectation of a future cure as a reason to stick to treatment. Follow-up through phone calls and home visits strengthened the client-care provider relationship, leading to better retention, a similar observation to that made by Nathalie et al. 34. Vrazo also observed that home visits improved retention in care 35. We likewise observed that women generally expressed relief after having an HIV-negative outcome for their babies. This motivated them to adhere to their ART. The desire to raise offspring, and the support received from family, influenced PLHIV's decision to restart and continue ART. In the qualitative interviews, female participants reported restarting ART because they wanted to prevent transmission to their offspring.
Cecilia et al.'s findings revealed that social support contributed to promoting treatment resumption 36. Early active follow-up of patients improved retention in treatment. Our observations also revealed the need to track clients as soon as they default, to improve retention. Clients also related that belonging to a support group and learning an economic activity might reduce their financial burden, hence enabling them to cover the costs of attending monthly visits.

Limitations

- Double coding for qualitative analysis was not done.
- Few health facilities were enrolled; hence bigger studies are recommended.

Recommendations

- A similar study should be conducted on a national scale for more robust conclusions and to improve patient care and treatment.
- Keen attention and clinical follow-up should be given to clients who interrupt treatment, especially viral load monitoring.

Conclusion

Social, economic, and health system factors influence a client's therapeutic outcomes and retention in care. This study also revealed that clients' inability to bear the transport cost to the health facility and unreadiness to be transferred to a treatment center closer to their homes influenced retention in care. Prominent factors influencing retention in care included stigma, drug side effects and care provider-patient rapport. Therefore, ART attrition will be minimized through the implementation of strategies that further reduce these structural and socioeconomic barriers. This stresses the need to reinforce psychotherapy throughout the treatment cascade and to educate health care providers on patient-centered approaches to optimize retention.
Estimating uncertainty of temperature measurements for studies of flow boiling heat transfer in minichannels

This paper presents the method of estimating the uncertainty of temperature measurements conducted using K-type thermocouples in the study of flow boiling heat transfer in minichannels. During heat transfer experiments, the fluid temperature at the inlet and outlet of the minichannel is measured with thermocouples connected to a DaqLab 2005 data acquisition station. The major part of the experimental setup for calibration of temperature measurement included a calibrator of thermocouples. The thermocouples were manufactured by Czaki Thermo-Product, Poland. The temperatures recorded with the thermocouples were compared statistically while measuring the temperature of demineralised water at several characteristic points of liquid phase change or using the reference temperature known from the calibrator. The experimental error of the temperature measurement method was determined according to the principles of statistical analysis. Estimates of the mean value and the experimental standard deviation of the experimental error, as well as the confidence interval for a single experimental error and the measurement accuracy, were presented. The uncertainty of the difference in temperature was also calculated.

Introduction

Contact and contactless methods can be used for measuring temperature. The authors' previous works presented the estimation of temperature measurement for studies of flow boiling heat transfer in minichannels [1,2]. The comparison of two selected methods for temperature measurement was discussed as follows: infrared thermography and liquid crystal thermography (contactless surface temperature measurements) in [1], and thermocouples (contact method) versus an infrared camera (contactless method) in [2]. The common contact method for temperature measurement uses thermocouples to measure the surface temperature or fluid temperature. T. J. Seebeck discovered in 1821 that it is possible to create an electrical voltage by soldering two different metals together (known as "the Seebeck effect"). The voltage (electromotive force) can be measured when two metallic elements (with different Seebeck coefficients) are joined together at one end [3]. In the world literature, researchers often use thermocouples connected to data acquisition stations for temperature measurement as a convenient and accurate method. Several examples are described below. The paper [4] presented the impact of a metallic porous microlayer on the pressure drop and heat transfer of a stainless steel plate heat exchanger. J-type thermocouples were used to measure the temperature in four locations, i.e. at the inlet and outlet of the heat exchanger cold and hot sides. The determination of the heat transfer coefficient in research on single-phase laminar flow of water was discussed in [5]. The fluid temperature at the inlet and outlet of the test section was measured by K-type thermocouples (Omega®, 0.5 mm bead diameter, accuracy ±0.1 °C after calibration). Paper [6] described subcooled flow boiling research and the validation of infrared camera measurements. During the experiments, K-type thermocouples were attached to the external face of the heater. The fluid temperatures at the inlet and outlet of the test section were measured with T-type thermocouples, and ambient temperature fluctuations with J-type thermocouples.
Estimating the uncertainty of temperature measurements obtained from thermocouples can be conducted by the conventional method presented in the Guide to the Expression of Uncertainty in Measurement (GUM) or by the numerical Monte Carlo method [7]. Several papers on this subject were presented in [2]. The analysis of temperature from E-type thermocouples and the relationship between the thermal electromotive force and temperature were discussed in [8]. This paper presents the method of estimating the uncertainty of temperature measurements conducted using K-type thermocouples. Such measurements are conducted in research on flow boiling heat transfer in minichannels. The main aim of this work is to calculate the accuracy of fluid temperature measurements by K-type thermocouples using elements of statistical analysis. The estimation of the uncertainty of the difference in temperature is also discussed.

Experimental stand for the study of flow boiling heat transfer

A view of the experimental stand used for flow boiling research is presented in Fig. 1. Fig. 1. A view of the main systems of the experimental setup: 1 - a pressure meter, 2 - a test section, 3 - an infrared camera, 4 - a gear pump, 5 - thermocouples, 6 - a data acquisition station, 7 - an inverter welder. The essential part of the experimental stand is a test section with a single rectangular minichannel or parallel minichannels, asymmetrically heated [9-13]. The plate heating the working fluid flowing along the minichannel is made of an alloy. The temperature of the outer plate surface is measured using contactless methods (infrared thermography or liquid crystal thermography) or a contact method (using thermocouples). Two-phase flow structures are observed through the glass panel at the other side of the plate. At the inlet and outlet of the test section, pressure meters and K-type thermocouples are installed. The cross-section view and image of the test section with a minichannel are shown in Fig. 2 and Fig. 3, respectively.

Experimental stand for temperature calibration

The major parts of the experimental setup for temperature calibration included (Fig. 4): an instrument for calibration of the thermocouples - a calibrator (1), thermocouples type 221-b-100 manufactured by Czaki Thermo-Product, Poland [14] (2), a compensation cable type Lx2 (3), a DaqLab 2005 data acquisition station (4), a PC computer with specialist software (5) and an absolute pressure meter (6).

Experimental methodology of temperature calibration

The sensors of the two tested thermocouples were put into the upper hole of the calibrator (TC1, TC2 input, Fig. 5) and connected via compensating cables (3, Fig. 4) to the DaqLab 2005 data acquisition station (4). The measurement data collected at this station were transferred via the Ethernet interface to a PC computer (5). The computer was equipped with DaqView data acquisition software. The measurements were carried out in the laboratory at 25 °C ambient temperature and an atmospheric pressure of 0.975 bar. Eleven series of measurements were taken at reference temperatures in the range from 0 to 100 °C, with a temperature step of 10 °C. The reference temperature (T_ref) was set on the calibrator. After the temperature had stabilized for about 15 minutes, n measurements were collected with a time step of 1 s. The recorded temperature was taken as the arithmetic mean of 100 samples during the selected time step. Averaging was conducted by the data acquisition station. Then, the temperature was increased by ten degrees using the calibrator.
This temperature calibration procedure was repeated until the reference temperature of 100 °C was reached.

Basic data

Thermocouples are temperature sensors that respond to a change in temperature with a change in the thermoelectric force. The K-type thermocouples applied to measure the temperature of the fluid were 0.5 mm in diameter. They were capable of measuring temperature in the range from −40 °C to 600 °C. The thermocouples were manufactured by Czaki Thermo-Product, Poland (type TP-221), with T_max = +600 °C. They have a NiCr-NiAl K-type sheathed thermocouple sensor with an outer diameter of 0.5 mm. The measurement junction was galvanically isolated from the sheath (type b) and its length was L = 100 mm. K-type thermocouples are oxidation-proof and resistant to high temperature, a reducing atmosphere and sulphur compounds. They can operate at temperatures up to 1000 °C or even 1100 °C; they are more resistant to high temperature than other thermocouples made of non-noble metals [14]. A scheme of a K-type thermocouple used in the experiments is presented in Fig. 6 (Czaki Thermo-Product, Poland [14]). The accuracy of temperature measurement for K-type thermocouples was estimated at 1.5 °C in the range from −40 °C to 375 °C, according to the manufacturer [14].

Basic assumptions for the analysis

The temperatures recorded with the thermocouples were compared statistically while measuring the temperature of demineralised water at several characteristic points of liquid phase change or using the reference temperature known from the calibrator. The experimental error of the temperature measurement method was determined according to the principles of statistical analysis. Estimates of the mean value and the experimental standard deviation of the experimental error, as well as the confidence interval for a single experimental error and the measurement accuracy, were presented. The uncertainty of the difference in temperature was also calculated.

Determination of corrections of temperature measurement for the tested thermocouples

The corrections of temperature measurement C_Ti were calculated based on the following relationship [16-20]:

$C_{Ti} = (T_{ref} - T_{TCi}) + C_{DR} + C_{IC}$

where: C_DR - correction for the DaqLab 2005 data acquisition station resolution, C_IC - correction for the instrument for calibration of the thermocouples (the calibrator), i - number of the tested thermocouple. The uncertainty of measurement resulting from the resolution capability of the DaqLab 2005 was estimated as follows [21,22]:

$u(C_{DR}) = \frac{a}{2\sqrt{3}}$

where: a - DaqLab 2005 data acquisition station temperature measurement resolution, a = 0.1 °C. The uncertainty resulting from the applied etalon - the instrument for calibration of the thermocouples - was estimated in an analogous way [21,22].

Table 1 shows the results of the statistical analysis, including the selected corrections of temperature measurement obtained as the difference between each thermocouple's measurement (T_TC1 or T_TC2) and the preset temperature T_ref, where: n - sample size, $\overline{T}_{TC1}$, $\overline{T}_{TC2}$ - mean values of the temperature for each thermocouple, u(C_T1), u(C_T2) - uncertainty of the correction for each thermocouple, U(C_T1), U(C_T2) - extended uncertainty of the correction for each thermocouple, C_T1, C_T2 - correction for each thermocouple. The maximum value of the correction of temperature measurement was approx. 0.704 ± 0.0023 °C in the temperature range taken in the experimental analysis.

The relative experimental error

The investigations included determining the measurement accuracy for the two tested thermocouples.
It was assumed that the values of the reference temperature are known, based on the experimental results recorded with the thermocouples compared to the values of temperature preset on the instrument for calibration of the thermocouples [20,21]. The relative experimental error (EME) was calculated using the following dependence [22], similarly as in [1,2]:

$EME = \frac{T_{TCi} - T_{ref}}{T_{ref}}$

where: T_TCi - temperature recorded with the thermocouples (the test method), i - number of the tested thermocouple, T_ref - the reference temperature, set on the calibrator. The assumption that the reference temperatures are known from the calibrator was made similarly as reported in [2]. The relative error of the TC measurement was determined as in [16,22]. The procedure for calculating the mean value of the experimental error ($\overline{EME}$) and the confidence interval (CI) was carried out as follows [21]: 1. determination of the experimental errors for the TC measurement, 2. calculation of the mean values of the experimental errors, 3. determination of the confidence intervals for the mean values of the experimental errors assuming a normal distribution.

Method accuracy for the thermocouple measurement

The values of the relative experimental error were used to estimate the method accuracy MA. The calculations were based on the following relationship:

$MA = |\overline{EME}| + k \cdot s$   (6)

where: $\overline{EME}$ - mean value of the relative experimental error, k - the expansion coefficient for the level of significance α = 0.05, k = 2 [21], and s - experimental standard deviation of the experimental error. The method accuracy (MA) determined with Eq. (6) [8] was used to qualitatively assess the measurement accuracy of the Czaki Thermo-Product TP-221 K-type thermocouples employed in the tests. Table 2 shows the results of the statistical analysis, including the relative experimental error obtained by comparing the thermocouple measurements (T_TC1 and T_TC2) with the temperature preset on the calibrator (the reference temperature). From the method accuracy (MA) results shown in Table 2 it is evident that all values of the parameters obtained for the two thermocouples used for the measurement of the fluid temperature are similar. In certain ranges, they can be used interchangeably. The accepted accuracy of a temperature measurement system in scientific research can reach a maximum of 15% [22]. In the case considered, the maximum value of MA was below 4.5% in the tested temperature range from 0 to 100 °C. Figure 8 presents the mean values of the relative experimental error as a function of the reference temperature T_ref. The relative experimental method errors for both thermocouples were approximated by second-order polynomial functions of T_ref. For the first thermocouple (Eq. 7) the determination coefficient was R² = 0.9892; for the second thermocouple,

$\overline{EME} = -4 \cdot 10^{-6}\, T_{ref}^{2} + 0.0007\, T_{ref} - 0.0416$   (8)

with a determination coefficient R² = 0.9861. In both cases, very high values of the determination coefficient were obtained, which means a very good fit of these functions; the last coefficients in Eqs. (7) and (8) differ slightly, while the other coefficients are equal. In Table 2: EME - experimental error, $\overline{EME}$ - mean value of the relative experimental error, s - experimental standard deviation, CI - confidence interval, MA - method accuracy.
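A compact numerical sketch of the error statistics described above is given below in Python. The readings are simulated placeholders; the resolution uncertainty uses the standard GUM rectangular-distribution formula a/(2√3), and the method accuracy is computed as |mean EME| + k·s, which is one plausible reading of the MA relationship whose original equation was lost in extraction.

import numpy as np

rng = np.random.default_rng(0)

k = 2                 # expansion coefficient for significance level alpha = 0.05
a = 0.1               # DaqLab 2005 resolution, deg C
u_res = a / (2 * np.sqrt(3))          # rectangular distribution over +/- a/2

T_ref = 50.0                          # calibrator setting, deg C
T_tc = rng.normal(50.4, 0.05, 100)    # hypothetical thermocouple readings

C_T = T_ref - T_tc.mean()             # correction of temperature measurement
EME = (T_tc - T_ref) / T_ref          # relative experimental errors
mean_eme = EME.mean()
s = EME.std(ddof=1)                   # experimental standard deviation
CI = (mean_eme - k * s, mean_eme + k * s)
MA = abs(mean_eme) + k * s            # assumed form of the method accuracy

print(f"u(C_DR) = {u_res:.4f} C, C_T = {C_T:.3f} C")
print(f"mean EME = {mean_eme:.4%}, s = {s:.4%}")
print(f"CI = ({CI[0]:.4%}, {CI[1]:.4%}), MA = {MA:.2%}")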
Estimation of the uncertainty of the difference in temperature measurement for the tested thermocouples

In this chapter, the estimation of the uncertainty of the difference in temperature measurements received from the tested thermocouples is calculated and discussed. The difference in fluid temperature at the inlet and outlet of the test section with minichannels is necessary for local heat transfer calculations according to mathematical methods based on experimental results from research on flow boiling heat transfer [9-12, 24-34]. The calculation of the difference in temperature ΔT between the two thermocouples' measurements was based on the following relationship:

$\Delta T = T_{TC1} - T_{TC2}$

where: T_TC1, T_TC2 - values of temperature measured by each thermocouple. The uncertainty of the difference in temperature ΔT was calculated using the following formula, similarly as in [21,22]:

$u(\Delta T) = \sqrt{u(T_{TC1})^{2} + u(T_{TC2})^{2}}$   (10)

The confidence interval was calculated based on the following relationship [21,22]:

$CI = \overline{\Delta T} \pm k \cdot u(\Delta T)$

where: $\overline{\Delta T}$ - mean value of the difference in temperature measurements, k - the expansion coefficient for the level of significance α = 0.05, k = 2 [21], u(ΔT) - uncertainty of the difference in the values of temperature measurement (Eq. 10). Table 3 shows the results of the selected basic parameters for the difference in temperature measured by the thermocouples against the preset temperature on the calibrator. The dependence of the mean values of the difference in temperature measurements $\overline{\Delta T}$ as a function of the reference temperature T_ref is presented in Fig. 9.

Conclusions

The uncertainty of fluid temperature measurements by K-type thermocouples conducted in research on flow boiling heat transfer in minichannels was estimated. This was possible due to the results obtained from temperature calibration experiments conducted on the research stand for calibration of temperature measurement, which included a calibrator of thermocouples. The temperatures recorded with the thermocouples were compared statistically while measuring the temperature of demineralised water at several characteristic points of liquid phase change or using the reference temperature known from the calibrator of thermocouples. The analysis was performed to determine the experimental error of the temperature measurement method according to the principles of statistical inference, followed by the estimation of: the mean value and the experimental standard deviation of the experimental error, and the confidence interval for a single experimental error and the measurement accuracy. The values of the corrections of temperature for both thermocouples do not exceed 0.6 °C and are the highest at 50 °C; the maximum value of the correction of temperature measurement was approx. 0.704 ± 0.0023 °C in the tested temperature range. The uncertainty of the difference in temperature was also calculated. The uncertainty (approx. 0.1 °C) and the extended uncertainty (approx. 0.2 °C) of the difference in temperature were similar for all reference temperatures. The maximum value of the confidence interval of the difference in temperature was approx. −0.2 ± 0.21 °C in each case.
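To close the uncertainty analysis, the ΔT propagation behind the reported u(ΔT) ≈ 0.1 °C and U(ΔT) ≈ 0.2 °C reduces to a few lines of code. The per-thermocouple uncertainties below are placeholders chosen only to reproduce the order of magnitude of the reported values; the sketch assumes the two thermocouple readings are uncorrelated, as in the root-sum-of-squares formula above.

import math

# Placeholder standard uncertainties of the two thermocouples, deg C
u_T1 = 0.074
u_T2 = 0.074
k = 2                                   # expansion coefficient, alpha = 0.05

u_dT = math.sqrt(u_T1**2 + u_T2**2)     # uncorrelated propagation, Eq. (10)
U_dT = k * u_dT                         # expanded uncertainty

dT_mean = -0.20                         # placeholder mean difference, deg C
CI = (dT_mean - U_dT, dT_mean + U_dT)   # confidence interval

print(f"u(dT) = {u_dT:.3f} C, U(dT) = {U_dT:.3f} C")
print(f"CI = ({CI[0]:.2f} C, {CI[1]:.2f} C)")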
Coordination environment evolution of Co(II) during dehydration and re-crystallization processes of KCoPO4·H2O towards enhanced electrocatalytic oxygen evolution reaction

Development of efficient and stable electrodes for the electrocatalytic oxygen evolution reaction (OER) is essential for energy storage and conversion applications, such as hydrogen generation from water splitting, rechargeable metal-air batteries and renewable fuel cells. Alkali metal cobalt phosphates show great potential as OER electrocatalysts. Herein, an original electrode design strategy is reported to realize an efficient OER electrocatalyst through engineering the coordination geometry of Co(II) in KCoPO4·H2O by a facile dehydration process. Experimental results indicated that the dehydration treatment is accompanied by a structural transformation from orthorhombic KCoPO4·H2O to hexagonal KCoPO4, involving a concomitant coordination geometry evolution of Co(II) from an octahedral to a tetrahedral configuration. More significantly, the local structural evolution leads to an advantageous electronic effect, i.e. increased Co-O covalency, resulting in enhanced intrinsic OER activity. To be specific, the as-produced KCoPO4 can deliver a current density of 10 mA cm⁻² at a low overpotential of 319 mV with a small Tafel slope of 61.8 mV dec⁻¹ in alkaline electrolyte. Thus, the present research provides a new way of developing alkali metal transition-metal phosphates for an efficient and stable electrocatalytic oxygen evolution reaction.

Introduction

The oxygen evolution reaction (OER) is one of the crucial steps of energy storage and conversion applications, such as hydrogen generation from water splitting, rechargeable metal-air batteries and renewable fuel cells. [1-5] However, the intrinsically sluggish kinetics of the OER, involving a four-electron transfer, requires an overpotential to drive the process. [6-8] Therefore, efficient electrocatalysts with high activity and high stability are desirable to realize its practical application. 9,10 To date, noble-metal oxides such as RuO2 and IrO2 are considered to be the state-of-the-art electrocatalysts for the OER, 11,12 but their high cost, limited reserves and stability problems prevent their large-scale industrial application. 13 Currently, extensive research efforts have been focused on the development of cost-effective transition-metal based OER catalysts. [14-20] In particular, alkali metal cobalt phosphates have gained increasing attention due to their high activity and favorable kinetics, such as the orthophosphate LiCoPO4, 21 the pyrophosphate NaCoP2O7, 22 and the metaphosphate NaCo(PO3)3. 23 The diverse orientations of the phosphate ligands lead to various crystal structures, which are usually beneficial for structural stability during the OER process. [24-26] As illustrated in Fig. S2, 27 the result demonstrated that Na2CoP2O7 shows enhanced activity relative to NaCoPO4 due to a highly distorted tetrahedral geometry. Nevertheless, Wan et al. subsequently reported that NaCo4(PO4)3, containing a rare 5-coordinated Co(II), outperformed Na2CoP2O7 (Td). 28 Besides, some other reported alkali metal cobalt phosphates, such as LiCoPO4 (Oh) 21 and NaCo(PO3)3 (Oh), 23 exhibited acceptable OER activity. Admittedly, the local coordination geometry in alkali metal cobalt phosphates plays a significant role in improving their OER activity.
However, there is still a lack of deep understanding of the origin of this relationship. It is well accepted that OER activity depends strongly on both the geometric configuration and the electronic structure of electrocatalysts. Thus, to address this issue, there is a need to reveal the dual effect of the coordination geometry of Co(II) in alkali metal cobalt phosphates. In this work, we found that the dehydration treatment of KCoPO4·H2O is accompanied by a structural transformation from an orthorhombic to a hexagonal phase, involving a concomitant coordination geometry evolution of Co(II) from an octahedral to a tetrahedral configuration. Special consideration was paid to the influence of this local structural evolution on the electronic structure. Experimental results and theoretical analysis indicated that the coordination evolution increases the Co-O covalency, resulting in enhanced OER activity. The as-obtained KCoPO4 exhibited enhanced OER activity relative to the pristine KCoPO4·H2O. Specifically, it can deliver a current density of 10 mA cm⁻² at a low overpotential of 319 mV with a small Tafel slope of 61.8 mV dec⁻¹ in alkaline electrolyte. Moreover, a stability test demonstrated that it can hold its catalytic OER activity for at least 50 h. Besides, to the best of our knowledge, this work for the first time demonstrates the OER performance of hexagonal KCoPO4 containing a Co(II)-Td local coordination environment. Overall, the uniqueness of this material design not only lies in an effective strategy for local coordination evolution, but, more significantly, it provides an ideal platform to better understand the local coordination-activity relationship for alkali metal cobalt phosphate electrocatalysts.

Experimental

KCoPO4·H2O nanoplates were synthesized via a simple hydrothermal process. In a typical procedure, an aqueous solution of CoCl2·6H2O (0.004 mol, 30 ml) was completely added into an aqueous solution of K2HPO4 (0.04 mol, 30 ml) under continuous stirring. After being stirred for 2 h at room temperature (25 °C), the mixture was transferred into a Teflon-lined stainless steel autoclave and kept at 140 °C in an electric drying box for 20 h. The resulting pink product was collected by centrifugation, rinsed with DI water and absolute ethanol, and dried at 50 °C for 12 h to obtain the KCoPO4·H2O nanoplates. The coordinated water molecule in KCoPO4·H2O can be removed by heating at 300 °C under an N2 atmosphere for 3 h to obtain the dehydrated phase, i.e. KCoPO4. The steps involved in the dehydration process are illustrated in Fig. 1(a). The characterization techniques and the electrochemical measurements are explained in the ESI.

Results and discussion

The XRD patterns are shown in Fig. 2, and Fig. 1(b and c) shows the crystal structures of KCoPO4·H2O and KCoPO4, respectively. It is clearly illustrated that the dehydration is accompanied by a structural transformation and a re-crystallization process. Observations by SEM (Fig. 3(a and b)) and TEM (Fig. 3(c and e)) indicate that the as-synthesized KCoPO4·H2O and the dehydrated KCoPO4 exhibit plate-like nanostructures with an average size tunable from 2 to 5 μm. Furthermore, the HRTEM images show interlayer d spacings of 0.41 nm (Fig. 3(d)) and 0.31 nm (Fig. 3(f)) for the two phases, which are consistent with the XRD data.
Thermogravimetric analysis (TGA) was carried out under flowing N2 to further demonstrate the dehydration and re-crystallization process of KCoPO4·H2O, as shown in Fig. 4(a). The TG curve shows a sharp weight loss of 8.61%, characterized by a strong endothermic DTG peak at 195.7 °C, which corresponds to the loss of the coordinated water molecule in KCoPO4·H2O (calcd: 8.53%). There is no weight loss after that, indicating that it transforms into stable KCoPO4. Significantly, upon dehydration, the coordination environment around Co(II) changed from an octahedral to a tetrahedral configuration, which is demonstrated by a concomitant color change from pink to blue. In other words, the Co-O bond to the oxygen of the water molecule coordinated with Co(II) is lost during dehydration, leading the coordination of Co(II) to become tetrahedral. The diffuse-reflectance solid-state UV-vis spectra of the two phases are shown in Fig. 4(b). The spectrum of KCoPO4·H2O shows a broad band located at approximately 539 nm, which corresponds to the 4T1g(F) → 4A2g(P) transition (ν3 band) of high-spin Co(II) in octahedral [CoO5(H2O)]2+, 29 while for KCoPO4 the bands located at about 526 nm, 575 nm, and 615 nm are assigned to the 4A2(F) → 4T1(P) transition of high-spin tetrahedral Co(II). 30 Thus, the UV-vis spectra suggest a structural conversion from octahedral to tetrahedral coordination. Fig. 1(b and c) shows the local coordination environment and the spin state of Co(II) in the two phases, respectively. As mentioned above, the Co geometric coordination has a great impact on the OER activity of Co-based phosphates. Therefore, the coordination environment evolution of Co(II) in the two phases motivated us to explore the corresponding change in their OER performance and the underlying relationship between coordination configuration and the OER activity of Co phosphates. However, just knowing the structural transformation is far from enough; further efforts are needed to reveal the corresponding electronic effect.

The chemical compositions and bonding states in the as-prepared KCoPO4·H2O and dehydrated KCoPO4 were examined by XPS, which contributes to estimating the electronic states of Co in the two phases and, in turn, is conducive to revealing the electronic effect of the coordination environment evolution of Co(II) on the OER activity. First, the full survey spectra (Fig. S3(a and b)) indicate the presence of cobalt (Co 2p), phosphorus (P 2p), oxygen (O 1s) and potassium (K 2p) in both compounds. Furthermore, the O 1s spectrum of KCoPO4·H2O (Fig. 4(c)), fitted into two peaks situated at 530.7 eV and 532.8 eV, can be ascribed to the P-O bonds and the coordinated water molecule, respectively, 31 while for KCoPO4 only lattice oxygen can be detected, as illustrated by the peak located at about 531.0 eV. More significantly, it is found that there is a positive shift (about 0.3 eV) of the binding energy for oxygen after the dehydration process, which indicates an increased valence state of oxygen in KCoPO4. Similarly, the valence state of cobalt increases on dehydration, as confirmed by the positive shift (about 0.3 eV) of the binding energy, as shown in Fig. 4(d). Specifically, the Co 2p spectrum of KCoPO4·H2O was fitted into two peaks located at 796.8 and 781.3 eV, while for KCoPO4 the peaks are situated at 797.1 and 781.6 eV, respectively. Besides, the Co 2p spectra of the two compounds indicate the Co²⁺ oxidation state of cobalt in both of them. 32
32 In short, the XPS results suggest an increased degree of Co-O hybridization, i.e. Co-O covalency, in KCoPO₄. Based on the above characterizations of the crystal structure and chemical bonding states, it is clearly indicated that the dehydration process of KCoPO₄·H₂O is accompanied by a structural transformation from the orthorhombic to the hexagonal phase. More significantly, the local coordination geometry evolution of Co(II) from octahedral to tetrahedral configuration increases the Co-O covalency. Previous research demonstrated that the geometric configuration and electronic structure of cobalt phosphates can largely influence their OER catalytic activity. 33,34 Therefore, the local Co geometry evolution with increased Co-O covalency may imply an enhanced OER activity. The OER performance of the two as-prepared compounds was evaluated via linear sweep voltammetry (LSV) using a conventional three-electrode system at a scan rate of 5 mV s⁻¹ in 1.0 M KOH (electrochemical measurements, see details in the ESI†). Fig. 5(a) shows the LSV curves of the as-prepared KCoPO₄, together with those of KCoPO₄·H₂O and RuO₂ for comparison. As illustrated in Fig. 5(a), KCoPO₄·H₂O requires a 387 mV overpotential to reach a current density of 10 mA cm⁻². For KCoPO₄, the overpotential greatly decreased to 319 mV, which is comparable to that of the reference sample (309 mV for noble RuO₂). Moreover, the catalytic performance of KCoPO₄ is comparable to or even superior to that of many recently reported transition-metal phosphate-based electrocatalysts (the detailed comparison can be seen in Table S3, ESI†). In addition, the smaller Tafel slope of KCoPO₄ (61.8 mV dec⁻¹) compared with that of KCoPO₄·H₂O (66.2 mV dec⁻¹) indicates more favorable reaction kinetics for the OER, as shown in Fig. 5(b). This can also be inferred from the electrochemical impedance spectroscopy (EIS) results (Fig. 5(c)). An equivalent circuit model is suggested in the inset of Fig. 5(c). The charge-transfer resistance (Rct) was measured to be 36.1 Ω for KCoPO₄, much lower than the 139.9 Ω for KCoPO₄·H₂O. Furthermore, CV curves at different scan rates were recorded to evaluate the electrochemically active surface area (ECSA) of the two compounds, as shown in Fig. 6(a and b). The ECSA can be evaluated through the electrochemical double-layer capacitance (Cdl), which is determined from the current density differences plotted against the scan rate. Specifically, the Cdl of KCoPO₄ was calculated to be 14.15 mF cm⁻², which is very close to that of KCoPO₄·H₂O (13.55 mF cm⁻²), indicating equivalent ECSA values for the two. Therefore, it is proposed that the transformation of KCoPO₄·H₂O to KCoPO₄ leads to enhanced intrinsic OER activity. To estimate the intrinsic activity, the turnover frequency (TOF) was calculated for each catalyst. The TOF values of KCoPO₄, KCoPO₄·H₂O, and RuO₂ were 0.0635 s⁻¹, 0.0413 s⁻¹, and 0.015 s⁻¹, respectively, which are comparable with previously reported values. 35 Such a high TOF value signals a very high intrinsic activity for KCoPO₄. Based on the above comparison of structural transformation and OER performance, it is suggested that the enhanced OER activity mainly derives from the electronic effect of the local Co(II) geometry, i.e. the increased Co-O covalency. In addition, the stability measurement for the as-prepared KCoPO₄ demonstrates that the long-term durability requirement can be met.
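The Tafel slope and double-layer capacitance quoted above both come down to simple linear fits. The sketch below illustrates the two fits; the arrays are hypothetical placeholders, not the paper's raw data:

```python
import numpy as np

# --- Tafel slope: fit overpotential (mV) vs. log10(current density) ---
# Hypothetical points read off the linear region of an LSV curve.
j = np.array([1.0, 2.0, 5.0, 10.0, 20.0])        # current density, mA cm^-2
eta = np.array([258, 277, 301, 319, 338])        # overpotential, mV
tafel_slope, _ = np.polyfit(np.log10(j), eta, 1)
print(f"Tafel slope ~ {tafel_slope:.1f} mV dec^-1")

# --- C_dl: fit Delta-j (j_anodic - j_cathodic at a fixed potential,
# from CVs in a non-faradaic window) vs. scan rate; slope = 2 * C_dl ---
nu = np.array([10, 20, 40, 60, 80, 100]) / 1000  # scan rate, V s^-1
delta_j = 2 * 14.15 * nu                          # synthetic data, mA cm^-2
c_dl = np.polyfit(nu, delta_j, 1)[0] / 2          # mA cm^-2 per V s^-1 = mF cm^-2
print(f"C_dl ~ {c_dl:.2f} mF cm^-2")
```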
The chronopotentiometry result for KCoPO₄ at a constant 10 mA cm⁻², shown in Fig. 5(d), indicates that the potential remained stable at ∼1.55 V during OER operation for up to 50 h. The LSV curve after the 50 h test exhibited a negligible shift, as shown in Fig. S4.† In addition, the structural characterizations illustrated in Fig. S5† further demonstrate the stability. Based on the above analysis, it is suggested that the distinct OER activity is associated with local electronic structure modulation by the coordination environment evolution of Co(II). In other words, the local Co geometry evolution from octahedral to tetrahedral configuration, with increased Co-O covalency, results in an enhanced OER activity. This analysis is consistent with previous research: recent studies demonstrated that increased Co-O covalency promotes interfacial charge transfer between Co and oxygen intermediates, resulting in enhanced OER kinetics. 36-39 To further confirm the electronic effect, first-principles density functional theory plus Hubbard U (DFT+U) calculations were performed on the two compounds (computational models and methods can be found in the ESI†). The calculated partial densities of states (pDOS) of the Co 3d and O 2p bands are shown in Fig. 7(a and b). As a feasible descriptor for OER activity, the electronic parameter, i.e., the O 2p band center (ε_O2p), can be extracted from the calculated pDOS, 40-42 and it also indicates the degree of Co-O hybridization, as illustrated in Fig. 7(c and d); the band center is the pDOS-weighted average energy of the O 2p states, referenced to the Fermi level (a numerical sketch of this calculation is given after the Conclusions). Specifically, a larger (less negative) ε_O2p signifies greater covalency and better OER activity. The calculated ε_O2p of KCoPO₄ (−2.95 eV) is more positive than that of KCoPO₄·H₂O (−3.24 eV), indicating the increased Co-O covalency, which is confirmed by the XPS analysis.

Conclusions
In summary, KCoPO₄·H₂O nanoplates were successfully synthesized via a hydrothermal process. The coordination water molecule in KCoPO₄·H₂O can be removed by heating at 300 °C under an N₂ atmosphere to obtain the dehydrated KCoPO₄. Significantly, the dehydration process is accompanied by a phase transformation and a local coordination environment evolution of Co(II) from octahedral to tetrahedral configuration, which increases the Co-O covalency, resulting in an enhanced OER activity. Specifically, the dehydrated KCoPO₄ can deliver a current density of 10 mA cm⁻² at a low overpotential of 319 mV with a small Tafel slope of 61.8 mV dec⁻¹ in alkaline electrolyte. Thus, the present research points to a new way of developing alkali metal transition-metal phosphates for efficient and stable water oxidation.

Conflicts of interest
There are no conflicts to declare.
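As referenced above, a minimal numerical sketch of the O 2p band-center extraction from a pDOS trace. The energy grid and DOS array are hypothetical placeholders for the curves in Fig. 7; energies are assumed referenced to the Fermi level, and conventions vary (some works integrate the full band rather than only the occupied states):

```python
import numpy as np

def o2p_band_center(energies, pdos_o2p, e_max=0.0):
    """pDOS-weighted mean energy of O 2p states up to e_max (eV).

    energies : energy grid in eV, with E_F = 0
    pdos_o2p : O 2p projected density of states on that grid
    """
    mask = energies <= e_max                      # occupied states only
    e, d = energies[mask], pdos_o2p[mask]
    return np.trapz(e * d, e) / np.trapz(d, e)

# Hypothetical pDOS: a single Gaussian band centered near -3 eV.
e_grid = np.linspace(-10.0, 2.0, 1201)
pdos = np.exp(-0.5 * ((e_grid + 3.0) / 1.2) ** 2)
print(f"epsilon_O2p ~ {o2p_band_center(e_grid, pdos):.2f} eV")
```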
2020-04-23T09:15:31.543Z
2020-04-08T00:00:00.000
{ "year": 2020, "sha1": "682646e87ba95d5180a59ce920e208a91c9f9384", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/ra/d0ra01813a", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "df0fd11b165d65e30c89ef30541e331f529c47f4", "s2fieldsofstudy": [ "Chemistry", "Engineering" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
235214824
pes2o/s2orc
v3-fos-license
Genetic Variation and Immunohistochemical Localization of the Glucocorticoid Receptor in Breast Cancer Cases from the Breast Cancer Care in Chicago Cohort

Simple Summary
Breast cancer, one of the leading causes of death among women, is a complex disease in which several factors, such as psychosocial stress, have been implicated in its initiation and progression. The glucocorticoid receptor (GCR) is one of the molecules that transfer the stress signal into the body. We measured the genetic variation and protein expression of GCR and of the genes that regulate GCR function or response and examined whether these variations were associated with breast cancer. We found several genetic variants of functionally important SNPs associated with later disease stage, higher grade, and hormone receptor-negative status. GCR protein expression was reduced in breast cancer tissue and correlated with the basal cell marker CK5/6.

Abstract
Background: Glucocorticoid, one of the primary mediators of stress, acts via its receptor, the glucocorticoid receptor (GCR/NR3C1), to regulate a myriad of physiological processes. We measured the genetic variation and protein expression of GCR, and of the genes that regulate GCR function or response, and examined whether these alterations were associated with breast cancer clinicopathological characteristics. Method: We used samples from a multiracial cohort of breast cancer patients to assess the association between breast cancer characteristics and the genetic variants of single nucleotide polymorphisms (SNPs) in GCR/NR3C1, FKBP5, Sgk1, IL-6, ADIPOQ, LEPR, SOD2, CAT, and BCL2. Results: Several SNPs were associated with breast cancer characteristics, but statistical significance was lost after adjustment for multiple comparisons. GCR was detected in all normal breast tissues and was predominantly located in the nuclei of the myoepithelial cell layer, whereas the luminal layer was negative for GCR. GCR expression was significantly decreased in all breast cancer tissue types compared to nontumor tissue, but was not associated with breast cancer characteristics. We found that high nuclear GCR expression was associated with basal cell marker cytokeratin 5/6 positivity. Conclusion: GCR expression is reduced in breast cancer tissue and correlates with the basal cell marker CK5/6.

Introduction
Breast cancer, one of the leading causes of death among women, is a complex disease in which genetic, epigenetic, and environmental factors have been implicated in its initiation and progression. Psychosocial stress may play a role in the etiology of breast cancer, but the literature is conflicting. A few studies have found a positive association between psychosocial stress and the risk of having breast cancer [1,2]; other prospective and retrospective studies have yielded conflicting findings, with the majority of studies reporting no association [3-7], and some even the reverse relationship [6,8]. Furthermore, systematic reviews and meta-analyses of these studies [9-12] were equivocal. A limitation in the literature is the lack of epidemiological studies attempting to link psychosocial factors to biologically plausible intermediates. Although the downstream signals converting psychosocial stress into cellular dysregulation and finally into breast cancer are not well understood, animal and in vitro studies have implicated glucocorticoid hormones in this process [13-18].
Glucocorticoids play an important role in several cellular processes, including apoptosis, inflammation, mammary development, and tumorigenesis [19]. Glucocorticoid signaling is mediated through the functional isoform glucocorticoid receptor-alpha, which resides predominantly in the cytoplasm. GCR is expressed in almost all human tissues in a cell-specific manner [20,21]. GCR activity is modulated by its level, subcellular localization, and interactions with other genes. Altered GCR response has been associated with the pathogenesis of several diseases, such as altered susceptibility to sporadic breast cancer among Caucasian women [22], metabolic syndrome [23], cardiovascular disease [24], rheumatoid arthritis [25], and depression [26]. GCR is predominantly expressed in myoepithelial cells [27-30] in normal breast tissue and at all stages of breast cancer; however, the relationship between breast cancer progression and GCR expression and subcellular localization appears inconsistent. A wide range of GCR levels (0 to 90% positive cells) in the cytoplasmic and/or nuclear compartments has been reported previously in breast cancer tissue [27-30]. The purpose of this study was to examine the association between breast cancer characteristics and GCR in a series of breast cancer cases with defined clinical and histological characteristics, as we hypothesized that these alterations would be associated with breast cancer subtype or aggressiveness. We examined the association between breast cancer characteristics and genetic variants in GCR/NR3C1 and genes downstream of GCR activation: FKBP5, Sgk1, IL-6, ADIPOQ, LEPR, SOD2, CAT, and BCL2. To investigate GCR protein expression and subcellular localization, we used tissue microarrays (TMA) and multispectral digital imaging.

Study Population and Biological Samples
Patients and samples for this study are from the Breast Cancer Care in Chicago (BCCC) study, conducted by the UIC Center for Population Health and Health Disparities (NCI P50 CA106743). BCCC is a population-based cross-sectional study of women diagnosed with primary invasive breast cancer between 1 October 2005 and 29 February 2008 [31]. The parent study protocol was approved by the University of Illinois at Chicago Institutional Review Board (IRB#2010-0519). DNA samples and paraffin-embedded surgical samples were obtained from diagnosing hospitals prior to radiation or pharmacotherapy. A description of the BCCC cohort has been previously reported [32]. We had 656 cases with valid genetic ancestry estimates and linked clinical, sociodemographic, and epidemiological data to assess candidate gene variance. Tumor tissue from the invasive component of 287 cases was available for the immunohistochemical study (IHC).

SNP Selection and Genotyping
We genotyped 59 functionally important SNPs and tagging SNPs in GCR and GCR-associated genes. The SNPs were selected based on a minor allele frequency greater than 5% and previous association with GCR activity, breast cancer, or downstream related pathways such as inflammation and apoptosis. Genotyping was performed with the iPLEX Gold assay on a MALDI-TOF (matrix-assisted laser desorption/ionization time-of-flight) mass spectrometer (MassARRAY system) according to the manufacturer's recommendations. Genotyping quality control for all SNPs was assessed using blinded duplicate genotyping of 60 DNA samples. A genotype concordance rate of 99% was observed for all markers.
Genotyping call rates exceeded 98.5% for all individuals included in the analyses.

2.3. Self-Reported Race/Ethnicity and Genetic Ancestry with Ancestry Informative Markers (AIMS)
Race and ethnicity were each defined at the interview through separate self-identification of Hispanic ethnicity and race. Racial/ethnic groups were categorized as non-Hispanic White, non-Hispanic Black, and Hispanic. Global genetic ancestry for the BCCC cohort was previously reported [32]. Ten cases that self-reported as non-Hispanic White and had more than 70% West African genetic ancestry were excluded. After the exclusions, genotype information was available for a total of 656 cases.

Tissue Microarray Immunohistochemical Staining and Scoring
Three tissue microarrays (TMA) were constructed from the BCCC breast cancer case subcohort and stained as previously described [33]. The TMA consisted of tumor tissue from 287 cases, 26 normal breast tissues from unaffected women obtained by reduction mastectomy procedures, and five fibroadenomas. A list of antibodies used for immunohistochemical staining is summarized in Table A1. Immunohistochemical staining for GCR, performed by the UIC Research Histology and Tissue Imaging Core facility, was optimized by testing different sources and dilutions of the primary antibody and different antigen retrieval methods. Manual and digital scoring was performed as previously described [33]. GCR expression was evaluated based on the percentage of positive tumor cells and staining intensity. H-scores were calculated by summing the products of staining intensity (0, 1, 2, 3) and the percentage of cells (0-100%) in each intensity category (0, 1+, 2+, and 3+). The final scores were on a continuous scale between 0 and 300. The average H-score of the triplicate cores was used during analysis.

Statistical Analysis
Baseline characteristics of the population were compared across self-reported racial/ethnic groups using the χ² test for categorical variables and ANOVA for continuous variables. The primary response variable was GCR expression. GCR expression was dichotomized at the median to assess association with our outcome variables: stage at diagnosis, hormone receptor status, and histologic grade, as markers of breast cancer progression or aggressiveness. The stage at diagnosis was categorized using the American Joint Committee on Cancer (AJCC) categories (0-4), with later stage at diagnosis defined as stage ≥2 versus stage ≤1. Histological grade was determined through the Nottingham grading system, with higher grade defined as intermediate and high grade versus low grade. ER/PR status was defined as positive if the tumor contained ER and/or PR receptors and negative in the absence of both receptor types. Molecular subtypes were categorized as Luminal A, Luminal B, HER2+, and triple negative. We also fitted logistic regression models to estimate odds ratios (OR) and 95% confidence intervals (CIs). All reported p-values are two-sided, and a p-value < 0.05 was considered statistically significant. Statistical analyses were conducted using Stata version 11 (College Station, TX, USA). For each SNP, the deviation of genotype frequencies from Hardy-Weinberg equilibrium (HWE) was assessed using the χ² test. The homozygous wild-type genotype served as the reference category. Association analyses were performed under dominant, recessive, or additive modes of inheritance.
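The HWE check described above reduces to a one-degree-of-freedom goodness-of-fit test on the genotype counts; a minimal sketch, with hypothetical counts (the function and numbers are illustrative, not the study's code):

```python
import numpy as np
from scipy.stats import chi2

def hwe_chi2(n_aa, n_ab, n_bb):
    """1-df chi-square test for Hardy-Weinberg equilibrium
    from observed genotype counts (AA, AB, BB)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)                     # allele frequency of A
    expected = np.array([p**2, 2*p*(1 - p), (1 - p)**2]) * n
    observed = np.array([n_aa, n_ab, n_bb])
    stat = (((observed - expected) ** 2) / expected).sum()
    return stat, chi2.sf(stat, df=1)

# Hypothetical genotype counts for one SNP in one racial/ethnic group:
stat, pval = hwe_chi2(120, 110, 20)
print(f"chi2 = {stat:.2f}, p = {pval:.3f}")
```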
Separate logistic regression models were run for each self-reported racial/ethnic group (White, Black, and Hispanic), ancestry (European, West African, and Native American), and tumor characteristic to estimate ORs (95% CI). We performed separate analyses for each racial/ethnic group because of potential biological and environmental differences in factors contributing to breast cancer. The regression models were adjusted for health insurance, income, education, nulliparity, and age at first and last birth. All reported p-values are two-sided. A Bonferroni correction was used to account for multiple comparisons. Statistical analyses were conducted using R-Studio and Stata version 11 (College Station, TX, USA).

Baseline Characteristics of the BCCC Sub-Cohort for the Genetic Study
The final cohort included 250 White, 273 Black, and 120 Hispanic women; their tumor and demographic characteristics are summarized in Table 1. The mean age at diagnosis was 55 years (range 25 to 78 years). Black and Hispanic women were diagnosed at a later stage, with higher grade disease and a higher proportion of ER/PR-negative tumors than Whites. In addition, a greater proportion of Black and Hispanic women were overweight/obese, had more comorbidities, were less likely to have their cancer detected through screening mammography, had a lower level of education and income, and were less likely to have private insurance than Whites. The predominant genetic ancestry among White cases was European, with a mean of 90% (±SD 11%). The predominant genetic ancestry among Black cases was West African, with a mean of 80% (±SD 13%). Hispanic women had a wide range of European (mean 40%), Native American (mean 40%), and West African (mean 20%) genetic ancestry, representing a highly admixed group.

Characteristics of Studied Markers
In the current analysis, we examined polymorphisms in GCR, Sgk1, BCL2, FKBP5, IL6, ADIPOQ, LEPR, SOD2, and CAT. The polymorphisms, including minor allele frequencies (MAF) and HWE results by self-reported race/ethnicity, are summarized in Table A2. SNPs that failed the MAF and HWE (p = 0.05) criteria in each self-reported racial/ethnic group were removed (Table A3). We observed different allelic frequency distributions between the racial/ethnic groups for several SNPs (GCR: rs6191, rs33388, rs9324924, rs4607376; Sgk1: rs9493857; BCL2: rs2279115; LEPR: rs1137101; SOD2: rs4880). Our reported allele frequencies were similar to those in the Single Nucleotide Polymorphism Database [34]. A Bonferroni correction was used to account for multiple comparisons: there were 52 comparisons for the Black category, with a corrected alpha of 0.001, and 49 comparisons for Whites and Hispanics, also with a corrected alpha of 0.001. None of the associations between those SNPs and breast cancer characteristics remained statistically significant after adjustment for multiple comparisons. Table 2 summarizes the significant associations (p < 0.06) between higher histological grade at diagnosis and individual SNPs. Among the White cases, the GCR rs33388 TT and rs6191 GG genotypes were associated with two-fold increased odds of high-grade disease, the GCR rs41423247 GC+CC genotype was associated with lower grade disease (OR 0.56; 95% CI 0.32-0.99), and the IL-6 rs1800797 AG+AA genotype was associated with higher grade (OR 1.99; 95% CI 1.07-3.73).
Among the Black cases, the GCR rs10052957 AG+AA, rs258813 AA, rs2918418 AA, rs33388 AA, rs41423247 GC/CC, rs6188 TT, rs6191 TT, and rs9324924 GG genotypes were associated with higher grade disease, whereas the GCR rs10482616 GA+AA, rs10482672 TC+TT, rs7701443 AG+GG, and rs9296158 AA genotypes were associated with lower grade disease. The FKBP5 rs9296158 AA genotype was associated with lower grade disease (OR 0.45; 95% CI 0.23-0.9). Among the Hispanic cases, only the GCR rs9324924 GT+TT genotype was associated with higher grade disease (OR 3.14; 95% CI 0.99-10). None of the associations between those SNPs and grade at diagnosis remained statistically significant after adjustment for multiple comparisons.

Genotypes and Stage at Diagnosis
We examined the association between later stage at diagnosis and individual SNPs among breast cancer cases (Table 3). Among Black cases, the A allele of GCR rs10482614 was associated with later stage at diagnosis (OR 8; 95% CI 2-39), but there were few cases (n = 12) in this category. Several SNPs in the FKBP5 gene were associated with stage at diagnosis. Among Black cases, the FKBP5 rs3777747 GG genotype was associated with later stage at diagnosis (OR 2; 95% CI 0.98-4.11). However, the FKBP5 rs3800373 GT+GG, rs9296158 AG+AA, and rs9470080 CT+TT genotypes were associated with a nearly 50% decreased prevalence of later stage at diagnosis. For Hispanic cases, the ADIPOQ rs1501299 CA+AA genotype was associated with decreased odds of later stage (OR 0.39; 95% CI 0.17-0.87), while the rs266729 GC+GG genotype was associated with later stage (OR 3.01; 95% CI 1.35-6.73). None of the tested SNPs were statistically significant at the p < 0.06 level for White cases. None of the associations between the studied SNPs and stage at diagnosis remained statistically significant after adjustment for multiple comparisons.

Genotypes and Hormone Receptor Status
Table 4 summarizes the significant associations (p < 0.06) between ER/PR positivity and individual SNPs. Among White cases, we found an inverse association between the CC genotype of GCR rs12656106 and ER/PR positivity (OR 0.47; 95% CI 0.17-1.35). For Black cases, the GCR rs10482616 GA+AA, ADIPOQ rs1501299 CA+AA, and BCL2 rs2279115 AA genotypes were associated with ER or PR receptor positivity. None of the SNPs were significant at alpha < 0.06 among Hispanic cases. Overall, none of the associations between those SNPs and hormone receptor status remained statistically significant after adjustment for multiple comparisons. All the significant results (p < 0.06) are summarized in Table 5.

3.6. Characteristics of the BCCC Subcohort for the TMA Study
We measured GCR protein expression in breast cancer tissue from 287 cases. The descriptive statistics of this subset are summarized in Table A4. The mean age at diagnosis was 56 years (SD ± 11), and the cases consisted of 103 Black, 84 White, and 80 Hispanic patients. The cases in the subcohort still showed the racial/ethnic disparity in the distribution of patient characteristics. A greater proportion of Black and Hispanic women were overweight/obese, had more comorbidities, were less likely to have their cancer detected through screening mammography, had a lower level of education and income, and were less likely to have private insurance than Whites. Black women were diagnosed at a later stage, with higher grade disease. Most of the cases were of the ductal histological type and luminal A molecular subtype, and were ER and/or PR positive (Table 6).
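The Bonferroni threshold and the odds ratios reported in the association analyses above reduce to short calculations; a minimal sketch with hypothetical counts (the 2x2 table values are illustrative, not the study's data):

```python
import numpy as np

# Bonferroni-corrected threshold quoted above: 0.05 / 52 tests ~ 0.001
alpha, n_tests = 0.05, 52
print(f"corrected alpha = {alpha / n_tests:.5f}")

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and ~95% CI (Woolf/log method) from a 2x2 table:
    a, b = carriers with/without the outcome;
    c, d = non-carriers with/without the outcome."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)
    return or_, (or_ * np.exp(-z * se_log), or_ * np.exp(z * se_log))

# Hypothetical genotype-by-grade counts:
print(odds_ratio_ci(30, 40, 20, 55))
```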
GCR Expression and Subcellular Localization in Normal and Cancer Tissue
Representative images of nuclear GCR staining intensity in normal, fibroadenoma, and cancerous breast tissue, along with digital imaging annotation, are shown in Figure 1. In normal breast tissue, GCR was expressed predominantly in the nuclei of the myoepithelial cell layer that surrounds normal ducts and lobules. The luminal layer in normal breast tissue was negative for GCR. Among the fibroadenoma samples, GCR staining was not limited to the myoepithelial layer, as nuclear and cytoplasmic staining of luminal epithelial cells was also detected. There was diffuse GCR staining throughout the cancer foci in the breast cancer tissue, which was likely due to the loss of normal glandular architecture and outlining myoepithelium in these malignant cells.

GCR Expression in Normal and Cancer Tissue Using Digital Scoring
GCR was detected in both the cytoplasmic and nuclear compartments of the normal myoepithelial cells and the GCR-positive breast cancer cells. Despite the low expression of GCR in the cytoplasm relative to the nuclear compartment, there was a strong correlation between nuclear and cytoplasmic H-scores (Spearman's ρ = 0.80; p < 0.00001; r² = 0.72). GCR staining was lower in cancer tissue compared with normal tissue and fibroadenoma samples. When we dichotomized nuclear H-scores for breast cancer cases at the sample median for all samples (median H-score = 17), 44% of breast cancer cases had positive nuclear staining, as opposed to 100% in normal breast tissue and fibroadenoma (Table 6). In breast cancer tissue, cytoplasmic staining (mean H-score = 3) was weaker than nuclear staining (mean H-score = 29); 57% of breast cancer TMA cores had an H-score of 0 for cytoplasmic GCR. We did not observe a statistically significant difference in GCR staining among breast cancer subtypes. However, compared with ductal carcinoma, lobular carcinoma had greater nuclear GCR expression (mean H-scores: 27 vs. 36, respectively) and a greater percentage of nuclear-positive cases (42% versus 48%, respectively).
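The digital scores above follow directly from per-cell intensity calls; a minimal sketch of the H-score calculation and of a nuclear-cytoplasmic rank correlation, on hypothetical per-core data (names and values illustrative, not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

def h_score(pct_by_intensity):
    """H-score = sum(intensity * % cells) over intensities 0-3; range 0-300."""
    return sum(i * pct for i, pct in enumerate(pct_by_intensity))

# Hypothetical core: 60% negative, 25% at 1+, 10% at 2+, 5% at 3+
print(h_score([60, 25, 10, 5]))                 # -> 60

# Hypothetical paired nuclear/cytoplasmic H-scores across 50 cores:
rng = np.random.default_rng(0)
nuclear = rng.uniform(0, 120, 50)
cytoplasmic = 0.1 * nuclear + rng.normal(0, 2, 50)
rho, p = spearmanr(nuclear, cytoplasmic)
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
```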
Correlation between Nuclear GCR Expression and Breast Cancer Characteristics
We measured the association of GCR staining with clinicopathologic characteristics and with histological and molecular breast tumor subtypes. Positive nuclear GCR expression was weakly associated with any strong family history of breast cancer (p = 0.069) but was not associated with self-reported race, BMI, nulliparity, menopausal status, stage or grade at diagnosis, or subtypes of breast cancer.

Correlation between Nuclear GCR and CK 5/6 Expression
In our immunohistochemical study, nuclear GCR staining strongly correlated with cytoplasmic CK 5/6 expression, a marker of the tumor's basal nature. Figure 2 shows a representative staining pattern of CK 5/6 in nontumor and breast cancer tissue, illustrating the correlation between GCR and CK 5/6. We observed diffuse cytoplasmic staining of CK 5/6 in the myoepithelial cells in nontumor and tumor breast tissue. There was a statistically significant difference in the mean H-score of nuclear GCR between CK5/6-high (mean = 36) and CK5/6-low (mean = 19) samples. Multivariate logistic regression of high CK 5/6 on high GCR, adjusting for race, age at diagnosis, stage, grade, and histological category, revealed that high GCR expression remained associated with CK5/6 expression (OR 3.3; 95% CI 1.6-6.9). CK 5/6 was not associated with race/ethnicity, age at diagnosis, hormone receptor status, stage and grade at diagnosis, or breast cancer subtypes.

Discussion
Several genetic variants were associated with later disease stage, higher grade, and hormone receptor-negative status even after correction for population stratification, before adjustment for multiple comparisons. Two functional SNPs in GCR (rs6191, rs33388) were associated with a higher grade among White and Black cases, but not Hispanic cases. The minor allele associated with the phenotype differed between the racial/ethnic groups. The minor allele G of rs6191 was associated with an increased prevalence of high grade among White cases, while the minor allele T was associated with higher grade among Black cases. The minor allele T of rs33388 was associated with an increased prevalence of high grade among White cases, while the minor allele A was associated with higher grade among Black cases. The rs41423247 variant in GCR was associated with lower grade in the White cases; it has been shown to be associated with hypersensitivity to glucocorticoids [35]. The rs9324924 variant was associated with higher grade in the Black and Hispanic cases; however, the minor G allele in Blacks and the GT and TT genotypes among Hispanics were associated with higher grade. The rs10482616 GA+AA genotype was associated with ER or PR receptor positivity among Black cases. It is hard to interpret the impact of these variants on breast cancer characteristics, as none of these SNPs have been previously studied in breast cancer. We observed an inverse relationship between stage at diagnosis and three FKBP5 SNPs (rs3800373, rs9296158, rs9470080) among Black cases. FKBP5 is a co-chaperone that belongs to the immunophilin family. Immunophilins are a large, functionally diverse group of proteins defined by their ability to bind immunosuppressive ligands. FKBP5 expression is highly inducible by glucocorticoids and functions as a negative transcriptional regulator of GCR [36]. In addition, over-expression of FKBP5 impairs nuclear localization of GCR (Binder, 2009).
The rs3800373, rs9296158, and rs9470080 FKBP5 SNPs have been associated with higher FKBP5 expression and a more potent induction of FKBP5 mRNA by cortisol [37]. Romano et al. observed low/negative protein expression of FKBP5 among ten breast cancer samples [38]. If these associations are real and not a result of type 1 error, it is possible that these FKBP5 polymorphisms might be reducing GCR activation by inhibiting nuclear translocation. We identified associations of two ADIPOQ SNPs (rs1501299 and rs266729) with stage at diagnosis among Hispanic cases. The ADIPOQ rs1501299 CA+AA genotypes were protective against later stage (OR 0.39; 95% CI 0.17-0.87), while the ADIPOQ rs266729 GC+GG genotypes were associated with later stage (OR 3.01; 95% CI 1.35-6.73). The ADIPOQ rs1501299 CA+AA genotype was associated with ER or PR receptor positivity among Black cases. These two SNPs have been previously associated with circulating levels of ADIPOQ and with breast cancer. Kaklamani et al. previously showed that rs1501299 was associated with increased breast cancer prevalence among African American women [39]. The G allele at rs266729 is associated with lower adiponectin levels and obesity [40]. Among the White cases, a higher grade at diagnosis was associated with the IL-6 rs1800797 AG+AA genotype. IL-6 is an inflammatory cytokine; high serum levels of IL-6 have been shown to correlate with poor outcomes in breast cancer patients [41], and several IL6 SNPs have been associated with breast cancer risk and prognosis [42]. The B-cell CLL/lymphoma 2 (BCL2) gene encodes an antiapoptotic protein, a critical regulator of programmed cell death. Higher levels of BCL2 expression in breast tumors have been shown to be an independent prognostic factor for improved survival from breast cancer [43]. The BCL2 rs2279115 AA genotype was associated with ER or PR receptor positivity. Bachman et al. found that higher expression of BCL2 was associated with the A allele, and survival analysis revealed a significant association of the AA genotype with improved survival [44]. It is possible that those SNPs are not causal; it is also possible that causal SNPs exist but are not located in the measured SNPs' vicinity. Given the modest sample sizes within racial and ethnic groups and the large number of SNPs analyzed, genetic variants were not associated with breast cancer characteristics after multiple comparison corrections. This was not unexpected given the limited statistical power revealed by post hoc power analyses, which generally implied a power to detect associations of 60% or less (not shown). Glucocorticoid signaling via GCR regulates many physiological processes, including those involved in mammary development and differentiation. We examined the protein expression of GCR in breast tissue from our BCCC breast cancer case subcohort in a TMA study and compared it to nontumor tissue. We found that GCR was expressed in all normal and fibroadenoma samples and was mainly localized in myoepithelial cells. There was a marked reduction in nuclear GCR expression in breast cancer tissue compared to normal or benign breast tissue lesions, which might be due to disruption of the myoepithelial cell layer and basement membrane during tumor invasion [45]. Our findings could reflect either that GCR is involved in a biological pathway leading to breast cancer or that it is a marker of other causal mechanisms associated with breast cancer development.
GCR has been shown to promote both cell survival and cell death, depending on the cell type. High expression of the GCR gene is associated with poor outcomes in ER− patients and better outcomes in ER+ patients [46]. Based on our findings, we propose that GCR has a tumor suppressor role in breast cancer. The downregulation of nuclear GCR observed in our study has also been observed in prostate cancer, another hormone-sensitive tumor [47]. GCR was shown to exert tumor suppressor effects in a skin cancer mouse model [48]. It would be important to compare GCR levels in adjacent histologically normal areas, in situ components, and invasive components from the same patient to examine expression changes during breast tumorigenesis. Several studies from different countries across various ethnic groups have detected both cytoplasmic and nuclear GCR expression using either monoclonal [28-30] or polyclonal antibodies against GCR [27]. Our results are in agreement with the pattern of decreased nuclear GCR expression reported in these prior studies. However, we did not observe a decrease in nuclear GCR expression or an increase in cytoplasmic GCR with tumor progression [30]. We found that cytoplasmic GCR positively correlated with nuclear GCR expression. Unlike one of these studies [28], we did not find any correlation between GCR expression and age at diagnosis or the histological and molecular subtypes of breast cancer. We observed a strong correlation between GCR and CK5/6. Cytokeratins are filament-forming proteins that provide mechanical support in epithelial cells [49]. In normal tissue, CK5/6 is mainly expressed in the basal-myoepithelial cell layer of the prostate, breast, and salivary glands. CK5/6 is also seen in benign and malignant tumors of epidermal, squamous mucosal, and myoepithelial origins [50]. Cytokeratins 5/6 are found in the cells of the basal layer of normal breast ducts [51]. Expression of CK 5/6 has been associated with poor breast cancer prognosis and is an independent indicator of shorter relapse-free survival [52]. Furthermore, immunohistochemical expression of basal CK5/6 is associated with aggressive disease and adversely impacts survival in HER2+ breast cancer patients [53]. It is difficult to reconcile the correlation of a possible tumor suppressor, GCR, with a marker of an aggressive breast cancer phenotype, CK 5/6. There might be a functional connection between GCR and CK 5/6 independent of breast cancer. GCR knockout mice have significant skin development defects, with impaired keratinocyte differentiation and aberrant proliferation and apoptosis [54]. A strength of this study is that the samples came from a population-based study of breast cancer patients with detailed demographic and clinical data; the findings may therefore be generalizable to an urban population. This study's limitations include its cross-sectional nature, which limits the ability to assess temporal aspects of our associations. There are also limitations in the tissue microarray and immunohistochemical staining techniques used in this and other studies: the stained tissue might not represent the whole tumor due to tumor heterogeneity.

Conclusions
To the best of our knowledge, this is the first study to examine the relationship between GCR and GCR-related gene polymorphisms, GCR protein expression, and breast cancer characteristics.
Activation of the glucocorticoid-mediated pathway plays an essential role in several cellular processes, and disruption of GCR activity could play a role in breast cancer progression and aggressiveness. Using samples from an urban, multiracial study of breast cancer, we found several genetic variants of functionally important SNPs associated with later disease stage, higher grade, and hormone receptor-negative status. GCR protein was expressed in all normal and fibroadenoma samples. GCR expression was reduced in breast cancer tissue and correlated with the basal cell marker CK5/6.

Conflicts of Interest: The authors declare no conflict of interest.

p-values for categorical variables are from χ² tests and, for continuous variables, from ANOVA, for differences according to self-reported race/ethnicity.
2021-05-28T05:21:22.046Z
2021-05-01T00:00:00.000
{ "year": 2021, "sha1": "4f350ca9cd87587613cdd787352ac9a4dd0e2ce4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/13/10/2261/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4f350ca9cd87587613cdd787352ac9a4dd0e2ce4", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
201835774
pes2o/s2orc
v3-fos-license
Leishmania braziliensis: Strain-Specific Modulation of Phagosome Maturation

Leishmania (Viannia) braziliensis is responsible for the largest number of cases of American tegumentary leishmaniasis (ATL) in Brazil. ATL can present several clinical forms, including typical (TL) and atypical (AL) cutaneous and mucocutaneous (ML) lesions. To identify parasite and host factors potentially associated with these diverse clinical manifestations, we first surveyed the expression of two virulence-associated glycoconjugates, lipophosphoglycan (LPG) and the metalloprotease GP63, by a panel of promastigotes of Leishmania braziliensis (L. braziliensis) strains isolated from patients with different clinical manifestations of ATL and from the sand fly vector. We observed a diversity of expression patterns for both LPG and GP63, which may be related to strain-specific polymorphisms. Interestingly, we noted that GP63 activity varies from strain to strain, including the ability to cleave host cell molecules. We next evaluated the ability of promastigotes from these L. braziliensis strains to modulate phagolysosome biogenesis in bone marrow-derived macrophages (BMM) by assessing phagosomal recruitment of the lysosome-associated membrane protein 1 (LAMP-1) and intraphagosomal acidification. Whereas three out of six L. braziliensis strains impaired the phagosomal recruitment of LAMP-1, only the ML strain inhibited phagosome acidification to the same extent as the L. donovani strain that was used as a positive control. While decreased phagosomal recruitment of LAMP-1 correlated with higher LPG levels, decreased phagosomal acidification correlated with higher GP63 levels. Finally, we observed that the ability to infect and replicate within host cells did not fully correlate with the inhibition of phagosome maturation. Collectively, our results revealed a diversity of strain-specific phenotypes among L. braziliensis isolates, consistent with the high genetic diversity within Leishmania populations.

INTRODUCTION
The various species of the protozoan parasite Leishmania cause a spectrum of human diseases ranging from a relatively confined cutaneous lesion to a progressive and potentially fatal visceral infection (Alvar et al., 2012). Upon delivery into the vertebrate host by an infected sand fly, metacyclic Leishmania promastigotes are engulfed by phagocytes.
To avoid destruction, these parasites have evolved efficient means of disarming the microbicidal functionality of their host cells (Arango Duque and Descoteaux, 2015; Podinovskaia and Descoteaux, 2015; Atayde et al., 2016; Martínez-López et al., 2018). To achieve this, infectious promastigotes rely on a panoply of virulence factors, including two abundant components of their surface coat, the glycolipid lipophosphoglycan (LPG) and the GPI-anchored zinc metalloprotease GP63 (Moradin and Descoteaux, 2012; Olivier et al., 2012; Arango Duque and Descoteaux, 2015; Atayde et al., 2016). The use of mutants defective in either LPG or GP63 revealed that these molecules are indeed important for the colonization of phagocytic cells by promastigotes of Leishmania donovani (L. donovani) (Desjardins and Descoteaux, 1997; Lodge et al., 2006), Leishmania major (L. major) (Späth et al., 2000; Joshi et al., 2002), and Leishmania infantum (L. infantum) (Lázaro-Souza et al., 2018), all of which live in tight individual vacuoles. These virulence factors exert a profound impact on infected cells, altering signaling pathways (Descoteaux et al., 1991; Shio et al., 2012), inducing the production of inflammatory cytokines (Arango Duque et al., 2014), activating the inflammasome (de Carvalho et al., 2019), and inhibiting phagolysosomal biogenesis and functionality (Desjardins and Descoteaux, 1997; Späth et al., 2003; Lodge et al., 2006; Vinet et al., 2009; Matheoud et al., 2013). Of note, defective synthesis of LPG has no measurable effect on the ability of Leishmania mexicana (L. mexicana), which lives in large communal vacuoles, to replicate in cultured macrophages and cause lesions in mice (Ilg, 2000; Ilg et al., 2001). These findings underline the fact that the relative contribution of a given virulence factor to the ability of promastigotes to colonize mammalian hosts varies among Leishmania species. Leishmania braziliensis (subgenus Viannia) is responsible for the largest number of American tegumentary leishmaniasis (ATL) cases in Brazil (Alvar et al., 2012; PAHO/WHO, 2017). ATL may exhibit several clinical forms, including typical (TL), atypical (AL), and mucocutaneous (ML) lesions. TL may be confined to the bite site or metastasize to the oronasopharyngeal mucosa to give rise to ML. L. braziliensis AL lesions are scarce; they have been previously reported by Guimarães et al. in Bahia State (Guimarães et al., 2009) and by Quaresma et al. in Minas Gerais State (Quaresma et al., 2018). Those lesions do not resemble classical TL lesions (round, ulcerated with elevated borders), and their ambiguous nature hinders correct diagnosis. Whether variations in GP63 and LPG levels are associated with the various clinical manifestations of ATL has not been investigated. In this regard, studies aimed at characterizing GP63 in L. braziliensis revealed the presence of nearly 40 copies of this gene, as well as important sequence polymorphisms among clinical isolates (Medina et al., 2016). Characterization of LPG from L. braziliensis promastigotes revealed structural and compositional similarities to that of L. donovani (Soares et al., 2005), as well as its strain-dependent capacity to induce inflammatory mediator release (Vieira Td et al., 2019). To date, studies on the modulation of phagolysosome biogenesis by Leishmania promastigotes and on the contribution of LPG and GP63 to this process have focused mainly on species of the subgenus Leishmania. In the present study, we examined the levels of LPG and GP63 in a panel of L. braziliensis strains and surveyed their ability to interfere with phagosome maturation.
Ethics Statement
This study was carried out in accordance with the recommendations of the Canadian Council on Animal Care on animal handling practices. Protocol 1706-07 was approved by the Comité Institutionel de Protection des Animaux of the INRS-Institut Armand-Frappier. Leishmania braziliensis field strains were obtained from patients living in the Xakriabá indigenous community located in São

Cell Culture
Bone marrow-derived macrophages (BMM) were obtained from the bone marrow of 6-8 week-old female C57BL/6 mice and differentiated in complete DMEM [containing L-glutamine (Life Technologies), 10% v/v heat-inactivated fetal bovine serum (FBS) (Life Technologies), 10 mM HEPES (Bioshop) at pH 7.4, and penicillin-streptomycin (Life Technologies)] supplemented with 15% v/v L929 cell-conditioned medium (LCM) as a source of macrophage colony-stimulating factor. To render BMM quiescent prior to experiments, cells were transferred to tissue culture-treated plates containing glass coverslips for 16 h in complete DMEM without LCM (Descoteaux and Matlashewski, 1989). BMM were kept in a humidified 37 °C incubator with 5% CO₂.

Infections and Phagosome Acidification Assays
Late stationary phase promastigotes (5-day cultures at >50 × 10⁶ promastigotes/ml) from an early passage, or zymosan particles, were opsonised with serum from C5-deficient DBA/2 mice, resuspended in cold complete DMEM, and fed to BMM (10:1 ratio) that had been seeded onto glass coverslips. Cells were incubated at 4 °C for 5 min and centrifuged for 2 min at 1,200 rpm (Arango Duque et al., 2013). Particle internalization was triggered by transferring the cells to 37 °C (Vinet et al., 2008; Arango Duque et al., 2014). Two hours post-internalization, infected macrophages were washed three times with 1 ml warm DMEM to remove non-internalized promastigotes. Macrophages were either left at 37 °C for an extra 22 h or prepared for confocal microscopy. To assay phagosome acidification, BMM were incubated for 2 h with the acidotropic LysoTracker Red dye (diluted 1:1,000; Molecular Probes) prior to the 2 h infection. In the case of the 24 h infection, infected macrophages were incubated in diluted LysoTracker for 2 h prior to the end of the infection time point. Cells were then washed and fixed. For intracellular colonization assays, BMM seeded on coverslips and infected for 6, 24, or 72 h were washed with PBS1X, stained with the Hema 3™ Stat Pack (Fisher), briefly washed with deionized water, and air-dried for 10 min. Coverslips were mounted onto a drop of Fluoromount-G and sealed. Images were acquired with a QImaging camera (Teledyne Technologies International Corp) mounted on a Nikon Eclipse E800 microscope (60X objective). Images were compiled and analyzed with the ImageJ (Rueden et al., 2017) interface of the Icy image analysis software (de Chaumont et al., 2012). Threshold segmentation was used to differentiate and enumerate BMM and intracellular Leishmania nuclei.

Confocal Immunofluorescence Microscopy
Infected cells on coverslips were fixed with 2% paraformaldehyde (Thermo Scientific) for 20 min and blocked and permeabilized for 17 min with a solution of 0.1% Triton X-100, 1% BSA, 6% non-fat milk, 20% goat serum, and 50% FBS. This was followed by a 2 h incubation with a monoclonal rat antibody to lysosome-associated membrane protein 1 (LAMP-1) (clone 1D4B, developed by J. T. August and purchased through the Developmental Studies Hybridoma Bank at the University of Iowa and the National Institute of Child Health and Human Development), diluted 1:200 in PBS1X.
Subsequently, cells were incubated for 35 min in a solution containing an anti-rat antibody conjugated to Alexa-488 (diluted 1:500; Molecular Probes) and DAPI (1:40,000; Molecular Probes). Coverslips were washed three times with PBS1X after every step. After the final wash, coverslips were mounted cell-side down on a drop of Fluoromount-G (Southern Biotechnology Associates) placed on a glass slide (Fisher); coverslips were sealed with nail polish (Sally Hansen). Infected macrophages were imaged with the 63X objective of an LSM780 confocal microscope (Carl Zeiss Microimaging), and image processing was done with the ZEN 2012 software. For LysoTracker-treated cells, fixed samples were incubated in diluted DAPI for 35 min prior to mounting. Recruitment was evaluated by scoring the presence of staining on the phagosome membrane (LAMP-1) and/or in the phagosome lumen (LysoTracker) (Vinet et al., 2009; Arango Duque et al., 2014). One hundred phagosomes per coverslip were scored for every experimental condition, each done in duplicate.

Statistical Analysis
Statistical differences in recruitment levels were assessed using one-way ANOVA followed by Bonferroni post-hoc tests. Data were considered statistically significant when p < 0.05, and univariate column scatter graphs were constructed using GraphPad Prism 6.0 (GraphPad Software Inc).

Expression of LPG and GP63 Varies Among Strains of L. braziliensis
The ability of Leishmania promastigotes to colonize host cells and impair phagosome maturation and functionality is mediated to a large extent by the virulence factors LPG and GP63 (Chaudhuri et al., 1989; Späth et al., 2003; Moradin and Descoteaux, 2012; Atayde et al., 2016). Here, we sought to determine the relative levels of LPG and GP63 expressed by promastigotes of a panel of L. braziliensis strains differing in their origin (Table 1). We included in our analysis L. major (NIH Seidman A2) and L. donovani (LV9) promastigotes as controls. Western blot analysis performed on promastigote lysates showed notable variations in the levels of LPG among the tested strains (Figure 1). In particular, whereas the levels of LPG expressed by L. braziliensis RR410 were similar to those observed for L. donovani LV9, the levels detected in the other L. braziliensis strains were lower. In the case of GP63, we observed important differences among the L. braziliensis strains (Figure 1). Both L. braziliensis strains RR051 and M15991 expressed GP63 at levels comparable to those observed for L. donovani LV9. In contrast, GP63 levels were very low in the other strains. Interestingly, when we assessed the proteolytic activity of GP63 present in the Leishmania promastigote lysates, we observed a lack of correlation with the GP63 levels detected by Western blot (Figure 1). Notably, L. braziliensis strains with low levels of GP63 (M2903 and RR418) showed high GP63 proteolytic activity, whereas L. braziliensis strains expressing higher GP63 levels (RR051 and M15991) showed reduced GP63 activity. These observations clearly demonstrate important intra-specific variations in the levels of LPG and GP63 (as well as GP63 activity) expressed by L. braziliensis strains isolated from patients with diverse ATL manifestations and from the insect vector.
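The recruitment scoring and statistics described in the Methods reduce to comparing percent-positive phagosome counts across groups; a minimal sketch with hypothetical per-coverslip scores (names and values illustrative, not the study's data):

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Hypothetical % LAMP-1-positive phagosomes per coverslip, one array per group:
groups = {
    "L. donovani LV9": np.array([18, 22, 20, 25]),
    "L. braziliensis": np.array([30, 28, 35, 33]),
    "zymosan":         np.array([85, 90, 88, 92]),
}

# One-way ANOVA across the groups:
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.2e}")

# Bonferroni-corrected pairwise comparisons (a simple post-hoc scheme):
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p_raw = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: adjusted p = {min(1.0, p_raw * len(pairs)):.3f}")
```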
Cleavage of GP63 Substrates by L. braziliensis Strains
Given the variations in GP63 levels and activity observed among the L. braziliensis isolates, we investigated the impact of these differences on the cleavage of phagosomal host cell proteins known to be targeted by GP63 (Matheoud et al., 2013). To this end, we performed Western blot analyses to assess the levels and integrity of the soluble N-ethylmaleimide-sensitive-factor attachment protein receptors (SNAREs) VAMP8 and Stx5 in lysates of BMM infected for 6 h with promastigotes of selected L. braziliensis strains (M2903, RR418, M15991, and M8401) and with promastigotes of L. major NIHS A2 as a control. As shown in Figure 2, VAMP8 was cleaved to the same extent by all L. braziliensis strains and by L. major NIHS A2, regardless of the levels and activity of GP63 detected in the cell lysates. In contrast, cleavage of the endoplasmic reticulum (ER)- and Golgi-resident SNARE Stx5 was strain-dependent and did not entirely correlate with the levels and activity of GP63 detected in the cell lysates (Figure 2). Collectively, these results indicate that cleavage of host cell GP63 substrates occurs in BMM infected with all L. braziliensis strains tested, albeit with some differences in the extent of cleavage. These findings also suggest that sensitivity to GP63 cleavage is substrate-specific.

FIGURE 2 | Proteolytic cleavage of phagosome-associated proteins by Leishmania braziliensis (L. braziliensis). BMM were infected with opsonized stationary phase promastigotes from selected L. braziliensis strains, and the cleavage of the phagosomal proteins VAMP8 and Stx5 was assessed via Western blot. GP63 activity was also assayed via gelatin zymography, and β-actin was used as loading control. The long form of Stx5 is localized at the Golgi and ER, and the short one at the Golgi. Asterisks denote cleavage fragments. Lm, L. major; Lb, L. braziliensis.

L. braziliensis Impairs Phagosomal Recruitment of LAMP-1 in a Strain-Specific Manner
Given the variations observed among our panel of L. braziliensis strains in LPG and GP63 levels and activity, as well as in substrate cleavage, we investigated the impact of L. braziliensis promastigotes on phagosome maturation. To this end, we incubated BMM with promastigotes from our panel of L. braziliensis strains for 2 and 24 h and assessed the recruitment of the lysosomal marker LAMP-1 to phagosomes. Promastigotes of L. donovani (LV9 strain), which efficiently inhibit phagosome maturation and phagosomal recruitment of LAMP-1 (Scianimanico et al., 1999), and zymosan were used as controls. At 2 h after the initiation of phagocytosis, we observed a higher recruitment of LAMP-1 to phagosomes containing L. braziliensis strains RR051 and M15991 compared to phagosomes containing L. donovani LV9 (Figures 3A,B). As expected, recruitment of LAMP-1 to phagosomes containing zymosan was higher than that observed for phagosomes induced by promastigotes of L. donovani LV9 and of the L. braziliensis strain isolated from an AL lesion (RR410) (Figure 3). At 24 h post-infection, the presence of LAMP-1 on phagosomes harboring L. donovani LV9 promastigotes remained very low, as was also the case for phagosomes containing L. braziliensis M2903, RR410, and RR418. However, recruitment of LAMP-1 to phagosomes harboring L. braziliensis RR051 and M8401 was significantly higher than the levels observed for phagosomes containing the other L. braziliensis strains and L. donovani LV9 (Figure 3). These results suggest that the ability of L. braziliensis promastigotes to interfere with phagosome maturation varies among strains.
Phagosome Acidification Is Differentially Modulated by L. braziliensis Strains
Consistent with their ability to inhibit phagolysosome biogenesis (Desjardins and Descoteaux, 1997), we previously reported that L. donovani promastigotes efficiently impair phagosome acidification (Vinet et al., 2009). To further characterize the impact of L. braziliensis promastigotes on phagosome maturation, we used the acidotropic dye LysoTracker Red to monitor the acidification kinetics of phagosomes harboring the various L. braziliensis strains. Consistent with previous studies, at 2 h post-infection, acidification occurred in the majority of zymosan-harboring phagosomes but was hindered in phagosomes containing L. donovani LV9 promastigotes (Figures 4A,B). A similar impairment of phagosome acidification was observed for all L. braziliensis strains, with the exception of the strain isolated from an AL lesion (RR410) (Figures 4A,B). At 24 h post-infection, most phagosomes harboring L. donovani LV9 (80%) and the L. braziliensis ML isolate (M15991) (70%) remained negative for LysoTracker Red (Figure 4). In contrast, over 70% of phagosomes containing L. braziliensis isolates RR418 and RR410 were positive for LysoTracker Red at 24 h (Figure 4B). These data indicate that most L. braziliensis strains in our panel inhibit phagosome acidification during the early phase of macrophage infection. However, at later time points, the capacity to hinder phagosome acidification varies in a strain-specific manner.

Colonization of Macrophages by L. braziliensis Strains Does Not Fully Correlate With the Ability to Inhibit Phagosome Maturation and Acidification
Previous studies with L. donovani and L. major (Desjardins and Descoteaux, 1997; Späth et al., 2003; Vinet et al., 2009) revealed a correlation between the ability of these parasites to impair phagosome maturation and their ability to colonize macrophages. To investigate whether such a correlation exists for the L. braziliensis strains under study, we incubated BMM for 2 h with promastigotes of selected strains (M2903, RR418, M15991, and M8401) and with promastigotes of L. major NIHS A2 as a control. We then quantified the number of parasites per 100 macrophages and the percentage of infected macrophages at 6, 24, and 72 h post-phagocytosis. As shown in Figure 5, the ability to survive and replicate over time within BMM varied among the L. braziliensis strains analyzed. With the exception of strain M2903, which displayed a reduced ability to survive in BMM over 72 h, all the other strains persisted, and two of them (RR418 and M8401) replicated, as was the case for L. major NIHS A2. Interestingly, L. braziliensis strain M2903, which survived poorly in BMM, was among the most efficient strains at inhibiting phagosome maturation (Figures 3, 4). For strains RR418 and M8401, the ability to replicate in BMM correlated with the capacity to impair phagosomal recruitment of LAMP-1 and acidification during the early phases of infection (Figures 3, 4). These data are consistent with the notion that factor(s) other than the capacity to impair phagosome maturation are required for the colonization of host cells by L. braziliensis.
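The colonization readouts above (parasites per 100 macrophages and percentage of infected macrophages) follow directly from the per-cell counts produced by the threshold segmentation described in the Methods; a minimal sketch on hypothetical counts (illustrative only):

```python
def colonization_metrics(parasites_per_mac):
    """Summarize an intracellular colonization assay.

    parasites_per_mac: one integer per macrophage scored in the stained
    coverslips (0 = uninfected macrophage).
    """
    n_mac = len(parasites_per_mac)
    n_infected = sum(1 for n in parasites_per_mac if n > 0)
    return {
        "parasites_per_100_mac": 100 * sum(parasites_per_mac) / n_mac,
        "pct_infected": 100 * n_infected / n_mac,
    }

# Hypothetical counts for 100 scored macrophages:
counts = [0, 2, 3, 0, 1, 4, 0, 0, 2] + [0] * 91
print(colonization_metrics(counts))
# {'parasites_per_100_mac': 12.0, 'pct_infected': 5.0}
```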
In the present study, we sought to examine the levels of LPG and GP63 expressed by promastigotes of L. braziliensis (subgenus Viannia) strains isolated from patients exhibiting various clinical manifestations of ATL and from the insect vector. We also characterized the ability of these L. braziliensis strains to impair phagosome maturation and to infect and replicate within macrophages. Our results revealed an unexpected diversity of expression patterns for both LPG and GP63 among the evaluated L. braziliensis strains. Although some strains expressed LPG levels similar to those of L. donovani LV9 promastigotes, other strains expressed very low LPG levels. Similarly, some L. braziliensis strains expressed GP63 levels comparable to those observed in L. donovani LV9, whereas other strains expressed very low GP63 levels. Interestingly, we noted that GP63 activity varies from strain to strain and does not correlate with GP63 levels detected by Western blot. Whether the polymorphisms detected in the GP63 genes of L. braziliensis (Medina et al., 2016) affected the recognition of GP63 by our anti-GP63 antibody is, however, unclear. Clearly, the significance of these observations deserves to be further investigated. As part of their strategy to colonize host phagocytes, Leishmania promastigotes alter the composition and properties of the parasitophorous vacuole (Moradin and Descoteaux, 2012; Séguin and Descoteaux, 2016). Phagosomal recruitment of the lysosomal protein LAMP-1 is a widely used marker of phagosome maturation (Huynh et al., 2007). In the case of Leishmania promastigotes, delayed phagosomal acquisition of LAMP-1 following phagocytosis supported the notion that these parasites impair phagolysosomal biogenesis (Scianimanico et al., 1999; Lerm et al., 2006; Verma et al., 2017). Interestingly, we found that the ability to inhibit the phagosomal recruitment of LAMP-1 varies significantly among our panel of L. braziliensis strains. Phagosome acidification is an important consequence of the maturation process, and we previously reported that it is efficiently inhibited by L. donovani promastigotes (Vinet et al., 2009). Similar to the recruitment of LAMP-1, we observed an important variation among promastigotes of the L. braziliensis strains tested in their capacity to inhibit phagosomal acidification. Interestingly, whereas promastigotes of L. donovani LV9 efficiently inhibited both phagosome acidification and recruitment of LAMP-1, we observed no correlation between the ability to inhibit phagosomal recruitment of LAMP-1 and phagosome acidification among the L. braziliensis strains. Previous work from our group revealed that acquisition of LAMP-1 and of the v-ATPase by phagosomes occurs through two distinct mechanisms (Vinet et al., 2009). In the case of L. donovani, LPG is the molecule responsible for inhibiting both the phagosomal recruitment of LAMP-1 and acidification (Scianimanico et al., 1999; Vinet et al., 2009). However, the ability of L. braziliensis strains to interfere with phagosome maturation does not appear to correlate with LPG levels. In addition to LPG, Leishmania promastigotes use the metalloprotease GP63 to modulate the composition and function of phagosomes through the cleavage of host proteins such as VAMP3, VAMP8, and Synaptotagmin XI (Matheoud et al., 2013; Arango Duque et al., 2014; Casgrain et al., 2016). Since VAMP8 is required for antigen cross-presentation (Matheoud et al., 2013), its cleavage by the various L.
braziliensis strains suggests that they efficiently inhibit antigen cross-presentation. Future experiments will specifically address this issue. On the other hand, the endoplasmic reticulum- and Golgi-resident SNARE Stx5 is partially cleaved, to varying extents, by our L. braziliensis strains. This SNARE regulates trafficking between the phagosome and the secretory pathway (Cebrian et al., 2011; Arango Duque et al., 2019) and contributes to the expansion of communal parasitophorous vacuoles harboring L. amazonensis (Canton and Kima, 2012). The significance of its cleavage by L. braziliensis for establishment and replication within macrophages is an issue that will deserve further investigation. In L. braziliensis, GP63 is present on chromosome 10, and strains isolated from different clinical manifestations in the same geographical region have conserved domains and display specific polymorphisms in their catalytic sites (Medina et al., 2016; Sutter et al., 2017; Quaresma et al., 2018). This variability could result in different virulence patterns and clinical outcomes. Of interest, a recent genomic analysis of Leishmania clinical isolates revealed important differences among genetically highly related Leishmania strains, including both amplification and loss of genes linked to parasite infectivity such as GP63 (Bussotti et al., 2018). Whether the diversity of GP63 levels and activity displayed by the L. braziliensis strains is the consequence of gene amplification associated with environmental adaptation is a likely possibility that deserves further investigation. Similar to LPG and GP63, GIPLs are highly expressed on the Leishmania surface (Assis et al., 2012). They are inhibitory molecules impairing NO and cytokine production by murine macrophages, and their role in phagosome maturation and intracellular survival will be assayed in prospective studies. Together, our findings underline the importance of performing functional genetic analyses with these clinical L. braziliensis strains to directly assess the importance of LPG and GP63 in the colonization of host phagocytes, and ultimately in the pathogenesis of ATL. For the past several decades, research on virulence or immune subversion mechanisms of Leishmania has been for the most part performed with reference or laboratory strains. Results obtained with those strains allowed for the discovery of several biological processes. For instance, the Th1/Th2 dichotomy and the importance of IL-4 in mediating susceptibility to infection were discovered using a particular L. major strain (Heinzel et al., 1989). However, studies using other L. major strains led to opposite results (Noben-Trauth et al., 1996). In the case of L. braziliensis ATL strains, our study revealed an unexpected diversity in terms of expression of virulence molecules and ability to interfere with phagosome maturation. Clearly, these studies highlight the fact that it is important to exert caution when drawing broad conclusions based on observations obtained with a single strain or isolate of a given Leishmania species.

DATA AVAILABILITY All datasets generated for this study are included in the manuscript and/or the supplementary files.

ETHICS STATEMENT Mice were manipulated under the guidelines of protocol 1706-07 of the Comité Institutionel de Protection des Animaux of the INRS-Institut Armand-Frappier, which respects animal handling practices promulgated by the Canadian Council on Animal Care.
Leishmania braziliensis field strains were obtained from patients living in the Xakriabá indigenous community located in São João das Missões municipality, Minas Gerais State, Brazil. Isolates from other endemic areas were obtained from the outpatient care facility at the Centro de Referência em Leishmanioses, Instituto René Rachou/Fiocruz Minas, from 1993 to 1998. Patient samples were obtained under informed consent procedures approved by the IRR Research Ethics Committee in Human Research, the National Committee for Research Ethics (Comissão Nacional de Ética em Pesquisa, CONEP; nº 355/2008), and the National Indian Foundation (Fundação Nacional do Índio, FUNAI; nº 149/CGEP/08).

AUTHOR CONTRIBUTIONS GA, TS, RS, and AD conceived and designed the study, contributed to the data analysis, and drafted and revised the manuscript. GA, TS, and KO performed the experiments. CG provided the L. braziliensis strains. GA, TS, RS, and AD wrote and revised the manuscript. All authors read and approved the final version of this manuscript.

FUNDING This work was supported by the Canadian Institutes of Health Research (CIHR) (grants PJT-156416 and MOP-125990 to AD), by the Conselho Nacional de Pesquisa e Desenvolvimento (CNPq) (grant 305065/2016-5 to RS), and by the Fundação de Amparo do Estado de Minas Gerais (FAPEMIG) (grant 00202-18 to RS). AD was the holder of the Canada Research Chair on the Biology of Intracellular Parasitism. TS was the recipient of a scholarship from FAPEMIG and from the Emerging Leaders in the Americas Program (Global Affairs Canada). GA was supported by a CIHR Banting and Best Doctoral Award.
On the preconditioning of the primal form of TFOV-based image deblurring model

To address the staircasing problem in deblurred images generated by a simple total variation (TV) based model, one approach is to use the total fractional-order variation (TFOV) image deblurring model. However, the discretization of the Euler-Lagrange equations for the TFOV-based model results in a nonlinear ill-conditioned system, which adversely influences the performance of computational methods like Krylov subspace algorithms (e.g., Generalized Minimal Residual, Conjugate Gradient). To address this challenge, three novel preconditioned matrices are proposed to improve the conditioning of the primal model when using the conjugate gradient method. These matrices are designed based on circulant approximations of the matrix associated with the blurring kernel. Experimental evaluations demonstrate the effectiveness of the proposed preconditioned matrices in enhancing the convergence and accuracy of the conjugate gradient method for solving the primal form of the TFOV-based image deblurring model. The results highlight the importance of appropriate preconditioning strategies in achieving robust and high-quality image deblurring using the TFOV approach.

Variational methods have been used in image deblurring over the last few decades to restore sharpness and clarity to blurry images. They involve formulating an optimization problem that aims to find the best estimate of the original sharp image given the observed blurry image. In variational methods, the following mathematical model is defined to describe the degradation process that caused the image blur:
z = Ku + ε, (1)

where u is the original image, z is the recorded image, ε is the noise function, and K is the blurring operator. If the blurring operator K is given, then the corresponding approach is referred to as non-blind deconvolution [1-3]. However, when the blurring operator is unknown, the corresponding approach is referred to as blind deconvolution [4-7]. In this paper, our primary focus is on non-blind deconvolution. Here K represents a Fredholm integral operator of the first kind; therefore, it is compact and the problem (1) becomes ill-posed [8-10]. Let Ω be a two-dimensional square domain. This model typically involves convolution with a blurring kernel that represents the blurring effect. The goal is to find the original image that, when convolved with the blurring kernel, closely matches the observed blurry image. The corresponding optimization problem is formulated by constructing an objective function that consists of two terms: a data fidelity term (first term) and a regularization term (second term). The data fidelity term measures the mismatch between the observed blurry image and the estimated sharp image after convolution with the blurring kernel. The regularization term J(u) encourages the restoration algorithm to produce visually desirable solutions by imposing certain constraints or promoting specific image properties.

The total variation of an image measures the amount of variation or change in pixel intensities across the image. In a blurred image, sharp edges have been smoothed out, so restoring them requires allowing large intensity jumps to survive the regularization. The idea behind TV regularization is to find an image that simultaneously fits the observed blurry image data and keeps its total variation under control. This is achieved by solving an optimization problem that minimizes a cost function combining a fidelity term, which measures the difference between the observed and reconstructed images, and a regularization term, which quantifies the total variation. Penalizing the total variation suppresses noise-induced oscillations while still permitting sharp transitions between regions of different intensities; it helps in removing blur and enhancing edges while preserving important image structures. By incorporating total variation regularization into the deblurring process, it is possible to obtain visually pleasing and sharper images, effectively reducing the impact of blur and noise in the original blurry image. While TV regularization is a widely used method in image deblurring, it does have some drawbacks. One of the major drawbacks is the staircase effect: TV regularization can introduce a "staircase" artifact, where edges appear as a series of steps rather than smooth transitions. This effect occurs because TV regularization promotes piecewise constant regions, resulting in a blocky appearance around edges instead of accurately representing their continuous nature.
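To make these ingredients concrete, the following Python sketch simulates the degradation model (1) and evaluates a fidelity-plus-regularization objective. It is a minimal illustration rather than the paper's implementation: the Gaussian kernel, the parameter values, and all function names are assumptions, and ordinary first-order forward differences stand in for the fractional-order derivatives used by the TFOV model.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_kernel(size=9, sigma=3.0):
    """Normalized 2-D Gaussian blurring kernel (illustrative choice of K)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def degrade(u, kernel, noise_std=0.01, seed=0):
    """Model (1): z = K u + eps (blur by convolution, then additive noise)."""
    rng = np.random.default_rng(seed)
    z = fftconvolve(u, kernel, mode="same")
    return z + noise_std * rng.standard_normal(u.shape)

def tv_beta(u, beta=1.0):
    """Smoothed total variation: sum of sqrt(u_x^2 + u_y^2 + beta^2)."""
    ux = np.diff(u, axis=1, append=u[:, -1:])  # forward differences in x
    uy = np.diff(u, axis=0, append=u[-1:, :])  # forward differences in y
    return np.sqrt(ux ** 2 + uy ** 2 + beta ** 2).sum()

def objective(u, z, kernel, lam=1e-6, beta=1.0):
    """Data fidelity plus regularization: 0.5*||Ku - z||^2 + lam*TV_beta(u)."""
    r = fftconvolve(u, kernel, mode="same") - z
    return 0.5 * np.sum(r ** 2) + lam * tv_beta(u)
```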
The TFOV regularization offers a powerful approach for image deblurring problems, combining the advantages of edge preservation, flexibility, noise robustness, and reduced staircase effects. Its effectiveness has been demonstrated in numerous studies and has contributed to the advancement of image deblurring techniques. However, discretizing the TFOV-based model's Euler-Lagrange (EL) equations results in a large nonlinear ill-conditioned system. Solving such systems efficiently is quite challenging for numerical methods; even powerful algorithms like Krylov subspace methods (Generalized Minimal Residual (GMRES), Conjugate Gradient (CG), etc.) exhibit slow convergence. One remedy for slow convergence is preconditioning.

Preconditioning is a technique used to transform a linear system of the form Ax = b into another system with better spectral properties. A preconditioner is a matrix P that is easy to invert and for which the preconditioned matrix P^{-1}A has well-clustered eigenvalues, because rapid convergence is often associated with a clustered spectrum of P^{-1}A. In the preconditioning technique, we solve the system P^{-1}Ax = P^{-1}b instead of Ax = b, because the new system converges rapidly when a suitable preconditioner is used. To apply the preconditioner matrix P within a Krylov subspace technique, a matrix-vector product must be calculated at each iteration; hence, evaluating this product must be cheap. In the literature, several preconditioners [26-34] have been developed for nonlinear systems. In this study, we consider the non-linear system of equations (6) derived by discretizing the EL equations associated with the TFOV-based image deblurring problem. The coefficient matrix of this system is of size 2N² × 2N², where N is the number of pixels. This matrix is nonsymmetric, ill-conditioned, dense, and huge; these properties make the development of an effective computational method more challenging. Using direct methods to solve (6) requires O(N³) operations, and hence they are not applicable here. For this system, iterative methods like Krylov subspace methods (GMRES, CG, etc.) are applicable; however, their convergence is too slow because they are sensitive to the condition number. Hence, preconditioning is needed to accelerate the convergence of the Krylov subspace methods. In this study, we propose three circulant symmetric positive definite (SPD) preconditioners for system (6). The proposed preconditioners not only increase the convergence rate of the numerical method but also contribute to the quality of the deblurred images.

The manuscript makes the following contributions: (i) it introduces an efficient and fast algorithm for solving the TFOV-based image deblurring problem; (ii) it introduces three novel preconditioned circulant matrices to address the nonlinearity and ill-conditioned characteristics of the large system in the TFOV model; (iii) it offers an improved treatment for the computationally expensive TFOV regularization functional.
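The reason circulant approximations are attractive as preconditioners is that a circulant matrix is diagonalized by the discrete Fourier transform, so applying its inverse costs only a pair of FFTs. A minimal one-dimensional sketch follows (function names are assumptions; the two-dimensional BTTB case proceeds analogously with 2-D FFTs):

```python
import numpy as np

def circulant_matvec(c, x):
    """y = C x for the circulant matrix C whose first column is c."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def circulant_solve(c, r):
    """z = C^{-1} r: divide by the eigenvalues fft(c) in Fourier space."""
    return np.real(np.fft.ifft(np.fft.fft(r) / np.fft.fft(c)))
```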
The remainder of the paper is organized as follows: section "The TFOV-model" describes the TFOV-based image deblurring model and recalls the relevant definitions of fractional-order derivatives. Section "Euler-Lagrange equations" derives the Euler-Lagrange equations of the TFOV model. Section "Discretization of the TFOV-model" presents the discretization of the model and its matrix structure. Section "Numerical solution algorithm" introduces the numerical implementation, including the proposed circulant preconditioners for the PCG (Preconditioned Conjugate Gradient) method. Section "Numerical examples" presents the numerical experiments, and section "Conclusion" presents conclusions.

The TFOV-model. The underlying function space BV^α(Ω) is equipped with the α-BV norm ||u||_{BV^α} = ||u||_{L^1} + ∫_Ω |∇^α u| dx, where α is the order of the fractional derivatives. TV^α is defined by duality over a space T of test functions, where div^α φ = ∂^α φ_1/∂x^α + ∂^α φ_2/∂y^α, and ∂^α φ_1/∂x^α and ∂^α φ_2/∂y^α are the fractional-order derivatives along the x and y directions, respectively. For φ ∈ T we write |φ(x)| = (φ_1² + φ_2²)^{1/2}, and C^ℓ(Ω, R²) denotes the space of α-order continuously differentiable functions. Hence, when the total fractional-order variation (TFOV) model is applied, the previous problem is transformed into the equivalent task of identifying a u ∈ BV^α(Ω) ∩ L²(Ω) that minimizes the functional

min_u J(u) = (1/2) ||Ku − z||²_{L²(Ω)} + λ TV^α_β(u), (7)

where TV^α_β is the modified total fractional-order variation, defined by

TV^α_β(u) = ∫_Ω ( |D^α_x u|² + |D^α_y u|² + β² )^{1/2} dx,

with D^α_x and D^α_y the fractional derivatives along the subscript directions and β > 0 employed to make TV^α_β differentiable at zero. The existence and uniqueness of a minimizer for the problem (7) have been extensively investigated, as discussed in works such as [35,36].

Fractional-order derivatives. Various definitions of fractional-order derivatives have been proposed in the literature [37,38], most notably the left- and right-sided Riemann-Liouville (RL), Grünwald-Letnikov (GL), and Caputo (C) forms. Throughout, a fractional-order derivative is represented as a function operator denoted by D^α_{[a,x]}. It is important to note that the order α is a positive non-integer; in this work, 1 < α < 2.

Euler-Lagrange equations. For the functional (7) and 1 < α < 2, the Euler-Lagrange equation reads

K*(Ku − z) + λ L^α(u) = 0, (16)

together with the associated natural boundary conditions (17). Here K* is the adjoint operator of K, and L^α(u) = −div^α( ∇^α u / ( |∇^α u|² + β² )^{1/2} ) is the fractional curvature term obtained by differentiating TV^α_β. Proof: from (7), applying a Taylor expansion of the perturbed functional and α-order integration by parts [35], and then choosing ν ∈ C¹₀(Ω, R), equation (20) reduces to (16). Case II: the boundary terms in (3) can only vanish when the boundary conditions (17) hold.

Discretization of the TFOV-model. To discretize Ω = (0, 1) × (0, 1), we divide it into a uniform N × N grid, where N is a positive integer, with nodes (x_k, y_l) for k, l = 0, 1, ..., N + 1, as in [35,36]. Assuming that u has a homogeneous Dirichlet boundary condition and utilizing the shifted Grünwald approximation approach [39,40], we approximate the α-order derivative using the coefficients ω^α_j = (−1)^j (α choose j), j = 0, 1, ..., N, where f^l_s = f_{s,l}; the weights satisfy ω^α_0 = 1 and ω^α_j = (1 − (1+α)/j) ω^α_{j−1} for j > 0. Using the homogeneous boundary condition and the Gershgorin circle theorem, we obtain B^α_N, a symmetric and negative definite Toeplitz matrix. We define the solution matrix U at the nodes (k h_x, l h_y), where k, l = 1, ..., N; the ordered solution vector of U is obtained by lexicographical ordering.
The discrete version of differentiation for an α-order derivative is then given by B^α_N acting along the x-direction, and similarly along the y-direction, where ⊗ denotes the Kronecker product [38,41]. Now, if we use a finite difference scheme and the discrete fractional derivative shown above, then (16) and (17) lead to the primal system (25). Let N_F be the number of fixed point iterations and K_h be the matrix satisfying [K_h U]_{ij,lm} = h² k(x_i − x_j, y_l − y_m). By utilizing the lexicographical order, the matrix K_h is structured as a block-Toeplitz-with-Toeplitz-blocks (BTTB) matrix. In the discrete numerical method for the matrix L^α_h(U^m), • represents the pointwise multiplication operation, and m corresponds to the m-th fixed point iteration. The matrix U is obtained by reshaping the vector u into an N × N matrix. D_1(U^m) and D_2(U^m) are diagonal matrices consisting of the element-wise reciprocals of the non-zero matrices B^α_x(U^m) and B^α_y(U^m), respectively. More details about the term L^TV_h(U^m) (29) can be found in [10]. The matrix B_h is assembled from two blocks G_1 and G_2, both of size N(N − 1) × N², each built from a difference matrix of size (N − 1) × N. The matrix H_h is a diagonal matrix whose entries are calculated by discretizing (|∇u|² + β²)^{1/2}; it is composed of blocks H_x and H_y of sizes (N − 1) × N and N × (N − 1), respectively.

Numerical solution algorithm. We describe the algorithms for solving the TFOV-based linear system (25). Before delving into the details, we present several essential properties of (25). 1. The Hessian matrix K*_h K_h + λ L^α_h(U^m) is extremely large in practical applications, and when the value of λ is small it becomes highly ill-conditioned; this is mainly due to the clustering of the eigenvalues of K around zero [10]. 2. K*_h K_h is symmetric positive definite. Although K*_h K_h is dense, K has the translation-invariant property, which allows the application of the FFT method to compute K*_h K_h u in just O(n log n) operations [10]. 3. In the TFOV model (25), the fractional matrix L^α_h(U^m) is dense, resulting in an expensive matrix-vector multiplication, whereas in the TV model (28) the corresponding non-fractional matrix is sparse. Moreover, L^α_h(U^m) is symmetric positive semidefinite [10]; consequently, the system (25) is symmetric positive definite.
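As a sketch of how the shifted Grünwald discretization can be assembled, note that the weights ω^α_j follow the recurrence stated above and fill a Toeplitz matrix shifted by one node. The following is an illustrative implementation under the stated homogeneous Dirichlet boundary condition; the function names and the dense assembly (rather than an FFT-based application) are assumptions:

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_j = (-1)^j * binom(alpha, j), computed via
    the recurrence w_0 = 1, w_j = (1 - (1 + alpha) / j) * w_{j-1}."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = (1.0 - (1.0 + alpha) / j) * w[j - 1]
    return w

def shifted_grunwald_matrix(alpha, N, h):
    """Toeplitz matrix B with B[i, j] = w_{i-j+1} / h^alpha (shift by one node),
    approximating the alpha-order derivative for 1 < alpha < 2."""
    w = gl_weights(alpha, N)
    B = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            k = i - j + 1  # single-node shift of the Grunwald stencil
            if 0 <= k <= N:
                B[i, j] = w[k]
    return B / h ** alpha
```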
Preconditioned matrices. The conjugate gradient (CG) algorithm is an appropriate iterative algorithm for solving the system (25). CG is commonly used to solve systems of linear equations and is particularly well-suited for large, sparse, symmetric positive definite matrices. It iteratively finds the solution by minimizing the residual error along conjugate directions, and it does not require explicit storage of the entire matrix, making it memory-efficient for large-scale problems. In exact arithmetic, CG converges in at most as many iterations as the size of the problem. However, the CG algorithm may converge slowly for ill-conditioned matrices, so preconditioned conjugate gradient (PCG) algorithms are often employed to improve the convergence speed. In order to achieve an efficient solution, a preconditioning matrix having the SPD property is required. In this context, we introduce three SPD preconditioning matrices based on circulant approximations. The first one is

P_1 = K̃*_h K̃_h + γ I_h,

where γ > 0, K̃_h is an approximate circulant form of K_h, and I_h is an identity matrix. The second preconditioning matrix is

P_2 = K̃*_h K̃_h + γ diag(L^TV_h(U^m)),

where diag(L^TV_h(U^m)) is a diagonal matrix with entries taken from L^TV_h(U^m) (29). The third preconditioning matrix P_3 is a product-type preconditioner,

P_3 = (K̃*_h K̃_h + γ I_h)^{1/2} (γ diag(L^TV_h(U^m)) + I_h) (K̃*_h K̃_h + γ I_h)^{1/2}.

To apply the PCG algorithm to (25), the inversion of the preconditioner matrices (P_1, P_2, and P_3) becomes necessary. The inversion of P_1 and P_2 can be easily performed since their second terms are sparse matrices, and the inversion of K̃*_h K̃_h requires only O(n log n) floating-point operations using FFTs, as explained in detail in [10]. The inversion of the preconditioning matrix P_3 involves inverting its middle term, γ diag(L^TV_h(U^m)) + I_h; since this middle term is a sparse matrix, its inversion can be done straightforwardly as well. For the inversion of (K̃*_h K̃_h + γ I_h)^{1/2}, we also require only O(n log n) floating-point operations using FFTs. The PCG method is summarized below.

Next, let the eigenvalues of K*_h K_h and L^α_h(U^m) be μ^K_i and μ^{Lα}_i, respectively, such that μ^K_i ↓ 0 and μ^{Lα}_i ↑ ∞, and let μ^{LTV}_i denote the eigenvalues of L^TV_h(U^m). The eigenvalues of P^{-1}_1 Ā and P^{-1}_2 Ā are then, formally, ratios of the form (μ^K_i + λ μ^{Lα}_i)/(μ^K_i + γ) and (μ^K_i + λ μ^{Lα}_i)/(μ^K_i + γ μ^{LTV}_i), respectively. Clearly, for γ ≡ λ, P^{-1}_1 Ā and P^{-1}_2 Ā exhibit a better spectrum when compared with the Hessian matrix Ā. An analogous computation for the eigenvalues of P^{-1}_3 Ā shows that P^{-1}_3 Ā also exhibits a better spectrum when compared with Ā.

Preconditioning techniques can help mitigate issues like noise amplification and ringing artifacts that are often encountered in deblurring processes. By carefully designing the preconditioner, the optimization process can become more stable and effective, leading to better convergence towards a higher-quality deblurred image. The preconditioners we introduce serve a dual purpose: they not only address the ill-posed characteristics of the problem but also aid in the retrieval of high-frequency intricacies within the deblurred images. This effect becomes evident through the numerical illustrations we provide.
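A generic preconditioned conjugate gradient loop of the kind summarized above is sketched below. Here apply_A and solve_P are callbacks, e.g., an FFT-based product with K*_h K_h + λ L^α_h(U^m) and one of the circulant preconditioner solves; the tolerance, iteration cap, and function names are assumptions:

```python
import numpy as np

def pcg(apply_A, b, solve_P, x0=None, tol=1e-6, maxit=500):
    """Preconditioned CG for SPD A with SPD preconditioner P.

    apply_A(v) returns A v; solve_P(r) returns P^{-1} r.
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - apply_A(x)          # initial residual
    z = solve_P(r)              # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = apply_A(p)
        a = rz / (p @ Ap)       # step length along search direction
        x += a * p
        r -= a * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = solve_P(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # conjugate update of search direction
        rz = rz_new
    return x
```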
Numerical examples. Three numerical examples for the TFOV-based image deblurring problem are now presented. Various values of N were employed, resulting in a system with N² unknowns. A MATLAB program, running on an Intel(R) Core(TM) i7-4510U CPU @ 2.60 GHz, was used for the numerical computations. The quality of the deblurred images was evaluated using PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index Measure). The ke-gen(N, r, σ) kernel [43-45] was employed for the numerical calculations.

Initial guess and parameters. The works [20,35,46-48] delve into the intricate details of automatically selecting the parameter λ, a topic that goes beyond the scope of this paper. The careful choice of the value of λ holds paramount importance in effectively eliminating both blur and noise; it is advised to avoid extreme values, whether exceedingly large or exceptionally small. In the context of the TFOV-based model, the range deemed optimal for λ lies between 1e−3 and 1e−7. As for the optimal ranges of α and β, we conducted computational experiments employing the cameraman image. Our observations indicate that the most suitable range for α lies between 1.1 and 1.5, while the optimal range for β spans from 0.6 to 1. The results of these computations are outlined in Tables 1 and 2. It is evident from our experiments that, for lower values of α, higher values of β are conducive. In our specific experimental setup, we selected α = 1.4, λ = 1e−6, and β = 1.

Example 1. The cameraman image was utilized in this study. It is a complex image consisting of both small-scale texture (peacoat) and large-scale cartoon (face) components. Figure 2 illustrates different aspects of the cameraman image, with each subfigure sized at 512 × 512. These subfigures include: (a) the exact and (b) blurry images; (c), (d), (e), and (f) the deblurred images using TFOV-based CG, P₁CG with γ = 1e−3, P₂CG with γ = 1e−4, and P₃CG with γ = 1e−5, respectively. Figure 3 illustrates the relative residual at each iteration for different values of γ. Numerical calculations were performed using the ke-gen(N, 300, 5) kernel. Additionally, it is noted that both the P₁CG and P₃CG methods achieve slightly higher PSNR and SSIM compared to P₂CG. Thus, in this example, the performance of preconditioners P₁ and P₃ proves to be more effective than that of preconditioner P₂.

Example 2. The Coins image, which consists of both real and synthetic components, was used in this example. A comparison was made between our TFOV-based algorithm and a TV-based method. Because the TV-based method generates an SPD matrix system, the CG (Conjugate Gradient) method was employed for its solution. The parameter γ was set to 1e−3 for P₁CG, 1e−5 for P₂CG, and 1e−4 for P₃CG. The stopping criterion for the numerical methods was set to a tolerance of tol = 1e−3.

Remark. 1. The comparison presented in Table 4 indicates that the TFOV-based methods (CG and PCG) achieve slightly higher PSNR and SSIM metrics compared to the TV-based CG method. This observation is further supported by Fig. 4b-f. Hence, the TFOV-based methods (CG and PCG) generate superior image quality results. 2. The effectiveness of preconditioning can be clearly observed from Fig. 5.
The number of iterations required by the TFOV-based PCG is significantly lower compared to both the TFOV- and TV-based CG methods in order to achieve the desired accuracy of tol = 1e−3. 3. Table 4 shows that the TFOV-based PCG method achieves slightly higher PSNR and SSIM values compared to the regular TFOV-based CG method, and it achieves its PSNR and SSIM in significantly fewer iterations: the P₁CG method requires only 50 iterations, the P₂CG method only 14 iterations, and the P₃CG method only 18 iterations to achieve the desired results. In contrast, the TV-based CG method and the TFOV-based CG method both require more than 50 iterations to reach their respective PSNR and SSIM values. This demonstrates that the TFOV-based PCG algorithm is faster than both the TFOV-based CG method and the TV-based CG algorithm for real and synthetic images. Additionally, the effectiveness of preconditioner P₂ surpasses that of preconditioners P₁ and P₃ (Fig. 5).

Example 3. Here, we have utilized a nontexture Moon image. The various aspects of the Moon image are illustrated in Fig. 6. Each subfigure has a size of 512 × 512. They represent: (a) the exact, (b) blurry; (c), (d), (e) and (f) deblurred images using TFOV-based CG, P₁CG, P₂CG, and P₃CG, respectively. The numerical calculations were performed using the ke-gen(N, 300, 3) kernel. To facilitate comparison, we considered three different values of N: 128, 256, and 512. The stopping criterion for the numerical methods was set to a tolerance of tol = 1e−4.

Remark. 1. The similarity between Fig. 6c-f indicates that all methods produce results of the same quality. 2. Figure 7 clearly shows that, for all values of N, the number of iterations required by PCG (P₁CG, P₂CG, and P₃CG) is significantly lower compared to TFOV-based CG in order to achieve the desired accuracy of tol = 1e−4. 3. Table 5 demonstrates that the PSNR and SSIM values obtained by the PCG method are almost identical to those achieved by the regular TFOV-based CG method for all values of N. However, the PCG method achieves these PSNR and SSIM values in significantly fewer iterations. For instance, for N = 64, the P₁CG method requires only 40 iterations, the P₂CG method only 10 iterations, and the P₃CG method only 19 iterations to attain the desired PSNR and SSIM values. In contrast, the CG algorithm requires over 100 iterations to achieve the same results. Similar observations can be made for other values of N. Thus, the PCG algorithm is faster than TFOV-based CG for nontexture images. Furthermore, the performance of all preconditioners is nearly the same (Fig. 7).

Example 4. In this example, a pair of satellite images from [49] were employed. These images were intentionally subjected to blurring and corruption by Poisson noise, leading to the presence of blurring artifacts. For the blurring process, we utilized a kernel with parameters fspecial('gaussian', 9, sqrt(3)). The introduction of Poisson noise to the images poses a significant challenge for the majority of deblurring techniques; this type of noise commonly arises in scenarios involving photon counting within various imaging modalities. At the same time, blurring is an inevitable outcome of the underlying physical principles of the imaging system, which can be conceptualized as the convolution of the image with a point spread function. For the purpose of comparison, we opted to utilize the approach of Chaudhury et al.
[49], known as the non-blind fractional order TV-based algorithm (NFOV). The restored images of the Galaxy can be observed in Fig. 8, each possessing dimensions of 256 × 256. Similarly, the restored images of Satel are depicted in Fig. 9, each sized at 128 × 128. For the NFOV method, we configured the parameters as outlined in [49]. The stopping criterion for the computational technique is a tolerance value of tol = 1e−7. Further details regarding this experiment can be found in Table 6.

Remark. By referring to Figs. 8, 9, and Table 6, it becomes evident that the outcomes produced by all techniques are nearly indistinguishable. However, our suggested PCG methods yield slightly superior PSNR values while requiring significantly less CPU time. This observation highlights the enhanced effectiveness and speed of our proposed PCG methods compared to the NFOV method.

Conclusion. We presented a numerical method (PCG) for solving the primal form of the total fractional order variation (TFOV) based nonlinear image deblurring problem. We introduced three novel circulant preconditioned matrices (P₁, P₂, and P₃) and tested them with PCG on several examples. Various types of images (real, complicated, non-texture, synthetic, and satellite) were tested using our new circulant preconditioned matrices. Additionally, we compared the TFOV-based algorithms (CG and PCG) with the TV (total variation) based algorithm and the NFOV method [49]. The convergence rates and residual norms at each iteration were provided for each example.

The numerical tests we conducted highlight the swift convergence achieved by the PCG method when employing the novel circulant preconditioners. Beyond the accelerated speed, the PCG method also exhibits efficacy in addressing the TFOV-based nonlinear image deblurring challenge. For this study, the images under consideration were grayscale; however, we anticipate extending our approach to encompass color images in future research. Additionally, we intend to apply our proposed method to images characterized by varying degrees of blurriness. Moreover, a crucial role in the deblurring process is also attributed to the ideal settings of parameters such as α and β; in the present study we analyzed these parameters empirically, and we intend to establish precise theoretical constraints on them in subsequent research.

Figure 1. Schematic illustration of the blurring kernel.
Figure 5. The TV-based CG, TFOV-based CG and PCG convergence at fixed point iteration m = 1 for Example 2. Blue asterisk: TV-based CG iterations; red asterisk: TFOV-based CG iterations; yellow circle: P₁CG; purple box: P₂CG; green line: P₃CG.
Figure 7. The TFOV-based CG and PCG convergence at fixed point iteration m = 1 for Example 3. Blue asterisk: TFOV-based CG iterations; red circle: P₁CG; yellow box: P₂CG; black line: P₃CG.
Table 3. Comparison of TFOV-based CG and PCG for Example 1.
Table 4. Comparison of TV-based CG, TFOV-based CG and PCG for Example 2.
Table 5. Comparison of TFOV-based CG and PCG for Example 3.
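For reference, the PSNR metric reported in the tables above can be computed as in the following sketch; the peak value of 255 (8-bit images) is an assumption, and SSIM is typically obtained from a library implementation (e.g., scikit-image's structural_similarity) rather than hand-coded:

```python
import numpy as np

def psnr(u_true, u_est, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    mse = np.mean((np.asarray(u_true, float) - np.asarray(u_est, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```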
Full Monte Carlo simulations of radio emission from extensive air showers with CoREAS

CoREAS is a Monte Carlo simulation code for the calculation of radio emission from extensive air showers. It is based on the "endpoint formalism" for radiation from moving charges implemented directly in CORSIKA. Consequently, the full complexity of the air-shower physics is taken into account without the need for approximations or assumptions on the emission mechanism. We present results of simulations for an unthinned shower performed with CoREAS for both MHz and GHz frequencies. At MHz frequencies, the simulations predict the well-known mixture of geomagnetic and charge excess radiation. At GHz frequencies, the emission is strongly influenced by Cherenkov effects arising from the varying refractive index in the atmosphere. In addition, a qualitative difference in the symmetry of the GHz radiation pattern is observed when compared to the ones at lower frequencies. We also discuss the strong increase in the ground area subtended by the radio emission when going from near-vertical to very inclined geometries, making very inclined air showers the most promising ones for cosmic ray radio detection.

Introduction The modelling of radio emission from extensive air showers has made great progress in the past few years. With CoREAS 1.0 [1], we have developed a Monte Carlo simulation code which takes into account the full complexity of cosmic ray air-shower physics and predicts the radio emission from charged particles in the cascade on the basis of pure classical electrodynamics. No specific "radio emission mechanism" has to be assumed, and the simulation is free of any tunable parameters. Therefore, CoREAS can be used for quantitative predictions of air shower radio emission which can be compared directly with experimental results [2,3]. CoREAS makes use of the "endpoint formalism" [4] for the calculation of the electromagnetic radiation directly in CORSIKA [5]. In this formalism, any acceleration of charged particles leads to radiation, which is then superposed for all particles in the cascade. The effects of the refractive index gradient in the atmosphere are correctly taken into account and have interesting consequences for the emission, in particular at frequencies above 100 MHz. A wealth of options provided by CORSIKA, such as the different available hadronic interaction models, the configurable atmospheric profile, and the curved geometry of the atmosphere, are fully supported. In the following, we present some results gathered with the CoREAS code. We first show the emission predicted by CoREAS for an unthinned air shower at the site of the LOPES [6] and CROME [7] experiments. Afterwards, we discuss how strongly the zenith angle of the extensive air shower influences the size of the radio emission footprint, making very inclined air showers the most interesting target for large-scale air-shower radio detection.

Simulations for an unthinned shower We have simulated a vertical air shower with an energy of 10^17 eV induced by an iron primary at the site of the LOPES and CROME experiments (110 m above sea level, geomagnetic field of 48 µT with 65° inclination). 320 antenna locations positioned on concentric rings with a radial step size of 5 metres were simulated. The interaction models used in CORSIKA were QGSJETII.03 and UrQMD1.3.1.
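For orientation, an antenna layout of the kind described above can be generated as in the sketch below. Only the total of 320 locations and the 5 m radial step are stated in the text; the split into 8 rings of 40 antennas, the flat z = 0 observation plane, and the function name are assumptions, and the actual CoREAS input format is not reproduced here.

```python
import numpy as np

def ring_positions(n_rings=8, antennas_per_ring=40, dr=5.0):
    """Observer positions on concentric rings (radial step dr, in metres).

    8 rings x 40 antennas = 320 locations, matching the total stated above.
    Returns an array of (x, y, z) coordinates centred on the shower core.
    """
    pos = []
    for i in range(1, n_rings + 1):
        r = i * dr
        for k in range(antennas_per_ring):
            phi = 2.0 * np.pi * k / antennas_per_ring
            pos.append((r * np.cos(phi), r * np.sin(phi), 0.0))
    return np.array(pos)
```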
The shower was simulated entirely without particle thinning; the presented simulation is thus truly a "full Monte Carlo simulation" of the radio emission of an extensive air shower. In figure 1, we present maps of various components of the electric field vector after filtering to the observing frequency windows used by various experiments. From top to bottom, maps for the projections of the electric field vector on the north-south, east-west and vertical axes are shown, followed by the absolute amplitude of the complete vector. From left to right, the observing frequency windows of 40 to 80 MHz (LOPES [6]), 300 to 1200 MHz (ANITA [8]) and 3.4 to 4.2 GHz (CROME [7]) are shown. To create these maps, the raw output of CoREAS, corresponding to time-pulses with an unlimited observing bandwidth, was digitally filtered to the desired frequency windows. Afterwards, the maximum amplitude of the given electric field vector component was read off and normalized by the effective bandwidth.

Emission at tens of MHz Looking at the first column of figure 1, the typical emission characteristics as observed at MHz frequencies become apparent. The signal is consistent with a superposition of Askaryan charge-excess radiation [9,10] and geomagnetic radiation [11], as is by now also evident from experimental data [12]. The former contribution has linear polarization with an electric field vector oriented radially with respect to the shower axis. The latter has linear polarization with an electric field vector aligned in the direction given by the Lorentz force, i.e., in the east-west direction for a vertical air shower. In the case discussed here, the north-south component of the electric field is thus generated purely by Askaryan emission, whereas the east-west component constitutes a superposition of the two components, leading to the well-known east-west asymmetry in the signal. The vertical component of the electric field is very small, but not exactly zero, particularly very close to the shower axis. The predicted field strengths of ≈ 2 µV/m/MHz approximately correspond to the detection threshold of the LOPES experiment in the noisy environment of the Karlsruhe Institute of Technology [3].

Emission at hundreds of MHz In the second column of figure 1, the results for a frequency window of 300 to 1200 MHz are shown. This is the frequency band of the ANITA experiment, which has reported the successful detection of several cosmic ray events in its second flight [8]. Polarization-wise, the same superposition of Askaryan charge-excess and geomagnetic emission is apparent. However, the emission pattern is strongly influenced by Cherenkov effects arising from the density gradient and thus varying refractive index of the atmosphere [13,14]. This refractive index gradient changes the coherence conditions of the radiation. For an observer on the "Cherenkov ring" visible in the maps, the time-pulses of the Askaryan and geomagnetic radio emission are compressed to very short time-scales, leading to significant power at frequencies as high as hundreds of MHz.¹ The diameter of the ring is governed by the geometrical distance of the shower maximum to the observer, and is thus directly related to the depth of the air shower maximum X_max. The spectral field strength drops about an order of magnitude with respect to the values at tens of MHz.
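The filtering step described above, band-limiting the raw unlimited-bandwidth pulses and reading off the peak amplitude normalized by the effective bandwidth, can be sketched as follows. The ideal rectangular frequency response and the function name are assumptions; the actual analysis chain may apply a different filter shape.

```python
import numpy as np

def bandlimited_peak(t, E, f_lo, f_hi):
    """Filter a raw field trace E(t) to [f_lo, f_hi] with an ideal rectangular
    filter in the frequency domain, then return the peak amplitude normalized
    by the effective bandwidth (field units per unit frequency)."""
    dt = t[1] - t[0]
    spec = np.fft.rfft(E)
    freq = np.fft.rfftfreq(len(E), dt)
    band = (freq >= f_lo) & (freq <= f_hi)
    spec[~band] = 0.0                     # zero everything outside the window
    filtered = np.fft.irfft(spec, n=len(E))
    return np.max(np.abs(filtered)) / (f_hi - f_lo)
```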
To judge the detectability, this order-of-magnitude drop has to be compared with the evolution of Galactic and atmospheric noise as a function of rising frequency. The vertical electric field component is non-zero and exhibits both a Cherenkov ring and an east-west asymmetry.

Emission at a few GHz In the third column of figure 1, we present the signal predicted for a frequency window of 3.4 to 4.2 GHz. This is the frequency range probed by the CROME experiment and other experiments originally designed to search for "Molecular Bremsstrahlung" radiation at GHz frequencies. Due to the same Cherenkov effects compressing the Askaryan and geomagnetic emission to short time-scales, significant signal levels can be visible for observers sitting on the "Cherenkov ring", although the spectral amplitude is again significantly smaller than at hundreds of MHz. In fact, CROME has detected such events [7] and made comparisons with CoREAS simulations [15]. It should be noted that the CoREAS simulation code was developed primarily with the MHz regime in mind, but it clearly has predictive power also at these much higher frequencies. While the emission at GHz frequencies is still dominated by the geomagnetic emission with the Askaryan charge-excess emission being a secondary effect (as has been verified with a simulation without magnetic field, not shown here), there are notable differences in the emission pattern as compared to lower frequencies. The Cherenkov ring is visible in the dominant east-west component of the signal, but the ring is "broken" along the north-south observer axis. Even more interestingly, the north-south component of the radio emission shows a "clover-leaf" pattern [1]. When the geomagnetic field is switched off, the clover-leaf pattern vanishes. This means that the north-south component has a contribution due to geomagnetic emission, unlike the patterns seen at lower frequencies, where the geomagnetic emission contributes purely to the east-west component of the electric field. Such a clover-leaf pattern in the north-south component of the electric field was predicted by the early "geosynchrotron emission" models [16,17]. A possible explanation is that the "geosynchrotron effect" is significant at high frequencies only, whereas it is swamped by the emission from time-varying transverse currents at lower frequencies. Other interesting features are the "ripples" visible in the emission pattern. As the shower is unthinned, these appear to be interference effects of some sort.

Zenith-angle dependence The second set of simulations we present were made for the site of the Auger Engineering Radio Array (AERA [2]), taking into account the observer altitude of 1400 m, the local geomagnetic field (with a strength of 23 µT and an inclination of -38°) and the observing frequency window of 30 to 80 MHz used by AERA. The same hadronic interaction models as detailed in the previous section were used to simulate the radio emission on a dense rectangular grid with up to 2500 observer positions. A thinning level of 10^-6 with optimized weight limitation was used for the simulations presented here. In figure 2, we show the simulated emission for proton-induced air showers with energies of 10^18 eV (top) and 10^19.5 eV (bottom). From left to right, we demonstrate the changes induced by the increase of the zenith angle of the extensive air showers. The left-most showers have a zenith angle of 30°. The emission footprint on the ground is mostly circular and falls off very rapidly.
At 10^18 eV, the detection threshold of ≈ 2 µV/m/MHz is already reached at radial distances of only 100 to 200 metres. This is consistent with the experience gathered from LOPES [6], CODALEMA [18] and AERA [2]. At 10^19.5 eV, we need to keep in mind that due to the applied particle thinning, there is a "noise floor" in the simulations, so that the spectral amplitudes do not fall below a certain value even as the lateral distance increases. A less aggressive thinning in the simulations would solve this problem. However, it is easily possible to extrapolate from the higher field strengths and conclude that the detection threshold even for 10^19.5 eV is predicted to be reached after at most 400 to 500 m lateral distance for a zenith angle of 30°. The middle showers have a zenith angle of 50°. The emission footprint becomes markedly elliptical and less steep, with lower maximum spectral amplitudes than for the more vertical geometry. The flatter lateral distribution of the radio signal is caused by the greater geometrical distance of the shower particles from the observing radio antennas as X_max (geometrically) recedes with increasing zenith angle [16,19]. The region with a detectable signal grows, however it still remains of the order of only 600 m radius at 10^18 eV and 900 m radius at 10^19.5 eV. Finally, the right-most showers were simulated with a zenith angle of 75°. The change with respect to 50° is dramatic. Even for 10^18

¹ It should be stressed that the Askaryan effect is not "classical" Cherenkov radiation in the sense of unaccelerated particles propagating through a medium with a velocity higher than the speed of light in the medium [4]. In fact, "classical" Cherenkov radiation contributes negligibly to the radio emission from particle cascades and is not modelled at all in CoREAS and comparable models.

Conclusion With CoREAS, a powerful Monte Carlo simulation code for the simulation of radio emission from extensive air showers is openly available. It is being used already by many experiments such as LOPES, LOFAR, AERA, Tunka-REX, ANITA and CROME, and is freely available upon request from the authors. We have demonstrated the capabilities of CoREAS with simulations for an unthinned vertical air shower, studying the evolution of the emission pattern with increasing frequency. Cherenkov effects lead to a time-compression of the Askaryan and geomagnetic emission, making the air shower radio signal observable at high frequencies for observers sitting on the Cherenkov angle. At GHz frequencies the emission characteristics change noticeably, with a clover-leaf pattern appearing which might be related to the similar patterns observed in early "geosynchrotron emission" calculations. CoREAS is also a very powerful tool for simulation studies aiming at optimizing the design of future radio-detection experiments. We have demonstrated that the footprint of the radio signal at 30 to 80 MHz stays small even at high primary particle energies for showers with zenith angles of up to 50°. However, for zenith angles of 75° or higher, the footprint becomes extremely large, which means that a sparse radio detector array should readily be able to detect radio emission from air showers at very inclined geometries.
In addition, particle detectors mostly measure the muonic component of very inclined air showers, and radio detectors give a pure measurement of the electromagnetic component, making their combination a potentially very powerful tool for cosmic-ray composition studies in particular for very inclined air showers.
Preschool Education in Belize: Research on the Current Status and Implications for the Future

Early childhood education is the foundation of a child's education. Belize is a developing country and is striving towards the development of its educational system. This study describes the status of preschool education in Belize using three methods of collecting data: survey, observations, and interviews. A national survey described the education, qualifications, and experience of preschool teachers and directors/principals. Three instruments, the Early Childhood Environment Rating Scale-Revised (ECERS-R), the Early Childhood Environment Rating Scale-Extension (ECERS-E), and the Caregiver Interaction Scale (CIS), were scored in 41 preschools to describe the physical environment, curriculum implementation, and teacher-child interaction, respectively. A modification of the ECERS-R and ECERS-E, named the Early Childhood Environment Rating Scale for Belize (ECERS-B), was created. The modified version is an instrument that is compatible with the culture of Belize. The quality of preschool environments was minimal, with evidence of positive teacher-child interaction. A one-way ANOVA indicated significant differences with teacher education level, suggesting that teachers with education qualifications higher than a high school diploma had higher scores on the ECERS-B. Finally, four major stakeholders were interviewed to find out their perceptions of early childhood education in Belize. Belize has implemented initiatives to improve the quality of early childhood education. Recommendations for policy, practice, and future research are suggested.

Introduction At the time of the research, the problem concerning early childhood education in Belize was that little was known about its current status. Limited, if any, empirical studies had been conducted directly related to early childhood education in Belize. Preschools are typically rated on two dimensions of quality: process quality and structural quality. Process quality consists of the experiences that occur in the preschool settings, such as child-teacher interactions, activities, learning opportunities, health and safety routines, and the relationship maintained with parents. Structural quality consists of the group size of children in the setting, space availability for children to move around, the adult-child ratio, teachers' income, and the education and training of teachers and support staff [1]. Minimal, if any, information was known about the process quality and the structural quality of the preschools in Belize. Preschools were not equally distributed around the country of Belize. Preschool programs benefit young children of low-income background [2]. In Belize, the rural areas, especially those of Stann Creek, Toledo, and Orange Walk, are the poorer areas in the country, and the low numbers of preschools in these areas do not adequately serve them. Many preschools were privately managed, especially those in the urban areas. This meant that young children from lower socio-economic backgrounds did not have an opportunity to attend preschools due to the high cost of attending. The registration fee and monthly tuition varied among the different institutions. For example, in 2006, one school charged a registration fee of BZE $120.00 per term (a term is equivalent to one semester, and three terms are in a 1-year program) plus a tuition of BZE $10.00 per week.
Another school in the same location charged a registration fee of BZE $80.00 per term and a monthly tuition of BZE $40.00. Preschool directors/owners were free to determine the fees and tuition for attending their schools. No official Early Childhood Education Policy existed in Belize. A draft copy was in progress, but the need for an official policy was urgent if the intention was to improve the quality of preschool education in Belize. Belize had a well-defined education structure adopted from England. The church-state system that operated and managed the primary school education system was stable and exerted much effort to educate children from 5 to 14 years of age. The high school system was well-defined and maintained a similar church-state system to provide children with a high school diploma. At least one tertiary level institution was established in each district around the country to provide higher level education degrees to those who could afford it or had interest in pursuing a higher level degree. The one education area which may be considered the most crucial but lacked a vivid definition concerning its current status was preschool education, hence the purpose of this research study.

Purpose and Research Questions Prior to this research, little was known about the current status of preschool education in Belize. No empirical studies had been conducted that reported the demographic data of preschool teachers and the structural and process quality of preschool programs in Belize. Many preschools were private institutions that were unevenly distributed around the country and did not cater to the young children they benefit the most: students of low-income backgrounds. The purpose of this research study was to describe the current status of preschool education in Belize. This descriptive study was done by addressing five specific questions that were derived from one general question. The general question addressed was the following: What is the current status of preschool education in Belize? This question was answered by five specific questions. 1) What are the education, qualifications, and experiences of preschool teachers? 2) What is the quality of the internal and external structural environment of preschools? 3) What is the quality of the curriculum, the instructional strategies, and the activities used by preschool teachers? 4) What is the quality of social interaction between caregivers and young children? 5) What are the perceptions of preschool education by major stakeholders?

Methodology A survey that was previously created [3] was modified for the cultural aspects of Belize and was used to collect demographic data (such as education, qualifications, and experiences) from 100 preschool teachers and directors/principals. Forty-one (41) preschool centers from the six different districts in the country were selected via stratified random sampling to collect data using three instruments: the Early Childhood Environment Rating Scale-Revised (ECERS-R) [4], the Early Childhood Environment Rating Scale-Extension (ECERS-E) [5], and the Caregiver Interaction Scale (CIS) [6]. These instruments were used to collect data on the physical environment, curriculum strategies and activities, and teacher-child interaction, respectively. Ten open-ended questions were developed to interview major stakeholders in administrative roles in the area of early childhood education.
Results

The results from the ECERS-R, ECERS-E, the ECERS-B, and the CIS are reported below. The ECERS yielded an overall mean score of 2.71 (α = .88) for the sample of 41 preschools observed in Belize. A Pearson correlation test indicated that teachers with higher education (above a high school diploma) scored higher on the items assessed than teachers with lower education (a high school diploma or below).

Cultural Modifications of the ECERS

After modification of the ECERS, the ECERS-B (Early Childhood Environment Rating Scale for Belize) was developed. The ECERS-B factored out all the culturally inappropriate items. The overall score increased to 3.02 (α = .75). This instrument is suitable for use in Belize.

CIS Results

Because the CIS was not designed to yield an overall score, separate alpha coefficients and mean scores were obtained for each subscale: positive relationship, punitiveness, permissiveness, and detachment.

For the positive relationship subscale, a high score indicated good quality of teacher-child interaction, 1 being the lowest and 4 being the highest. The mean score of the positive relationship subscale (M = 2.60, SD = 0.56) indicated that some positive relationships existed between teachers and children in the 41 classrooms that were observed. One item (when children misbehave, explains the reason for the rule they are breaking) on this subscale had a very low score (M = 1.71, SD = .60), suggesting that teachers rarely explained the rules children were breaking. One item (talks to the children on a level they can understand) had a high mean score (M = 3.02, SD = .69), indicating that teachers spoke to the children at a level that the children understood. Scores on the other items in this subscale were close to the subscale mean.

Low scores on the punitiveness subscale indicate good quality of teacher-child interaction. The mean score for this subscale was 1.98 (SD = 0.58), indicating that, in general, teachers in the 41 classrooms were not harsh, hostile, or overly controlling in their interactions with the children. One item (places high value on obedience) had a high mean score of 3.66 (SD = .53), suggesting that, generally, preschool teachers in the observed classrooms expected the children to obey them most of the time. One other item (seems to prohibit many of the things the children want to do) had a mean score of 2.71 (SD = .93), indicating that the preschool teachers of the 41 classrooms frequently prohibited many of the things that the children wanted to do. The other six items had mean scores in the range of the subscale mean.

The permissiveness subscale had a mean score of 2.10 (SD = 0.53), suggesting that the preschool teachers who participated in the observations frequently avoided disciplining the children even when firmness was necessary. Lower scores on this subscale indicate better quality of teacher-child interaction.
One item (expects the children to exercise self-control) had a very low mean score of 1.56 (SD = .71). The other three items of the subscale had mean scores close to the subscale mean.

Lower scores on the detachment subscale also indicate better quality of teacher-child interaction. This subscale's mean score of 1.71 (SD = 0.49) suggested that teachers who took part in the observations did not seem to be distant from the children. Generally, teachers spent much time with the children. One item (doesn't seem interested in the children's activities) had a mean score of 2.32 (SD = .82), indicating that teachers demonstrated some disinterest in children's activities. The other four items of the subscale had scores in proximity to the subscale mean.

Inferential Statistical Analysis of the ECERS-B

After the development of the ECERS-B, a Pearson correlation test was conducted (a) to determine the significant correlations between the subscales and the total score, (b) to determine the significant correlations of the ECERS-B subscales and total score with the CIS subscales, and (c) to determine the significant correlations of the ECERS-B and CIS subscales with six variables: rural or urban area, center status, number of children, ethnic identification, teacher experience, and teacher education level. A Pearson correlation test was also conducted with the CIS to determine evidence of statistically significant correlations between its subscales. Finally, after the results of these tests indicated that teacher education level correlated significantly with the ECERS-B and CIS subscales, an analysis of variance (ANOVA) test was conducted to determine how much of the variance could be accounted for by teacher education.

The ECERS-B was developed from the ECERS-R and ECERS-E after modifying items in the subscales to create five robust subscales with acceptable reliability estimates for scores on each subscale (space, α = .56; activities, α = .76; interaction, α = .79; math and science, α = .71; and language and literacy, α = .69) and for the total instrument (α = .75). The Pearson correlation test indicated that most of the subscales correlated significantly at p ≤ .01. The interaction subscale had a weak but statistically significant relationship with the math and science subscale, r = .298, p ≤ .1. The space subscale had no statistically significant correlations with the interaction subscale or the language and literacy subscale. The total ECERS-B correlated significantly with the five subscales at p ≤ .01. The total ECERS-B had a very strong, positive correlation with the activities subscale, r = .896, p ≤ .01. The interaction, math and science, and language and literacy subscales had strong positive relationships with the total ECERS-B, and the space subscale had a moderate relationship, r = .560, p ≤ .01.

ECERS & CIS Correlated

The ECERS-B subscales and total were correlated with the CIS. The results indicated that the CIS positive relationship subscale had statistically significant correlations with the total ECERS-B and four of its subscales (activities, interaction, math and science, and language and literacy). Two subscales, interaction and language and literacy, had very strong, positive relationships, r = .840, p ≤ .01 and r = .833, p ≤ .01, respectively. The activities subscale and the math and science subscale had moderate correlations, and the total ECERS-B had a high correlation.
The space subscale had no statistically significant correlation with the CIS positive relationship subscale.

The CIS detachment subscale had moderate, negative correlations with the ECERS-B subscales, except for the space subscale. The total ECERS-B also had a moderate, negative correlation with the CIS detachment subscale, r = -.594, p ≤ .01. The negative correlations were expected for this subscale because lower scores indicated better teacher-child interaction.

The CIS punitiveness subscale had the expected negative correlations with the ECERS-B subscales, except for math and science. The space and activities subscales were negatively correlated, but not to a statistically significant degree. The interaction subscale and the language and literacy subscale had moderate correlations, r = -.562, p ≤ .01 and r = -.434, p ≤ .01, respectively. The total ECERS-B had a weak but statistically significant correlation, r = -.293, p ≤ .1.

The CIS permissiveness subscale was correlated to a statistically significant level with the interaction subscale of the ECERS-B, r = .311, p ≤ .05. This was an unexpected weak, positive relationship. Although the space subscale and the math and science subscale were negatively correlated with the permissiveness subscale, these correlations were not statistically significant. The total ECERS-B was not significantly correlated with the permissiveness subscale.

ECERS & CIS Variables Correlation

The ECERS-B and CIS subscales were correlated with six independent variables: rural or urban area, center status, number of children, ethnic identification, teacher experience, and teacher education level. Of the six variables, teacher education level was the only variable that had statistically significant correlations with different subscales of the ECERS-B and the CIS. This is worth reporting because the mean scores of teachers with more than a high school diploma were better than the mean scores of teachers with a high school diploma or less, suggesting that preschool teachers with higher education provide better quality of instruction and care in the preschool classroom. This positive relationship is supported in the literature [7,8].

Two of the five ECERS-B subscales had weak but statistically significant correlations with teacher education level: interaction (r = .294, p ≤ .1) and language and literacy (r = .264, p ≤ .1). The total ECERS-B also had a weak but statistically significant correlation with teacher education level (r = .269, p ≤ .1). The CIS positive relationship subscale had a weak but statistically significant correlation with teacher education (r = .328, p ≤ .05). Teachers with more than a high school diploma had a higher mean score than teachers with a high school diploma or less, suggesting that teachers with a higher level of education have more positive interactions with young children [9]. The CIS detachment subscale had an expected negative correlation that was statistically significant (r = -.437, p ≤ .01). The mean score of teachers with more than a high school diploma was lower than that of teachers with only a high school diploma or less. This was expected because lower scores on punitiveness, permissiveness, and detachment indicated better teacher-child interaction. Although teachers with more than a high school diploma had a lower mean score on the punitiveness subscale, the difference was not statistically significant.
Teachers with more than a high school diploma had a higher mean score on the permissiveness subscale, but this difference was not statistically significant.

CIS & Pearson Correlation

A Pearson correlation test was conducted with the CIS subscales. Positive relationship had a moderate, negative correlation with punitiveness and a strong, negative correlation with detachment; both were statistically significant. Punitiveness had a strong, negative correlation with permissiveness that was also statistically significant. No other statistically significant correlations were present.

ECERS & CIS ANOVA Test

After teacher education was established to be a variable that had statistically significant correlations with the ECERS-B subscales, the total ECERS-B, and the CIS subscales, an ANOVA test was conducted to test mean differences using teacher education level as the independent variable. Eta squared was calculated to determine how much of the variance could be accounted for by teacher education. The one-way ANOVA test was completed using alpha = .10. Regarding the CIS subscales, teacher education accounted for 11% of the variance in the positive relationship subscale (F = 4.715, df = 1/39, p ≤ .05) and 19% of the variance in the detachment subscale (F = 9.207, df = 1/39, p ≤ .01). The effect for punitiveness was not statistically significant, but 7% of the variance was accounted for by teacher education.

In summary, the ECERS-B is a scale that was developed from the ECERS-R and the ECERS-E. It is a useful instrument that yields acceptably reliable scores using Belizean data. Pearson correlation tests indicated statistically significant correlations among the subscales, the total score, and the CIS subscales. After the instrument was tested against six independent variables, the scores indicated that teacher education level was the variable that correlated significantly with the ECERS-B subscales, the total, and the CIS subscales. Teacher education also accounted for a reasonable amount of the variance. The ECERS-B may be used with confidence in the Belizean culture. (A computational sketch of the reliability, correlation, and ANOVA calculations used in this section appears below.)
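The following minimal sketch shows how the three statistics this section relies on, an internal-consistency estimate (Cronbach's alpha), a Pearson correlation, and a one-way ANOVA with eta squared, can be computed in Python with NumPy and SciPy. The item scores and the teacher-education grouping are randomly generated stand-ins, not the study's data, and cronbach_alpha is our own helper name, not a library function.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical item-level data: 41 classrooms rated on 5 items of one
# subscale (1-7 scale, as on the ECERS); not the study's actual data.
items = rng.integers(1, 8, size=(41, 5)).astype(float)

def cronbach_alpha(item_scores):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total))."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

subscale_total = items.sum(axis=1)
teacher_edu = rng.integers(0, 2, size=41)  # 0 = HS diploma or less, 1 = above

# Pearson correlation between subscale score and education level
r, p = stats.pearsonr(subscale_total, teacher_edu)

# One-way ANOVA with teacher education as the grouping variable,
# plus eta squared = SS_between / SS_total
groups = [subscale_total[teacher_edu == g] for g in (0, 1)]
F, p_anova = stats.f_oneway(*groups)
grand = subscale_total.mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_total = ((subscale_total - grand) ** 2).sum()

print(f"alpha={cronbach_alpha(items):.2f}  r={r:.2f} (p={p:.3f})  "
      f"F={F:.2f} (p={p_anova:.3f})  eta^2={ss_between / ss_total:.2f}")
```

Eta squared here is simply the between-group sum of squares divided by the total sum of squares, which is why it can be read directly as the share of variance attributable to teacher education.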
Major Conclusions

From this study, several conclusions can be drawn from the data and analysis. These conclusions address a move toward the improvement of process quality and structural quality in an effort to develop the early childhood education programs in Belize.

The quality of preschool education offered in the rural and urban areas and the status of preschools (whether public, private, community, or government-aided) did not differ. Different kinds of schools in rural and urban areas had similar scores on the ECERS-R, ECERS-E, and CIS. This can be explained by the fact that a standardized curriculum, a standardized schedule, and a standardized list of items for display are recommended by the National Preschool Unit. The district coordinators oversaw all preschools to provide support in their respective districts. This is a positive step toward improving quality.

The major stakeholders commented on child-centered instruction as a way to provide quality preschool education to young children. The national curriculum promoted child-centered activities. This mode of instruction encourages children's self-development and self-expression. Child-centered activities are a suitable means of learning to use in preschool classrooms with young children in Belize.

The preschool classroom environment should be stimulating to young children. It should be equipped with child-sized furniture, health and safety measures, sensory materials, activity centers, and colorful displays to capture children's attention and hold their interest. Efforts should be made to furnish the preschool classrooms with sufficient materials to accommodate all children and allow them to engage in visual, auditory, and kinesthetic explorations.

The CIS provided a good measure of teacher-child interactions in Belizean preschools. It had acceptably high reliability scores for each subscale and correlated significantly with teacher education. The CIS positive relationship subscale indicated that some positive teacher-child interactions occur in the classrooms. Although teachers lacked qualifications in early childhood education, teachers with many years of teaching experience and training in early childhood workshops accounted for the positive scores on the positive relationship subscale. The CIS is an appropriate measure for collecting data on teacher-child interactions in preschool classrooms in Belize.

The recent appointment of district coordinators to assist preschool teachers in their respective districts and the attachment of preschools to the public schools are two initiatives to improve the quality of preschool education in Belize and to provide access. The major stakeholders spoke positively of the district coordinators and the attachment program. These programs are important steps toward improving the quality of and access to preschool education in Belize and should be continued.

The lack of early childhood education training may affect the quality of preschool programs provided by teachers. According to research, teachers with higher qualifications provide higher quality early childhood care and interaction [10-16].

The ECERS-R and the ECERS-E did not provide a good measure of the quality of preschool environments in Belize because of the cultural setting and the standardized practices performed by teachers. But after these two instruments were modified and the ECERS-B was developed with five robust subscales, the ECERS-B measured well because it was uniquely created to accommodate the Belizean culture. The ECERS-B subscales and total had acceptable reliability scores. The total mean score of the ECERS-B suggests minimal quality of the preschool environments in Belize. This minimal quality can be accounted for by two factors: the need to improve teacher qualifications in early childhood education and the lack of resources in the classrooms. The scale may be considered and used as an alternative measure of preschool quality in Belize.

Recommendations for Policy and Practice

During their interviews, the major stakeholders indicated little knowledge of the draft early childhood education policy for Belize. I recommend that a committee be formed to complete the draft policy, publish the document, distribute it to each preschool center, and provide easy access to it. The document should contain guidelines for preschool teachers, guidelines for a healthy and safe preschool environment, and the procedures and stipulations for licensing, opening, and operating a preschool program.

Many research studies [10-16], as well as this study, indicated that teachers with higher education levels provide better quality preschool education to young children.
I recommend that provisions be made to allow preschool teachers in each district the opportunity to acquire at least an associate degree in early childhood education at a reasonable cost. After a reasonable period, a policy should be developed that stipulates an associate degree in early childhood education as a requirement for certifying preschool teachers before they take responsibility for a group of children in any preschool classroom.

The National Preschool Unit currently provides annual workshops for teachers. These workshops are conducted to assist preschool teachers in the development of visual aids, games, puzzles, and materials to use with young children. I recommend that workshops also be conducted to address topics such as safety practices, health practices, toileting procedures, routine procedures, informal communication, logical relationships, and the concept of play as a learning strategy. For their attendance, I recommend that teachers acquire credits toward the continuation of their preschool teacher certification.

One major stakeholder mentioned that she works hard to assist teachers who have problems with concept mapping and lesson planning. Preschool teachers also commented that they spend much time developing and writing lesson plans rather than preparing activities for classroom lessons. There is a national curriculum, but its content is very broad, and teachers spend too much time trying to put together concept maps and lesson plans. I recommend that a team of early childhood professionals develop detailed curriculum guides with goals for the month, objectives for the week, suggested activities for each day, and suggested materials. Such a comprehensive curriculum would allow teachers to spend more time preparing and organizing activities for daily lessons rather than working out concept maps and writing lesson plans.

Young children need attractive materials that capture and hold their attention and stimulate their learning. I recommend the establishment of a program to develop the physical environment of preschools by locating appropriate physical resources, such as books, materials for fine motor development, and materials for gross motor development.

District coordinators visit preschool classrooms in their respective areas to observe, evaluate, and support teachers. Teachers are provided with a written evaluation of their performance during the observations. Preschool teachers also need to know the quality of their classroom environment. The use of the ECERS-B can provide this information to teachers. I recommend that the district coordinator or an observation team use the ECERS-B to measure the environmental quality of preschool classrooms. The scores for each item, each subscale, and the total could be reported to each preschool teacher as indicators of the quality of their preschool environment. The scores would point to specific areas of strength and weakness. Teachers could then use them to address specific areas for the improvement of their classroom environment. I recommend that these observations occur annually so that preschool teachers, district coordinators, and the National Preschool Unit can track the improvement of each preschool environment.

Preschool teachers are observed in their classrooms on a regular basis and evaluated on their performance annually by district coordinators.
Workshops are conducted nationally and at the district level to address concerns observed during these visits. Preschool teachers are provided with strategies for improving the presentation of lessons, but telling preschool teachers what to do to improve and demonstrating it are two different things. I recommend that a coaching program be developed to assist preschool teachers in the classroom. Coaches can demonstrate effective early literacy strategies, early math and science strategies, and strategies for early physical development with preschool children in the classroom. Preschool teachers would observe and then discuss the lesson presentation with the coach. The coaches would provide interactive rather than directive support.

Attaching a preschool program to the public schools, especially in the rural villages, provided access to young children of low-income backgrounds. Few preschools were attached to public schools in the urban areas, yet many children of low-income families live there. I recommend that more preschool programs be attached to the primary schools in the urban areas. In the long term, this will help to alleviate the poverty situation in the towns and major cities of Belize.

Recommendations for Future Research

Many opportunities for future research in the field of early childhood education are available in Belize. The ECERS-B was developed after modifying the ECERS-R and ECERS-E, and it proved reliable for the data collected during this study. I recommend that the ECERS-B be tested in future research to measure the quality of preschool environments with a larger sample in Belize and to test its reliability with this larger sample. The ECERS-B should also be tested in other developing countries with similar early childhood conditions.

Several research studies have indicated that children who attend a preschool program perform better than those who do not. It would be interesting to go beyond this notion and find out the influence of the physical environment on children's learning in Belize. Because the ECERS-B was developed with cultural considerations to measure preschool environments in Belize, I recommend that the instrument be considered as a quality measure and that studies be conducted to examine its relationship to student achievement.

No standardized measures to test and track the development of young children exist in Belize. I recommend that major stakeholders take the initiative to identify appropriate instruments to trace the development of young children in different areas such as literacy development, number awareness, and critical thinking. When necessary, these instruments should be modified to accommodate the Belizean culture. I recommend that the data be kept over a period of years for longitudinal studies.

This research indicated that teacher education had statistically significant correlations with the ECERS-B and accounted for a low but significant percentage of the variance. Note that the teachers in this study did not have education qualifications in early childhood. Since August 2007, only one early childhood education associate degree pilot program has been offered to teachers, by one institution. I recommend that after the first year an experimental design study be conducted to find out the effectiveness of the program. This can be done by using the ECERS-B or a similar appropriate instrument.
Such a study would find out whether teachers in the pilot program in Belize provide higher quality preschool education than teachers in a control group. A similar study can be done after the graduation of the first cohort to find out the effectiveness of the associate degree program in Belize. Research studies also suggest that highly qualified teachers with early childhood education qualifications provide higher quality preschool programs. I recommend that, after the graduation of the first cohort in the early childhood education pilot program, a study be conducted to compare the performance of teachers who hold an associate degree in early childhood education with that of teachers who hold a higher degree in an area other than early childhood education.

Conclusions

In the early 1990s, I saw the importance of early childhood education when I wrote my thesis in Belize and developed a passion for the area. All the preschools were private, and few, if any, existed in the villages. The results of that study suggested how important a preschool education is to the cognitive, social, and physical development of young children. I was impressed with the effects of a preschool program on my sample and was also concerned that not all young children had access to the opportunity. Looking back after 18 years, I must say that I am very proud of how far my country has progressed regarding preschool education programs. I am delighted that I took the opportunity to conduct this study describing the current status of preschool education in Belize.

When I first explored the idea of this national study, I was a little uneasy about what I would find. I did not want to submit a report with all negative statements describing the status of preschool education to my colleagues in Belize. As I conducted my observations, I was impressed by the efforts being made to provide access to young children and to improve the quality of preschool education in Belize. I was ecstatic.

The first program that impressed me was the attachment program that the government had recently initiated. Attaching preschools to the public primary schools was a great way to provide access to young children, especially those in the lower-income areas, the villages. The second program that astonished me was the preschool district coordinator program that commenced in September 2007. The major stakeholders did not simply decide to establish more preschool programs without support. Rather, they provided coordinators in each district to monitor teachers and to assist them in developing the quality of their programs.

Getting deeper into the study with the observations and the interviews, I was amazed by the role and involvement of the National Preschool Unit in developing the quality of early childhood programs. The development of a standardized curriculum, a standardized classroom routine schedule, and a standardized list of basic display charts and materials for each preschool classroom was impressive. This standardization was applied to all preschool classrooms around the country, rural, urban, public, and private, along with annual workshops. This was impressive and brought to my attention that it may have been the reason why the scores on the observation instruments did not differ by area and type of preschool. Then, to top it off, an associate degree pilot program in early childhood education commenced in fall 2007.
Although the program caters to a small group of early childhood education teachers concentrated in one district, it is a great step toward getting preschool teachers qualified. Increasing the number of teachers in the program and extending it to the other five districts would be the next step to increase the rate at which preschool teachers become qualified. Although the quality of the preschool environments scored in the minimal range because of the lack of financial and physical resources, these four positive initiatives will improve the quality of preschool education in Belize. With a conscious effort to promote and provide safe, healthy, and attractive learning environments for young children, preschool education will continue to develop to the point that it can be compared with the standards and quality of preschool programs in other countries around the world. The process used here might be applicable to developing countries in similar circumstances.

One major stakeholder emphasized, "I have seen a lot of improvements. And when I say a lot of improvements, I mean a lot of improvements because the teachers, the teachers are excited and I am excited …." I must say that I too am extremely excited about the direction of the progress and development of preschool education in Belize, and I can see why preschool stakeholders were excited about the program. The country has recognized the importance of preschool education and has embarked on initiatives to develop it. It would be my honor to participate in this endeavor. This attention to early childhood education has the potential to take Belize's economy and social structure to the next level. This is certainly something to be excited about and to celebrate.
Moving Beyond Cardio: The Value of Resistance Training, Balance Training, and Other Forms of Exercise in the Management of Diabetes

IN BRIEF Traditionally, aerobic training has been a central focus of exercise promotion for diabetes management. However, people with diabetes have much to gain from other forms of exercise. This article reviews the evidence and recommendations on resistance, balance, and flexibility training, as well as other, less traditional, forms of exercise such as yoga and Tai Chi.

Health-related physical fitness is "the ability to perform daily tasks with vigor and alertness, without undue fatigue and with ample energy to enjoy leisure-time pursuits and meet unforeseen emergencies" (1). Although cardiorespiratory capacity is a key part of physical fitness, improving and maintaining other components of fitness, such as strength, flexibility, and balance, are also of primary importance to health and longevity. Muscular fitness has tended to receive less attention than cardiorespiratory fitness as a prescription for improving overall and diabetes-related health. Activities that improve muscular fitness can slow the age-related loss of muscle mass (sarcopenia), improve mobility, and enhance functional status, all of which are significant health benefits to the aging population with diabetes (2,3).

American Diabetes Association (ADA) guidelines (4) recommend that people with diabetes accumulate at least 150 minutes/week of aerobic exercise, plus at least two sessions per week of resistance exercise (strength training). Most adults with type 2 diabetes fail to meet the minimum recommended level of aerobic activity, and even fewer meet the recommendations for muscular fitness. For example, in one population-based survey (5), 55% of respondents reported engaging in walking for exercise, whereas only 12% reported engaging in resistance training. The purpose of this article is to highlight the importance and benefits of forms of exercise training other than aerobic exercise, in hopes of moving beyond a cardio-centric view of promoting exercise in adults with diabetes. Muscular fitness activities can include resistance, balance, and flexibility training, as well as other alternative forms of exercise such as yoga and Tai Chi.

Resistance Training

Resistance, or strength, training increases muscular fitness, which includes both muscular strength and muscular endurance. Muscle strength is the ability of the muscle to exert force, whereas muscle endurance is the ability of the muscle to continue to perform without fatigue (1). This form of training includes exercises performed using weights, weight machines, resistance bands, or one's own body weight as resistance (e.g., pushups and squats). Regular resistance training can increase muscular strength, muscular endurance, and functional capacity and has been shown to improve musculoskeletal health, maintain independence in performing daily activities, and reduce the possibility of injury (6-8). Low muscular strength is associated with greater risks of disability, morbidity, and mortality (9). In a large cohort study, men with low muscular strength had increased all-cause mortality (23%) (10), cancer mortality (23%) (10,11), and cardiovascular disease (CVD) mortality (29%) (10) compared to men with higher strength.
Moreover, diabetes is an independent risk factor for low muscular strength (12) and for accelerated decline in muscle strength and functional status over time (13), highlighting the need to promote the preservation of strength within this population.

Biological aging is typically associated with a loss of lean body mass, particularly of skeletal muscle. Observational studies indicate that there is a gradual reduction in skeletal muscle and strength starting in the third decade of life, with a more rapid absolute decline after the fifth decade (14,15). Older patients with type 2 diabetes have an accelerated decline in muscle mass and strength compared to age-matched nondiabetic control subjects (16,17). Resistance training can delay, prevent, and, in some cases, reverse the effects of sarcopenia (18,19). Furthermore, resistance training can promote the maintenance of muscular strength and enhance mobility and functional independence further into old age (3,20). Resistance training has been shown to increase lean muscle mass (21) and to prevent or limit the loss of lean body mass in individuals who lose weight (22), and other studies have demonstrated that resistance training can improve bone mineral density (23,24), contributing to the prevention of osteoporosis. In a systematic review, Gordon et al. (25) reported that, of seven randomized, controlled trials (RCTs), all but one reported strength improvements of at least 50% in subjects with type 2 diabetes after completing resistance training.

Many types of exercise can acutely improve insulin action in people with diabetes, but resistance training can be particularly beneficial over time because of its ability to increase and maintain muscle mass. The main tissues in the body that are sensitive to insulin are muscle and adipose cells; by increasing the quantity and insulin sensitivity of skeletal muscle with resistance exercise, most individuals can better manage blood glucose levels and body weight (20). Several studies have demonstrated that resistance training can improve glycemic control in people with type 2 diabetes. A systematic review of RCTs concluded that resistance training improves glycemic control (as reflected by reduced A1C), decreases insulin resistance, and increases muscular strength in adults with type 2 diabetes (25). In a more recent meta-analysis, Umpierre et al. (26) reported a 0.57% reduction in A1C in four studies comparing resistance training alone to a control (a sketch of how such pooled estimates are computed appears at the end of this subsection). Such training improves overall glycemic control and insulin sensitivity through a number of training adaptations, including increased levels of glucose transporter 4, insulin receptors, protein kinase B, glycogen synthase, and glycogen synthase total activity in trained muscles after acute training (27,28).

There have been relatively few randomized trials of resistance training in people with type 1 diabetes. D'Hooge et al. (29) randomized 16 children to 20 weeks of combined aerobic and resistance training or no exercise training. They found no impact on A1C but substantial reductions in insulin dose in the group randomized to exercise. Ramalho et al. (30) randomized 13 adolescents and adults with type 1 diabetes to either aerobic exercise (n = 7) or resistance exercise (n = 6). A1C decreased 0.6 percentage points in the resistance training group and increased 1.1 percentage points in the aerobic training group, but the difference was not statistically significant. Insulin doses were reduced in both groups, suggesting decreased insulin resistance.
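Pooled estimates such as Umpierre et al.'s 0.57% A1C reduction are typically inverse-variance weighted mean differences. The sketch below illustrates the basic fixed-effect calculation in Python; the per-trial effects and standard errors are invented for illustration and do not come from any of the cited meta-analyses.

```python
import math

# Hypothetical per-trial A1C mean differences (exercise minus control, in
# percentage points) with their standard errors; not the cited trial data.
trials = [(-0.4, 0.25), (-0.7, 0.30), (-0.5, 0.20), (-0.6, 0.35)]

def pool_fixed_effect(effects):
    """Inverse-variance weighted mean difference with its 95% CI.
    weight_i = 1 / se_i**2; pooled SE = sqrt(1 / sum(weights))."""
    weights = [1.0 / se ** 2 for _, se in effects]
    pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

wmd, ci = pool_fixed_effect(trials)
print(f"pooled A1C difference = {wmd:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```

Random-effects models, which many recent meta-analyses prefer when trials are heterogeneous, add an estimate of between-trial variance to each weight but otherwise follow the same weighting logic.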
Acutely, resistance exercise causes less hypoglycemia than aerobic exercise (31), and performing resistance exercise before aerobic exercise causes less hypoglycemia than performing aerobic exercise before resistance exercise (32). The article by Jane E. Yardley and Ronald J. Sigal in this Diabetes Spectrum From Research to Practice section (p. 32) offers additional details on this topic.

The design of a resistance training program should consider the experience, functional ability, and goals of the individual. Ideally, a strength training program should include several characteristics, including the use of concentric (muscle-shortening), eccentric (during which a muscle lengthens while under contraction), and isometric (contraction of a muscle without a change in length) muscle actions, along with bilateral and unilateral single- and multiple-joint exercises. To allow for optimal preservation of exercise intensity, large muscle-group exercises should be performed before small muscle-group exercises, multiple-joint exercises should precede single-joint exercises, and higher-intensity exercises should come before lower-intensity exercises (2). Resistance training is recommended on a minimum of two, and more ideally three, nonconsecutive days per week (4,33-35). In general, resistance training should start at a low intensity and progress slowly enough to prevent injury and motivational problems. Table 1 outlines a sample resistance training program.

Combined Resistance and Aerobic Exercise Training

There is evidence to suggest that performing both resistance training and aerobic training (i.e., "combined training") is the most beneficial type of training for people with type 2 diabetes. The combination of aerobic and resistance exercise improves glycemic control and reduces several cardiovascular risk factors more than either type of exercise alone. An early meta-analysis (36) reported that aerobic, resistance, and combined training reduced A1C levels by 0.7, 0.5, and 0.8%, respectively. Subsequently, the effect of combined training has been demonstrated in two large trials (22,37).

In the Diabetes Aerobic and Resistance Exercise (DARE) trial (37), 251 previously inactive adults with type 2 diabetes and a median baseline A1C of 7.5% were randomized to aerobic exercise training, resistance exercise training, combined aerobic and resistance training, or a waiting-list control group. The combined training group performed the full aerobic training program plus the full resistance training program, thereby undergoing more weekly exercise time than the other groups. Absolute A1C changes compared to control were -0.51% in the aerobic training group, -0.38% in the resistance training group, and -0.97% in the combined aerobic and resistance training group. Resistance exercise training, either alone or in combination with aerobic exercise training, reduced levels of atherogenic remnant-like particles (38) and also significantly improved the vitality and mental health dimensions of quality of life compared to either aerobic exercise training alone or the nonexercise control condition (39). Combined exercise was also superior to aerobic or resistance exercise alone in its impact on quality-adjusted life expectancy (40), according to the U.K. Prospective Diabetes Study Outcomes Model (41).
In the second study, the Health Benefits of Aerobic and Resistance Training in Individuals With Type 2 Diabetes (HART-D) trial (22), participants with type 2 diabetes who were 30-75 years of age were randomized to thrice-weekly aerobic training, resistance training, combined training, or a control condition. Unlike in the DARE trial, the combined group performed smaller amounts of aerobic and resistance exercise than those performing just one type of exercise, such that the total amount of weekly exercise time was similar among the three groups, and no efforts were made to minimize dietary or medication co-intervention. The absolute reduction in A1C in HART-D was significantly lower for the combined group compared to the control group (-0.34%), but not when compared to aerobic (-0.24%) or resistance (-0.16%) exercise. The combined group also had the greatest decrease in the use of antidiabetic medication. This trial provided additional support for combined aerobic and resistance exercise rather than either type of exercise alone, when total exercise time was equivalent.

However, if strength gains are an individual's primary goal, undertaking combined training may not be as effective as strength training alone. For example, muscular strength gains were assessed in the DARE trial in individuals performing resistance training or resistance plus aerobic training (42). Those doing only resistance training experienced greater increases in upper- and lower-body strength. Thus, although both types of training can lead to improved strength over 6 months, strength gains are greater with resistance training alone. It has been suggested that the aerobic component of combined training may induce an acute state of local fatigue (43), leading to an insufficient training stimulus when aerobic and resistance training are performed on the same day. Fatigue effects have been shown whether aerobic training is performed before or after resistance training (44). Additionally, studies in healthy individuals have suggested that alterations in neural recruitment patterns (45) and differences in cortisol responses (46,47) leading to attenuation of anabolic hormones (e.g., testosterone and growth hormone) may also play a role in the differences in strength improvements.

Another form of combined training is "circuit training," one form of which alternates aerobic training and resistance training, with each usually performed for a given amount of time (e.g., 45-60 seconds). To date, there have not been many studies evaluating this mode of training, although one study by Maiorana et al. (48) evaluated the effects of an 8-week circuit training program in 16 subjects with type 2 diabetes. They reported significant improvements in heart rate response, cardiorespiratory fitness, body composition, and A1C. Although these results are promising, more studies of longer duration and with larger samples of people with diabetes are needed.

Other Forms of Resistance Exercise

To date, the majority of published studies have carried out resistance exercise using weight machines or free weights. Accordingly, the evidence is much more robust for these types of training and cannot necessarily be generalized to other types of resistance exercise, such as resistance bands or exercises using only one's own body weight.
In real-world practice, training with elastic resistance bands is attractive because of lower associated costs and minimal equipment requirements. In addition, resistance bands offer easier access to strength training and greater feasibility of home-based training. Although the use of resistance bands is appealing, their efficacy within the diabetes population is unclear. In a recent meta-analysis (49), we identified seven trials evaluating the use of resistance bands in people with type 2 diabetes. Most studies reported increases in strength with resistance band training, but A1C changes were variable. The overall A1C change between resistance band training and the control condition was nonsignificant (-0.18%). In the identified studies, resistance band training did not significantly affect upper-extremity or hand-grip strength but significantly increased strength in the lower extremities. There were several serious limitations to the identified studies, including small sample sizes, inadequate durations of the training protocols, and limited room for progression. Given the limitations in the evidence, the current widespread adoption of resistance band training in exercise programs designed for people with diabetes may be premature. However, the low cost and easy accessibility of resistance band exercise make it attractive, if it is indeed effective. There is, therefore, a need for additional, higher-quality research evaluating resistance band exercise training.

Resistance Training Precautions

There are several precautions to consider when prescribing resistance training to patients with diabetes. For people with diabetes who want to begin resistance training workouts, no clear evidence is available to determine whether a pre-exercise evaluation involving graded exercise stress testing is necessary or beneficial before participation in this type of exercise. Moreover, coronary ischemia is less likely to occur during resistance training than during aerobic exercise eliciting the same heart rate, and resistance exercise may not induce ischemia at all (50,51). For example, even in men in cardiac rehabilitation programs with known coronary ischemia and electrocardiogram (ECG) changes inducible by moderate aerobic exercise, no evidence of angina, ST depression, abnormal hemodynamics, ventricular dysrhythmias, or other complications was documented during high-intensity resistance workouts (52). A study of 12 men with known coronary ischemia and ECG changes inducible by moderate aerobic exercise found that even maximal-intensity resistance exercise did not induce ECG changes (50).

Until the mid-1990s, resistance training was generally not prescribed to anyone with CVD because it was feared that increases in blood pressure would put the individual at increased risk for an adverse cardiac event. Resistance training, however, is now recommended for individuals with known CVD, including even those who have suffered a myocardial infarction or stroke. Such individuals experience less angina (chest pain due to ischemia) during resistance training than during aerobic treadmill training (50). During resistance work, both systolic and diastolic blood pressures rise in parallel, possibly helping to maintain coronary perfusion, whereas in aerobic exercise, systolic pressure rises significantly more than diastolic pressure. There is also a lesser rise in cardiac output with resistance training, and more rest between resistance sets compared to continuous aerobic exercise bouts (53).
Thus, for individuals who are diagnosed with coronary artery disease, moderate weight training may actually be a safer activity than most high-intensity aerobic exercise. In addition, a randomized trial demonstrated that resistance training increased quality of life in patients undergoing cardiac rehabilitation (54).

With regard to glycemic balance, in individuals whose diabetes is controlled by lifestyle modification or oral antidiabetic agents, the risk of developing hypoglycemia during resistance exercise is minimal, and most individuals will not need supplemental carbohydrates or other regimen changes. Although resistance training has a long-term impact on glycemic control similar to that of aerobic exercise, the acute effects of a single bout of this type of exercise result in a lower risk for both postexercise and late-onset hypoglycemia than aerobic training in adults with type 2 (55) or type 1 diabetes (31). Because the risk of hypoglycemia is low, monitoring blood glucose levels before and after a resistance training session is most likely unnecessary. However, if an individual on insulin or insulin secretagogues is new to this type of training, it may be useful to monitor blood glucose levels for the initial sessions because individual glycemic responses may vary.

Retinopathy is a concern, and those with pre-proliferative or proliferative retinopathy should be treated and stabilized before starting a resistance exercise regimen.

Balance Training

Normal aging is associated with slower cognitive processing (56), slower postural reactions (57), and decreased muscle strength (58), all of which are essential for optimal balance (59). The ability to optimally control one's balance is essential for mobility, avoidance of disability, and preservation of independence in older people. Maintaining balance and preventing falls is a significant concern in older adults, particularly those with diabetes, and balance training is an important intervention to reduce the risk of falling (60).

Causes of Loss of Balance and Falls in Diabetes

Older people with diabetes must contend with both age-related declines in balance control and health-related issues associated with diabetes. Older adults with type 2 diabetes, on average, have impaired balance, slower reactions, impairments in gait, and, consequently, a higher risk of falling than their nondiabetic counterparts (60-62). Diabetes-related complications such as peripheral neuropathy, visual deficits, cognitive impairments, autonomic dysfunction with orthostatic hypotension, and the use of various medications that can cause lightheadedness and instability can all have an additive effect on the risk of falling.

Diabetic neuropathy and its associated decline in sensory function are major contributing factors to the overall increase in falls risk in people with diabetes. Neuropathy develops as the result of chronic hyperglycemia, microvascular insufficiency, oxidative stress, and advancing age (63). One of the most common forms of neuropathy is distal symmetric diabetic polyneuropathy, which affects sensation and balance at the ankles and feet. The loss of nerve function can have dramatic implications for standing and walking tasks; people with diabetic neuropathy can exhibit increased postural motion and slower gait speed, with increased stride-time variability (64-67).
Additionally, there is slowing of reaction time, loss of the ability to prevent progression to a fall after its initiation, and dorsiflexion weakness, all of which increase susceptibility to falls (60,68).

Cognitive decline resulting from aging and diabetes is another factor contributing to instability, particularly when it is related to executive functioning, defined as the set of cognitive skills necessary to plan, monitor, and execute a sequence of goal-directed complex actions (69). Studies have reported an increased risk of cognitive impairment and dementia in older patients with diabetes (70), and diabetes itself has been recognized as an independent risk factor for the development of cognitive impairment in large, prospective, population-based studies with follow-up durations of up to 18 years (71). Changes in executive functioning have been shown to be associated with gait performance (72), and falls in older people have been associated with changes in the prefrontal cortex, leading to failures of executive control (69).

Another significant risk factor for falls is the use of medication, particularly the use of psychotropic medications and polypharmacy (i.e., the use of four or more medications simultaneously). The American Geriatrics Society states that evidence supports withdrawal of psychotropic medication to reduce falls (73). Although some clinicians believe that selective serotonin reuptake inhibitors (SSRIs) are generally safer to use in older adults than tricyclic antidepressants with regard to falls prevention, SSRIs may increase falls risk as much as, or even more than, the older tricyclic antidepressants (74). Reducing psychotropic medication as a single intervention has been found to reduce the fall rate by 66% (75). Assessment, adjustment, and discontinuation of some medications as part of a multifactorial intervention have also been found to be effective in reducing falls (73).

Benefits of Balance Training

Unfortunately, many people who are at risk of falling develop a fear of falling, which results in a further limitation of activity, leading to reduced mobility and decreased physical fitness (76). Although older individuals with diabetes often exhibit an increased risk of falling, exercise training interventions can significantly improve their balance and gait and reduce their risk of falling (77). Morrison et al. (60) demonstrated that a 6-week program of thrice-weekly, supervised balance and resistance training (Table 2) had positive effects on balance, proprioception, lower-limb strength, and reaction time. This program resulted in a decreased falls risk in older individuals with type 2 diabetes, regardless of whether they had neuropathy. An updated Cochrane review (78) included 59 studies evaluating the effectiveness of exercise in reducing falls risk and reported a 29% reduction in the rate of falls when comparing group exercise interventions (mostly resistance and balance training) to a nonexercise control group. This review also reported a 28% reduction in falls with Tai Chi exercise classes. Overall, the authors concluded that home-based, group-based, and Tai Chi exercise programs reduce the rate of falls and the risk of falling. Accordingly, most older adults are advised to undertake exercises that maintain or improve balance two to three times per week (2,73). Many lower-body and core-strengthening exercises concomitantly improve balance and may be included as part of both resistance and balance training.
Flexibility Training

Flexibility, the ability to move a joint through a complete range of motion, is considered by the American College of Sports Medicine to be an important part of physical fitness (2). Some types of physical activity, along with various activities of daily living, require more flexibility than others. In the elderly and in people with all types of diabetes, limited joint mobility is frequently observed, likely resulting from the formation of advanced glycation end-products (AGEs), which accumulate in the plasma and tissues during the normal aging process, but to an accelerated degree in diabetes (79). The most extensive accumulation of AGEs occurs in tissues that contain proteins with low turnover, such as the collagen in the extracellular matrix of the articular capsule, ligaments, and muscle-tendon units. An increase in collagen cross-linking alters the mechanical properties of these tissues, resulting in a decrease in elasticity and tensile strength and an increase in mechanical stiffness. Because of the potential for AGE-related damage with fluctuations in blood glucose levels, people with diabetes are more prone to developing structural changes to joints that can limit movement. These include shoulder adhesive capsulitis ("frozen shoulder"), carpal tunnel syndrome, metatarsal fractures, and neuropathy-related joint disorders (e.g., Charcot foot), among others. Aging itself also results in a reduction in flexibility and joint movement (79).

Many guidelines (2,7,80,81) recommend flexibility training as an adjunct to aerobic and resistance training. However, it is unknown whether flexibility training reduces the risk of acute, exercise-related injury (82,83). Stretching exercises are effective in increasing flexibility and thereby may allow people to more easily do activities that require greater flexibility. However, unlike with aerobic and resistance training, the benefits, if any, of flexibility training for diabetes management are not clear. We are not aware of any studies in people with diabetes demonstrating a beneficial impact of a pure stretching program on metabolic control, injury risk, or any diabetes-related outcome. The ADA recommendation regarding flexibility training is that it may be included as part of a physical activity program, but not as a substitute for other training (4). Flexibility exercises, combined with resistance training, have been shown to increase joint range of movement in individuals with type 2 diabetes (84), and a more recent study suggests that flexibility as a component of a balance training program can reduce the risk of falls in older individuals with diabetes (60). One study in nondiabetic older adults comparing an aerobic and resistance training program to a pure flexibility program reported that the flexibility group had better bodily pain scores, whereas the aerobic and strength group had better endurance and strength scores (85). Regular stretching can be considered an option to include in a fitness plan, in particular for older adults. However, time spent on flexibility exercise should not be counted toward meeting the aerobic or muscle-strengthening guidelines.

Alternative Forms of Fitness Training: Yoga and Tai Chi

Nontraditional exercises have become increasingly popular in recent years, both in practice and in the literature.
Both yoga and Tai Chi are multifaceted and involve varying combinations of flexibility, balance, and resistance exercise as part of the instruction. Gentle movement such as that undertaken during both of these exercise modalities can benefit flexibility, and both activities can assist adults in meeting the recommended levels of participation in flexibility exercise. Benefits of Yoga Yoga has been investigated for its potential to improve blood glucose management. In a recent systematic review and meta-analysis (86), 11 studies were identified evaluating the effects of yoga in people with type 2 diabetes. In meta-analysis of seven of these studies, relative to usual care, yoga improved A1C, with a mean difference of -0.49% (95% CI -1.03 to -0.05). It should be noted that the quality of these studies was low, there was significant heterogeneity, and the risk of bias using the Cochrane risk of bias tool (87) was high or unclear. Within the identified studies, the training regimens varied greatly, with many involving practice 5-7 days/week for >60 minutes per session. Other systematic reviews of trials evaluating yoga as an intervention for type 2 diabetes found modest reductions in A1C and fasting glucose (88,89) but stated that the limitations characterizing most yoga studies preclude drawing firm conclusions. Participation in yoga may provide other health benefits. A recent Cochrane review of yoga for the primary prevention of CVD (90) reported that small, beneficial effects were seen in HDL cholesterol, triglycerides, and diastolic blood pressure, yet the effect on LDL cholesterol was uncertain. Again, the trials included were at risk of bias, and larger, more methodologically rigorous trials are needed. Given the limited evidence to date, the authors of that review were unable to determine the effects of yoga in CVD prevention. The metabolic cost of yoga also has been investigated in several studies. Exercise intensity can be expressed in metabolic equivalents (METs), with 1 MET being the energy expenditure while sitting quietly at rest. One study reported that the average metabolic cost for the majority of yoga poses was 1-2 METs in young male Hatha yoga instructors (91), and another study found that Hatha yoga required ~55% lower metabolic costs than walking at 3.5 mph on a treadmill (92). Another study concluded that the metabolic cost of Hatha yoga averaged across the entire session was similar to walking at 1.9 mph (93), much less than the moderate level of intensity recommended by guidelines. All of these studies concluded that Hatha yoga was a low-intensity activity and would not contribute to cardiovascular fitness. The health benefits of yoga may lie more in its muscular fitness and flexibility effects, as well as in its relaxation and stress management properties. However, the wide array of yoga styles, practices, and applications makes it hard to draw firm conclusions regarding its effectiveness.
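Because several of the studies above report intensities in METs, a short worked example may help make the unit concrete. The following Python sketch applies the standard approximation kcal/min = (METs x 3.5 x body mass in kg) / 200; the 80-kg body mass and the specific MET values below are illustrative assumptions, not data taken from the cited studies.

# Rough energy-expenditure estimate from an activity's MET value.
# Standard approximation: kcal/min = (METs * 3.5 * body_mass_kg) / 200.
def kcal_per_min(mets, body_mass_kg):
    return mets * 3.5 * body_mass_kg / 200.0

# Hypothetical 80-kg adult; MET values are rough figures in the range
# discussed above (Hatha yoga ~1-2 METs, slow vs. brisker walking).
for activity, mets in [("Hatha yoga", 1.5),
                       ("walking 1.9 mph", 2.5),
                       ("walking 3.5 mph", 4.0)]:
    print(f"{activity}: ~{kcal_per_min(mets, 80):.1f} kcal/min")

At these values, a full yoga session for an 80-kg adult would expend roughly half the energy of the same time spent walking briskly, consistent with the studies' conclusion that Hatha yoga is a low-intensity activity.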
Benefits of Tai Chi Tai Chi originated in China and evolved from a form of martial arts. It involves a series of movements performed in a slow, focused manner and accompanied by deep breathing and is considered a form of moving meditation (94). The effects of Tai Chi on glucose control in people with type 2 diabetes have been mixed. Some studies have reported reductions in A1C, whereas others have not. Recent systematic reviews (95,96) found that Tai Chi had no significant effects on glycemic control. Yan et al. (96) pooled four RCTs and five nonrandomized, controlled trials and reported a nonsignificant weighted mean difference in A1C of -0.19% (P = 0.09) for the RCTs, whereas the pooled change in A1C for the nonrandomized trials was -0.41% and significant. Because most Tai Chi studies have had small sample sizes, larger and better-designed randomized trials are needed to clarify its potential health effect for people with diabetes. A recent Cochrane systematic review (97) concluded that, although some beneficial effects of Tai Chi on CVD risk factors may exist, they were inconsistent across studies, and no conclusions could be drawn regarding its effectiveness. Of note, in 374 nondiabetic subjects, a 12-week Tai Chi or walking exercise intervention (98) produced similar and significant beneficial effects on body composition, aerobic fitness, and fasting blood glucose. However, this study also reported that, although Tai Chi and walking both elicited significant cardiorespiratory responses and energy expenditures, walking elicited an ~46% higher metabolic cost than Tai Chi. Another meta-analysis (99) reported a wide range of exercise intensities associated with Tai Chi, ranging from 1.5 to 4.6 METs, which may explain some of the variance in outcomes. Tai Chi has been reported to be beneficial for balance, mobility, and falls prevention in some studies. In one 6-month study, weekly Tai Chi training improved plantar sensation and balance in elderly adults with and without diabetes who had a large plantar sensation loss (100). In another study (101), diabetic participants with peripheral neuropathy were assigned (not randomized) to a Tai Chi group or a control group. After 12 weeks of training, the Tai Chi group showed improvement in balance and neuropathic symptoms, as well as glucose control and quality of life. However, the dropout rate was high (34%). In an attempt to consolidate the overall evidence on Tai Chi, a systematic review of systematic reviews (102) was performed. The authors concluded that the evidence for Tai Chi was contradictory for many outcomes. However, for reducing the risk of falls and improving psychological health, particularly in the elderly, the evidence was convincingly positive. The aforementioned Cochrane review supported this and found that Tai Chi was effective in reducing falls in older individuals (78). To date, evidence for the beneficial effects of yoga and Tai Chi is not as extensive or as supportive as for aerobic and resistance exercise. Current diabetes and exercise guidelines (4,35) are unable to conclusively support the inclusion of yoga or Tai Chi because of the variable results with regard to glycemic benefits. Such exercises can be included based on individual preferences to increase flexibility, muscular strength, and balance, as well as for the psychological benefits, including stress reduction and relaxation. However, their effects on aerobic fitness are likely minimal, and their impact on glycemic control is variable. Conclusion Many different forms of exercise offer distinctive and substantial health benefits for people with diabetes. Given that adults with diabetes are a heterogeneous group, ranging from fit and well to frail with many comorbidities and functional disabilities, many will need to start at a low dose and intensity of exercise training and gradually progress from there.
An exercise regimen that promotes the accumulation of at least 150 minutes of moderate to vigorous aerobic activity per week, together with activities that build muscular strength two to three times per week, is recommended by most guidelines (4,33-35) to promote health. Muscular fitness is a vital component of healthy aging and key for maintaining functional independence with age. Resistance training, ideally with free weights or weight machines, is an essential component of healthy living that helps to maintain muscle mass and functional independence. It should be recommended and promoted for most adults with diabetes. In discussing exercise goals with adults with diabetes, it is important to endorse exercise training that builds and sustains muscular fitness and to not solely promote aerobic activities. For older individuals, it may be advisable to also encourage exercises to improve balance. It is also important that individuals find activities that are at least acceptable to them, and hopefully enjoyable, so that they are more likely to sustain their exercise regimen over the long term.
2017-04-29T00:32:06.528Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "30921c6ab93e23ab52374ae783c937bd89943bc6", "oa_license": "CCBYNCND", "oa_url": "https://spectrum.diabetesjournals.org/content/diaspect/28/1/14.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "30921c6ab93e23ab52374ae783c937bd89943bc6", "s2fieldsofstudy": [ "Education", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235297268
pes2o/s2orc
v3-fos-license
In Silico Predicted Antifungal Peptides: In Vitro and In Vivo Anti-Candida Activity It has been previously demonstrated that synthetic antibody-derived peptides could exert a significant activity in vitro, ex vivo, and/or in vivo against microorganisms and viruses, as well as immunomodulatory effects through the activation of immune cells. Based on the sequence of previously described antibody-derived peptides with recognized antifungal activity, an in silico analysis was conducted to identify novel antifungal candidates. The present study analyzed the candidacidal and structural properties of in silico designed peptides (ISDPs) derived by amino acid substitutions of the parent peptide KKVTMTCSAS. ISDPs proved to be more active in vitro than the parent peptide and all proved to be therapeutic in Galleria mellonella candidal infection, without showing toxic effects on mammalian cells. ISDPs were studied by circular dichroism spectroscopy, demonstrating different structural organizations. These results allowed the validation of a consensus sequence for the parent peptide KKVTMTCSAS that may be useful in the development of novel antimicrobial molecules. Introduction The development of antimicrobial drugs in the middle of the last century greatly improved the prognosis of infectious diseases, thus increasing life expectancy. Nonetheless, even today, infectious diseases, with particular reference to those caused by new and re-emerging etiologic agents, represent a common cause of death in different areas of the world [1]. Opportunistic fungal infections pose a special threat to particular at-risk populations, such as severely immunocompromised people, transplanted individuals, and oncologic patients. Among these, infections due to Candida spp. are the most common worldwide. Drugs to treat invasive fungal infections are limited to only a few approved classes and, despite the available treatments, mortality rates remain unacceptably high. Further problems are due to the increasing spread of resistance phenomena, although to a lesser extent compared with antibacterial drugs [2]. While significant efforts are ongoing in identifying novel antifungal compounds and classes, and in optimizing the agents within the present antifungal arsenal, new strategies have also been applied to drug development, including the screening of approved drugs for repurposing [3-5]. Recent reviews have described antifungal agents currently in various stages of clinical development [6-9]. Among the potential candidate drugs, a great number of antimicrobial peptides (AMPs) from different sources have been studied [10-14]. Thanks to their features, AMPs are attractive molecules for translational application, and dozens of them are currently being evaluated in clinical trials, although only a few as antifungals [15,16]. Moreover, new methods such as template-based approaches, docking simulations, and other sequence-based methods allow for novel in silico prediction of antifungal peptides [17,18]. In this work, based on the sequence of previously described antibody-derived peptides with recognized antifungal activity, an in silico analysis was conducted aimed at the identification of novel antifungal peptides. The selected candidates proved to be more active in vitro than the parent peptide against a reference Candida albicans strain, without showing toxic effects on mammalian cells. All of them also exhibited a therapeutic effect in vivo in Galleria mellonella candidal infection. These results allow the validation of a consensus sequence that could be useful for obtaining optimized molecules from a recognized antimicrobial peptide. In Silico Analysis Computational analysis was performed starting from three previously described peptides endowed with anti-Candida activity. In particular, the peptides K10S (KKVTMTCSAS) [19], D5A (TCRVAHRGLTF) [20], and N1A (AQVSLTCLVK) [21] were selected to determine the correspondences between residues of the three sequences. For this purpose, we exploited MOE's sequence alignment tool, a modified version of the alignment methodology originally introduced into molecular biology by Needleman (Molecular Operating Environment, MOE, 2020.09, Chemical Computing Group ULC, Montreal, QC, Canada, 2020). The alignment was computed through a function based on a residue similarity score (obtained by applying the BLOSUM 40 substitution matrix) and gap penalties. Starting from the amino acid sequence of the K10S peptide, random peptides were generated through a sample-sequence methodology and analyzed through the mutational analysis tool.
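To make the alignment step concrete, the following minimal Python sketch implements a textbook Needleman-Wunsch global alignment and computes a pairwise identity for two of the peptides above. It deliberately uses a simple match/mismatch/gap scheme rather than the BLOSUM 40 matrix and gap penalties used in MOE, so its identity values will not necessarily reproduce those reported below; it illustrates the algorithm, not the exact MOE procedure.

# Minimal Needleman-Wunsch global alignment (linear gap penalties).
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    # Fill the dynamic-programming score matrix F.
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s, F[i - 1][j] + gap, F[i][j - 1] + gap)
    # Trace back to recover one optimal alignment.
    ai, bi, i, j = [], [], n, m
    while i > 0 or j > 0:
        s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + s:
            ai.append(a[i - 1]); bi.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and F[i][j] == F[i - 1][j] + gap:
            ai.append(a[i - 1]); bi.append("-"); i -= 1
        else:
            ai.append("-"); bi.append(b[j - 1]); j -= 1
    return "".join(reversed(ai)), "".join(reversed(bi))

K10S, N1A = "KKVTMTCSAS", "AQVSLTCLVK"
x, y = needleman_wunsch(K10S, N1A)
identity = 100 * sum(p == q for p, q in zip(x, y)) / len(x)
print(x); print(y); print(f"pairwise identity: {identity:.1f}%")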
Peptide Synthesis The K10S parent peptide and the in silico designed peptides (ISDPs) derived from K10S by amino acid substitutions were synthesized using fluoren-9-ylmethoxycarbonyl (Fmoc) solid-phase synthesis chemistry, purified by HPLC, and analyzed by mass spectroscopy at the CRIBI-Peptide Facility (University of Padua, Padua, Italy), as previously described [21]. A stock solution (20 mg/mL) of each peptide was prepared in dimethyl sulfoxide (DMSO) and stored at 4 °C. Dilutions were made for the evaluation of biological activities. Controls (without peptides) always contained DMSO at the proper concentrations (maximum 0.5%). Evaluation of the In Vitro Candidacidal Activity of ISDPs The candidacidal activity of ISDPs was evaluated by conventional colony forming unit (CFU) assays, as previously described [22]. ISDPs were tested at serial dilutions to determine the half maximal effective concentration (EC50) values. Briefly, approximately 500 germinating C. albicans SC5314 cells were suspended in 100 µL of distilled water in the presence or absence (control growth) of ISDPs. After incubation at 37 °C for 6 h, cell suspensions were plated on Sabouraud dextrose agar. CFUs were enumerated after 48-72 h of incubation at 30 °C, and candidacidal activity was determined as the percentage of CFU reduction. Each assay was carried out in triplicate, and at least two independent experiments were performed for each condition. The EC50 was calculated by nonlinear regression analysis using GraphPad Prism 5 software. Afterwards, the kinetics of the killing activity of ISDPs, at a concentration of 2× their EC50 value, was determined by CFU assays up to 6 h. Samples were collected for CFU determination after 5, 10, 15, 20, 30, 60, 120, 240, and 360 min of incubation.
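As an illustration of the EC50 estimation step described above (performed in the study with GraphPad Prism 5), the following Python sketch fits a sigmoidal dose-response curve to killing data by nonlinear regression; the concentration and killing values below are invented for illustration only.

# EC50 by nonlinear regression on a sigmoidal dose-response model.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, ec50, h):
    """Percent killing (0-100) at concentration c, with Hill slope h."""
    return 100.0 / (1.0 + (ec50 / c) ** h)

conc = np.array([0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0])      # uM, serial dilutions
kill = np.array([5.0, 15.0, 35.0, 55.0, 80.0, 93.0, 98.0])  # % CFU reduction

# Fit the two parameters; p0 gives a rough starting guess.
(ec50, h), _ = curve_fit(hill, conc, kill, p0=[0.2, 1.0])
print(f"EC50 ~ {ec50:.3f} uM, Hill slope ~ {h:.2f}")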
Evaluation of the Hemolytic and Cytotoxic Effects of ISDPs Hemolytic and cytotoxic effects were determined as previously described [22]. In particular, ISDPs (final concentrations of 25, 50, and 100 µM) were tested for their hemolytic activities against human red blood cells (hRBCs) (group 0 Rh+). After 30 and 120 min of incubation at 37 °C, the release of hemoglobin was monitored by measuring the absorbance of the supernatant at 540 nm. Controls for zero hemolysis (blank) and 100% hemolysis consisted of hRBCs suspended in PBS and 1% Triton, respectively. The MTT assay, based on the ability of metabolically active cells to convert the yellow water-soluble tetrazolium salt into formazan crystals, was used to evaluate peptide cytotoxicity against monkey kidney epithelial cells (LLC-MK2). LLC-MK2 cells were treated with ISDPs (50, 100, and 200 µM) for 24 h. Cells in medium without peptide served as the control. After this period, cells were incubated with MTT (5 mg/mL, 10 µL/well) in serum-free medium for 2 h at 37 °C, the medium was removed, and the formazan crystals were solubilized by adding isopropanol with 5% HCl 1 M (100 µL). Absorbance was measured at 540 nm. Circular Dichroism (CD) Spectroscopy Circular dichroism (CD) experiments were carried out using a Jasco 715 spectropolarimeter (JASCO International Co. Ltd., Tokyo, Japan), coupled to a Peltier PTC-348WI system for temperature control. Far-UV spectra were recorded at 20 °C in the range 250-190 nm, with 0.5 nm wavelength steps, 50 nm/min scanning speed, 1.0 nm bandwidth, and four accumulations, using a 1 mm path length quartz cuvette. A starting aqueous solution (1 mM) of the parent K10S peptide and the ISDPs was prepared and stored at 4 °C. For CD experiments, samples were diluted to a final concentration of 100 µM and analyzed immediately or 7 days and 24 months later. Following baseline correction, the observed ellipticity θ (millidegrees) was converted to the molar mean residue ellipticity [θ] (deg cm² dmol⁻¹).
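The ellipticity conversion used for the CD spectra can be written explicitly as [θ] = θ(mdeg) / (10 · c · l · n), where c is the molar peptide concentration, l the path length in cm, and n the number of residues. A minimal sketch under the reported conditions (100 µM peptide, 1 mm cuvette, 10-residue peptides) follows; the θ reading itself is a made-up example value.

# Convert observed ellipticity (mdeg) to molar mean residue ellipticity.
def mean_residue_ellipticity(theta_mdeg, conc_molar, path_cm, n_residues):
    """[theta] in deg*cm^2*dmol^-1 = theta(mdeg) / (10 * c(M) * l(cm) * n)."""
    return theta_mdeg / (10.0 * conc_molar * path_cm * n_residues)

theta = -12.5  # example reading near 198 nm, in millidegrees
mre = mean_residue_ellipticity(theta, conc_molar=100e-6, path_cm=0.1, n_residues=10)
print(f"[theta] ~ {mre:.0f} deg cm^2 dmol^-1")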
Evaluation of Apoptosis Induction and Reactive Oxygen Species (ROS) Production in C. albicans after Treatment with ISDPs The induction of apoptosis after treatment with ISDPs was evaluated as previously described [21]. Yeast cells were suspended in water (5 × 10⁵ cells/mL), in the absence (control) or presence of ISDPs at 2× their EC50 value, for 30 min. For the evaluation of the apoptotic profile, the Muse Annexin V & Dead Cell Assay reagent (Merck Millipore, Merck KGaA, Darmstadt, Germany) was used. Data were acquired by the Muse Cell Analyzer (Merck Millipore) according to the manufacturer's instructions. At least two independent experiments were performed for each condition. Data are reported as the mean ± standard deviation. Differences between ISDP-treated groups and the control or parent-treated group were assessed by an unpaired two-tailed Student's t-test. A value of p < 0.05 was considered significant. ISDP-induced ROS production in C. albicans SC5314 cells was evaluated as previously described [19]. Briefly, yeast cells (2 × 10⁷ cells/mL) were suspended in 110 µL of water in the presence or absence of 25 mM ascorbic acid and incubated for 30 min. Then, ISDPs were added at a concentration of 30× their EC50 value in a final volume of 220 µL. As a positive control, yeast cells were incubated in the presence of 20 µg/mL caspofungin. After 30 min of incubation at 37 °C, cells were centrifuged and resuspended in 220 µL of PBS pH 7.4 with 10 µg/mL 2′,7′-dichlorofluorescin diacetate (DCFH-DA). Then, 100 µL of the suspensions were transferred into 96-well microplates for fluorescence (PerkinElmer™, Waltham, MA, USA) and incubated for 4 h at 37 °C. The fluorescent signal due to DCFH (derived from the oxidation of DCFH-DA) was measured from time 0 up to 4 h on an EnSpire plate reader (PerkinElmer™) at excitation and emission wavelengths of 485 and 540 nm, respectively. Each assay was carried out in duplicate, and at least two independent experiments were performed for each condition. Fluorescence Microscopy Studies The potential role of ISDPs in membrane permeabilization was studied in living C. albicans SC5314 cells by fluorescence microscopy using a Nikon Eclipse 80i optical microscope, equipped with a Nikon Digital Sight DS-2Mv camera; images were acquired with NIS Elements F control software (Nikon Co., Tokyo, Japan). Yeast cells grown in yeast extract, peptone, and dextrose broth overnight at 30 °C with shaking (100 rpm) were washed once with water, and 4 × 10⁷ cells/mL were loaded with 500 µM Lucifer Yellow (LY) and 1.5 µM propidium iodide (PI). LY is a fluorescent molecule used as a quantitative marker of cell membrane permeabilization [23], while PI is a non-vital nuclear stain commonly used for identifying dead cells. Yeast cell suspensions (10 µL) were seeded on Polysine Adhesion Slides (Thermo Scientific™, Thermo Fisher Scientific Inc., Waltham, MA, USA) and, after 10 min, ISDPs were added (final concentrations in the range 90-97 µM). Images were taken up to 30 min at 40× magnification. Evaluation of the In Vivo Therapeutic Activity of ISDPs in Galleria mellonella The potential in vivo therapeutic effects of ISDPs were studied in G. mellonella larvae injected with a lethal dose of C. albicans SC5314 cells, as previously described [24]. Thirty minutes after Candida infection (5 × 10⁵ cells/larva in 10 µL of saline), larvae were injected via the last right pro-leg with ISDPs (13 µmol/kg) or saline (control) (16 larvae/group). Further control groups consisted of larvae left untouched or inoculated with only 10 µL of saline solution. Larvae were then transferred into clean Petri dishes, incubated at 37 °C in the dark for 9 days, and scored daily for survival. Survival curves of ISDP-treated and control animals were compared by the Mantel-Cox log-rank test. A value of p < 0.05 was considered significant. In Silico Peptide Analysis and ISDPs' Selection The sequence alignment obtained for the three selected antimicrobial peptides (K10S, D5A and N1A, Table 1) suggests a pairwise percentage identity (PPI) of 10.0 for K10S versus D5A, 30.0 for K10S versus N1A, and 20.0 for D5A versus N1A. Notably, despite the low PPI, a slight trend in the amino acid sequences of the three peptides can be identified. Indeed, all three peptides present two hydrophobic residues, one of which is a conserved valine, spaced by four (D5A and N1A) or five (K10S) amino acids. Based on this rationale, we focused on the K10S peptide in order to identify further active derivatives and, consequently, a possible consensus sequence that could be decisive for their antifungal activity. For this purpose, we generated a small database of K10S derivatives characterized by substitutions with basic, hydrophobic, or Ser/Thr residues. In particular, we substituted the two Lys at the N-terminus with two Arg (R10S-RR), replaced the Ala residue at the C-terminus with an Ile (K10S-I), and swapped Thr for Ser, and vice versa, in the two triplets T-X-T (becoming S-X-S, K10S-SS) and S-X-S (becoming T-X-T, K10T-TT). Table 1. Sequence alignment of the selected antimicrobial peptides (the alignment itself is not reproduced here). Red: identical residue in the three sequences; blue: identical residue in two sequences; green: residue with similar chemical-physical characteristics. In Table 2, the sequences and characteristics of the parent peptide and the selected ISDPs are reported.
In Vitro Candidacidal Activity of ISDPs All ISDPs showed an increased candidacidal activity against C. albicans SC5314 in comparison with the parent peptide K10S, with EC50 values ranging between 0.159 and 0.260 µM (Table 3). The highest activity was observed with the Thr/Ser and Ser/Thr substitutions. The rates of C. albicans killing by the ISDPs over time are reported in Figure 1. The candidacidal activity of all tested peptides was very fast: after 30 min, percentage killing ranged from 68% (K10S-SS) to nearly 100% (R10S-RR and K10S-I). All ISDPs demonstrated a more rapid candidacidal effect in comparison with the parent K10S peptide, whose percentage killing at 30 min was 36.93% [19]. Moreover, none of the investigated peptides showed significant cytotoxicity against LLC-MK2 cells, as assessed by the MTT assay. After 24 h of incubation with the peptides, at all concentrations tested, mean absorbance values were higher than those of the untreated cells, with the only exception of K10S-SS at 200 µM (in this case, cell viability was 92.5 ± 0.11% vs. 100% for untreated control cells). ISDPs Conformational State CD spectra of all ISDPs were acquired at time 0, 7 days, and 2 years after the preparation of the starting aqueous solution. The CD spectra observed at time 0 and 7 days showed a similar profile for all ISDPs, with a negative band around 198 nm, typical of random coil structures (Figure 2). Different CD spectra were observed after 2 years. While the parent K10S peptide was not able to undergo any transition toward a recognizable organized structure [19], all its derivatives were able to acquire a well-defined secondary structure. In particular, a β-sheet structure was observed for R10S-RR, K10T-TT, and K10S-I, while K10S-SS showed an α-helix conformation (Figure 3).
Apoptosis Induction and ROS Production in C. albicans Cells after Treatment with ISDPs Flow cytometry based on the detection of phosphatidylserine on the surface of yeast cells was used to assess whether treatment with ISDPs could induce apoptosis in C. albicans SC5314 cells. Under the experimental conditions adopted, only the R10S-RR peptide was able to induce apoptosis (p < 0.05), although in a low number of cells, unlike the parent K10S peptide (Figure 4). (Figure 4 caption: data represent the mean ± standard deviation from at least two independent experiments; the percentage of apoptotic cells after treatment with R10S-RR, although low, was significantly different from that of untreated or parent peptide-treated cells, as assessed by Student's t-test, p < 0.05.) Intracellular ROS production was evaluated in C. albicans cells after treatment with ISDPs for 30 min. A green fluorescence resulting from the oxidation of DCFH-DA into DCFH, indicating the presence of ROS, was seen for all ISDPs. As observed with the positive control caspofungin, ROS production was inhibited by previous treatment with the well-known antioxidant ascorbic acid. In Figure 5, the ∆ fluorescence value at 4 h is reported. Fluorescence Microscopy Studies on C. albicans Cells after Treatment with ISDPs Fluorescence microscopy was used to investigate membrane permeabilization in living C. albicans cells following ISDP treatment. An irreversible membrane permeabilization was observed already after 10 min of treatment with R10S-RR and K10S-I, as demonstrated by the simultaneous internalization of LY and PI in many yeast cells. In Figure 6, C. albicans cells after 20 min of treatment with R10S-RR are shown. In contrast, membrane permeabilization was very limited in Candida cells after 30 min of treatment with the K10T-TT and K10S-SS peptides.
In Vivo Therapeutic Activity of ISDPs in G. mellonella The therapeutic activity of ISDPs against C. albicans infection was evaluated in G. mellonella larvae. After the inoculation of a lethal dose of yeast cells, a single injection of peptides led to a significant increase in the survival of larvae in comparison with infected animals inoculated with saline. The median survival time was 48 h for larvae treated with R10S-RR and K10S-I, and 24 h for larvae treated with K10T-TT and K10S-SS, versus 24 h for the control group. In a previous work, a median survival time of 72 h was reported for larvae treated with the parent K10S peptide, in comparison with 24 h for controls [25]. The survival curves in Figure 7 report the pooled results obtained in three independent experiments.
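A minimal sketch of the Mantel-Cox log-rank comparison applied to such survival data is given below, using the lifelines Python library; the day-of-death values are invented for illustration (larvae still alive at the end of the 9-day observation window are treated as censored) and do not correspond to the actual counts behind Figure 7.

# Log-rank (Mantel-Cox) comparison of two survival curves.
import numpy as np
from lifelines.statistics import logrank_test

days_treated = np.array([2, 2, 3, 4, 5, 7, 9, 9, 9, 9])  # peptide-treated larvae
dead_treated = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])  # 0 = censored at day 9
days_control = np.array([1, 1, 1, 2, 2, 2, 3, 3, 4, 9])  # saline controls
dead_control = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0])

res = logrank_test(days_treated, days_control,
                   event_observed_A=dead_treated,
                   event_observed_B=dead_control)
print(f"log-rank p = {res.p_value:.4f}")  # p < 0.05 would indicate a survival benefit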
Discussion The epidemiological relevance of fungal infections requires the expansion of the limited therapeutic repertoire currently available, with the ultimate goal of reducing mortality from fungal diseases [25]. In this perspective, peptides of different origin showing antifungal activity, also against drug-resistant strains, are interesting candidate molecules. Recently, a new open-access database (DRAMP) containing over twenty thousand peptides, of which more than three thousand are characterized by antifungal activity, was developed [26]. To overcome the drawbacks of AMPs in relation to their efficacy, in vivo stability, toxicity, and expensive large-scale production, new strategies have been focusing on designing synthetic mimics and developing new delivery systems [15,27-30]. Moreover, new strategies have been developed for molecular modifications, and many peptides have been studied intensively, such as histatin-5 [29], lactoferricins [31], and anoplin [32]. In this context, in silico studies were performed aiming to identify novel derivatives of the peptide K10S endowed with antifungal activity against C. albicans. Amino acid substitutions were designed in order to understand the role of specific residues in determining the activity of the peptides, maintaining a good balance among optimal hydrophobicity, positive charges, hydrogen bonding attitude, and their distribution within the peptide sequence. The results were in agreement with what has already been described by Agrawal et al. [17]. All K10S derivatives showed an increased candidacidal activity in comparison with the parent peptide. In particular, the highest anti-Candida activity was observed with Ser/Thr substitutions, confirming the significance of having polar uncharged residues in precise positions within the peptide sequence. Ala replacement with the more hydrophobic Ile amino acid in the K10S-I derivative and the double replacement of Lys with Arg residues at the N-terminus in the R10S-RR peptide also improved peptide activity in comparison with K10S, demonstrating the importance of maintaining a certain basicity at the N-terminus and hydrophobicity at the C-terminus. A Lys-Arg substitution has already been reported for lactoferricin [33]. It is now well established that an increase in hydrophobicity and amphipathicity usually correlates with an increase in antifungal activity. These features are crucial for peptide-membrane interactions and membrane permeabilization [34] and are important variables to consider in the design of synthetic peptides. All the results obtained with the ISDPs derived from K10S suggest a consensus sequence characterized by basic residues at the N-terminus and two hydrophobic amino acids spaced by five residues of Ser or Thr alternating with generic amino acids (Basic-Hyd-T/S-X-T/S-X-T/S-Hyd). Enough time for the peptides to find the best possible conformation from an energetic point of view was provided by performing the CD analysis 2 years after the preparation of the stock aqueous solution. The acquired CD spectra showed well-defined secondary structures for the investigated derivatives, unlike the parent peptide.
This diversity could be traced back to specific characteristics of the substituted amino acids. In particular, the Arg residues in the R10S-RR peptide are involved in a greater number of hydrogen bonds than the Lys residues in the K10S parent peptide, favoring the acquisition of secondary structure; in the same way, substitution of Ala with a more hydrophobic amino acid (Ile) in the K10S-I peptide favored interstrand hydrophobic interactions. Thr residues introduced a propensity for the formation of β-sheet structure [35], while Ser substitutions, owing to their lower bulkiness, favored the α-helical conformation. The activity of K10T-TT and K10S-SS correlates with the GRAVY parameter, which considers the hydrophobicity of amino acid residues. However, the structural differences between the peptides do not seem to correlate with antifungal activity; indeed, the two peptides with the highest anti-Candida activity, K10T-TT and K10S-SS, are structured in β-sheet and α-helix, respectively. The individual amino acid substitutions also influenced the possible mechanism of action of the peptides. For the parent peptide K10S, a non-membranolytic mechanism of action has been hypothesized, with the induction of intracellular ROS production, a rapid decrease in mitochondrial transmembrane potential, and the lack of induction of apoptotic processes [19]. All ISDPs were characterized by the ability to induce intracellular ROS production (Figure 5). K10T-TT and K10S-SS showed behavior similar to the parent peptide, although with faster killing kinetics, which could explain their lower EC50 values. Instead, for R10S-RR and K10S-I, peptide-induced membrane permeabilization has been assumed. In particular, R10S-RR was characterized by very fast killing kinetics; indeed, after 5 min of treatment with this ISDP more than 70% of the cells were dead (Figure 1). This behavior is compatible with a membranolytic mechanism of action, also confirmed by the fluorescence microscopy studies (Figure 6). After a few minutes of peptide treatment, many yeast cells showed simultaneous internalization of LY and PI. The reasons for this difference from the parent peptide could mainly be due to the physico-chemical characteristics of the Arg residue, which is positively charged at physiological pH like Lys, but has a lower hydropathicity value [36], a critical feature for the interaction of the peptide with the target microbial cells. The ability to induce irreversible membrane permeabilization has also been hypothesized for K10S-I, in relation to the greater hydrophobicity of this peptide. A unique aspect of the R10S-RR derivative in comparison with the parent K10S and the other ISDPs was its ability to induce apoptosis under the experimental conditions, though in a low percentage of treated yeast cells. The possible hemolytic and cytotoxic effects of all ISDPs were also investigated, excluding damage to human red blood cells and LLC-MK2 cells, respectively. Last, but not least, the potential interest of the investigated peptides was confirmed by the observation that all ISDPs showed a therapeutic effect in vivo in the experimental systemic candidiasis model in G. mellonella larvae. In conclusion, the in silico design of K10S derivatives followed by their synthesis has proven to be an effective strategy for producing novel antifungal peptides and for determining a rather precise consensus (Basic-Hyd-T/S-X-T/S-X-T/S-Hyd) for the identification and/or development of new peptides endowed with anti-Candida activity.
As a future perspective, we plan to characterize further substitutions of the K10S peptide and to apply our approach to the optimization of D5A and N1A peptides.
2021-06-03T06:17:23.187Z
2021-05-31T00:00:00.000
{ "year": 2021, "sha1": "07ff057a269d716f83b8e08e86b6ebc1c4e34d09", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2309-608X/7/6/439/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "165c0fcb37ded3c28de7a7b680c243d6eeda2c93", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
54461289
pes2o/s2orc
v3-fos-license
Association of invasive treatment and lower mortality of patients ≥ 80 years with acute myocardial infarction: a propensity-matched analysis Objective To investigate whether an invasive strategy was associated with lower mortality in Chinese patients ≥ 80 years with acute myocardial infarction (AMI). Methods We used retrospective data from our center between 2013 and 2017. During a median of 17.4 (interquartile range: 7.3–32.3) months of follow-up, 120 deaths were recorded among 514 consecutive patients ≥ 80 years with AMI. The patients were divided into two groups: an invasive treatment group (IT group, n = 269) and a conservative treatment group (CT group, n = 245), which were then also compared after propensity score matching. Results Higher mortality was found in the CT group than in the IT group. Cox proportional hazards regression analysis showed that invasive treatment was associated with lower mortality in patients ≥ 80 years. Moreover, the results revealed that the patients in the IT group had lower in-hospital mortality (3.35% vs. 9.39%, P = 0.005). In addition, Kaplan-Meier analysis revealed that mortality was significantly lower in the IT group than in the CT group in both the entire and the propensity-matched cohort analyses (P < 0.001 for both). Conclusions Our data suggest that invasive treatment is associated with lower mortality in Chinese patients ≥ 80 years with AMI, which is consistent with previous studies in both ST-elevation myocardial infarction (STEMI) and non-STEMI (NSTEMI) patients. Introduction The global population is expected to grow from 6 billion at present to about 9.4 billion by 2050, with ageing as the most pressing population issue facing humanity in the near future. [1,2] In very elderly patients, cardiovascular diseases now count among the principal causes of death. [3] Acute myocardial infarction (AMI) results in more complications, poorer clinical outcomes and incremental mortality in patients ≥ 80 years compared with younger patients, [4-7] and advanced age is associated with increased mortality in AMI. [10] Primary percutaneous coronary intervention (PCI) is recommended by present guidelines in patients presenting with AMI. Should very elderly patients admitted with AMI be accepted for a routine invasive approach? Limited data are available on the outcome of PCI in patients ≥ 80 years, because the subgroup of patients aged 80 or older is under-represented in randomized controlled trials comparing the effects of an invasive strategy and a conservative medical strategy, and the benefits and disadvantages regarding in-hospital and follow-up mortality in this particular population remain uncertain. [8-11] The present study aimed to identify whether invasive treatment could improve the outcomes of patients ≥ 80 years with AMI over a follow-up period of nearly two years compared with conservative treatment. We put forward the hypothesis that invasive treatment, including PCI and coronary artery bypass graft (CABG), is associated with lower in-hospital and intermediate-term mortality in patients ≥ 80 years with AMI. Study design Participants in the present study were divided into two groups according to follow-up survival status: an alive group and a death group.
Then, in order to investigate whether invasive treatment was associated with a mortality benefit, we divided all patients into two groups according to treatment strategy: an invasive treatment group (early coronary angiography with immediate assessment for PCI and CABG) and a conservative treatment group (optimal medical treatment alone). Univariate and multivariate Cox proportional hazards regression analyses of mortality were carried out to examine the independent risk factors of death. Study population Our study abided by the Declaration of Helsinki, and the study protocol was approved by the ethical review board of Fu Wai Hospital & National Center for Cardiovascular Diseases, Beijing, China. Written informed consent was obtained from all patients. From January 2012 to August 2017, 514 consecutive patients ≥ 80 years with AMI were enrolled in the present study. We diagnosed ST-elevation myocardial infarction (STEMI) according to the third universal definition of myocardial infarction. [12] Briefly, the diagnosis was made when positive myocardial markers of necrosis (MB fraction of creatine kinase, preferably cardiac troponin I and T) with a typical temporal evolution were detected in association with at least one of the following evidences of ischemia: acute onset of typical ischemic chest pain lasting 20 min or more; imaging evidence of new loss of viable myocardium or new regional wall motion abnormality; ST-segment elevation of at least 1 mm in two or more contiguous leads, or development of pathological Q waves on the electrocardiogram with dynamic change, or new left bundle branch block. [12] The diagnosis of NSTEMI was made in the presence of typical chest pain or shortness of breath; an electrocardiogram showing normal findings or pathological Q waves, persistent or dynamic ST depression > 0.5 mm, or new deep T-wave inversion in more than two contiguous leads; or imaging evidence of new loss of viable myocardium or new regional wall motion abnormality, together with an elevation of troponin T or I. [12] Data collection The demographic and clinical characteristics of all patients were recorded: age, gender, body mass index (BMI); past history, including history of hypercholesterolemia, diabetes, hypertension, and smoking status; prior angina, prior myocardial infarction, prior PCI and prior CABG; [13] comorbidities: prior stroke and history of chronic kidney disease; and clinical presentation: Killip class, [14] blood pressure, heart rate, in-hospital medications and invasive procedures. Chronic kidney disease was defined as an estimated glomerular filtration rate (eGFR) < 60 mL/min per 1.73 m². [15] Stroke was defined as a new focal neurological deficit of vascular origin lasting more than 24 h. [16] Gensini score calculation The severity of coronary artery disease was evaluated according to the Gensini score (GS). The score was computed by assigning a severity score to each coronary stenosis, and the GS was expressed as the sum of the scores for all coronary arteries. [17] As demonstrated by numerous studies, each lesion's contribution is approximately equal to the score for its luminal narrowing multiplied by a factor for its geographic importance.
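A minimal sketch of this computation is given below. The severity points per stenosis category and the example segment weights follow the commonly used Gensini convention (e.g., x2.5 for the proximal LAD, x1 for the mid RCA); the lesion list is invented for illustration.

# Gensini score: sum over lesions of (severity points) x (segment weight).
SEVERITY = [(25, 1), (50, 2), (75, 4), (90, 8), (99, 16), (100, 32)]

def severity_points(stenosis_pct):
    """Map a % luminal narrowing onto the conventional Gensini points."""
    for cutoff, points in SEVERITY:
        if stenosis_pct <= cutoff:
            return points
    raise ValueError("stenosis must be 0-100%")

def gensini_score(lesions):
    """lesions: iterable of (stenosis_pct, segment_weight) pairs."""
    return sum(severity_points(s) * w for s, w in lesions)

# e.g., a 90% proximal LAD lesion (weight 2.5) plus a 75% mid RCA lesion
# (weight 1.0): 8 * 2.5 + 4 * 1.0 = 24.0
print(gensini_score([(90, 2.5), (75, 1.0)]))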
Follow-up We prospectively followed up discharged patients every six months by standardized telephone interviews conducted by trained doctors who did not know the purpose of our research in advance. We defined the primary endpoint as the occurrence of all-cause death during follow-up. The secondary follow-up clinical endpoints included non-fatal MI, unstable angina requiring hospitalization, stroke, and unexpected coronary revascularization (including PCI and CABG) because of clinical deterioration. Non-fatal MI was defined as an increased myocardial zymogram along with typical chest pain, typical electrocardiographic changes, or new dysfunction of ventricular wall motion. For deceased patients, data were collected from their families and hospitals. Statistical analysis Continuous variables were presented as the mean ± SD or median with interquartile range and were assessed by Student's t-tests, one-way ANOVA, or Mann-Whitney U tests as appropriate. Categorical variables were expressed as numbers and percentages and were examined by chi-square tests. Because patients did not randomly receive either type of treatment, the clinical follow-up outcomes of the two groups were compared using propensity score (PS) matching. A logistic regression analysis was used to estimate propensity scores among patients receiving invasive or conservative treatment, with baseline and clinical variables included as predictors. Variables associated with invasive treatment included age, BMI, hypertension, hyperlipidemia, diabetes, active smoking, prior myocardial infarction and family history. Patients receiving invasive treatment were matched in a 1:1 fashion to patients receiving conservative treatment on the strength of each patient's estimated propensity score (the match tolerance was 0.01). The odds ratio (OR) of in-hospital mortality and its 95% CI for patients receiving invasive treatment were calculated by logistic regression. The hazard ratio (HR) of follow-up endpoints and its 95% CI were calculated for the invasive versus the conservative strategy by univariate and multivariate Cox proportional hazards regression analyses. HRs were then adjusted for age (including age > 85 years), sex, current smoking, hypertension, hypercholesterolemia, diabetes, prior myocardial infarction, prior stroke, systolic blood pressure < 100 mmHg, heart rate < 100 beats/min, hemoglobin < 10 g/dL, Killip class, chronic kidney disease and Gensini score, all of which may confound the relationship between invasive treatment and follow-up mortality. Event-free survival curves of the invasive and conservative treatment groups were assessed by the Kaplan-Meier method and compared by the log-rank test in both the entire cohort and the matched cohort. The statistical analysis was performed with SPSS version 22.0 software (SPSS Inc., Chicago, IL). For all analyses, a two-tailed P < 0.05 was considered significant.
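A simplified Python sketch of the 1:1 propensity score matching described above (logistic-regression scores, greedy nearest-neighbor matching without replacement, match tolerance 0.01) is shown below; the column names are hypothetical placeholders for the predictors listed in the text.

# Greedy 1:1 nearest-neighbor propensity score matching with a caliper.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ps_match(df, treated_col, covariates, tolerance=0.01):
    # Estimate each patient's probability of receiving invasive treatment.
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treated_col])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])
    treated = df[df[treated_col] == 1]
    controls = df[df[treated_col] == 0].copy()
    pairs = []
    for idx, row in treated.iterrows():
        if controls.empty:
            break
        gaps = (controls["ps"] - row["ps"]).abs()
        best = gaps.idxmin()
        if gaps[best] <= tolerance:        # enforce the match tolerance (0.01)
            pairs.append((idx, best))
            controls = controls.drop(best)  # matching without replacement
    return pairs

# usage sketch, with hypothetical column names:
# pairs = ps_match(df, "invasive", ["age", "bmi", "hypertension",
#     "hyperlipidemia", "diabetes", "smoking", "prior_mi", "family_history"])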
Baseline characteristics of patients with different survival status From January 2012 to August 2017, the present study enrolled 514 eligible patients ≥ 80 years with AMI. The median age was 82 years, and the age range was 80-94 years. The baseline characteristics of the patients, stratified by follow-up survival status, are reported in Table 1. No significant difference in age or sex was found between the groups. However, the dead group had a lower BMI (23.15 ± 3.33 vs. 24.00 ± 3.51, P = 0.021) and higher percentages of hypertension (78.63% vs. 69.27%, P = 0.049) and hyperlipidemia (73.50% vs. 82.12%, P = 0.04). In addition, the percentage of severe heart failure was considerably higher in the dead group (13.68% vs. 3.78%, P < 0.001). Patients ≥ 80 years in the alive group had a lower Killip classification (1.0 vs. 2.0, P < 0.001), a higher Gensini score (43.01 ± 38.16 vs. 31.39 ± 45.95, P < 0.006) and a lower rate of atrioventricular block (11.97% vs. 5.29%, P = 0.012). Patients in the alive group more often used angiotensin-converting enzyme inhibitors/angiotensin receptor blockers (ACEI/ARB), statins, nitrates and IIB-IIIA antagonists than those in the dead group. The use of beta-blockers, Ca2+ channel blockers and aspirin did not differ significantly between the two groups. The patients in the dead group had higher creatinine, higher NT-proBNP and a higher percentage of prior MI, while they had lower hemoglobin and lower FT3 and were less likely to undergo coronary angiography. Baseline characteristics of patients with different treatment strategies All patients were then allocated to two groups: the invasive treatment group (n = 269) and the conservative treatment group (n = 245). In the invasive treatment group, 252 (93.68%) patients underwent PCI, with first-generation drug-eluting stents implanted if necessary, and 161 patients (59.85%) received complete revascularization. Seventeen (6.32%) patients underwent coronary artery bypass grafting. Patients ≥ 80 years who received invasive treatment had higher total cholesterol (4.13 ± 1.14 vs. 3.09 ± 0.94, P < 0.001), higher low-density lipoprotein cholesterol (LDL-C) (1.76 ± 0.58 vs. 2.49 ± 0.95, P < 0.01), higher percentages of STEMI and severe heart failure, a higher Killip classification, a greater number of diseased coronary vessels and more use of angiotensin-converting enzyme inhibitors, while they had lower percentages of atrial fibrillation, less history of prior AMI and prior CABG, and lower rates of NSTEMI. However, no significant differences were observed in age, sex, or the use of aspirin, IIB-IIIA antagonists, statins, nitrates, beta-blockers, and Ca2+ channel blockers (Table 2). A propensity-matched cohort of 216 patients, adjusted for the determinants of invasive treatment, was generated by propensity score matching (Table 2). Predictors of the mortality of patients ≥ 80 years with myocardial infarction In the propensity-matched cohort, univariate Cox proportional hazards regression analysis found that invasive treatment was associated with lower intermediate-term mortality in patients ≥ 80 years (HR: 0.36, 95% CI: 0.24-0.53, P < 0.001). We also found that hypercholesterolemia, hemoglobin < 10 g/L, a higher Killip class, and a higher Gensini score were risk factors for mortality. Therefore, multivariate Cox proportional hazards regression analysis was performed to explore the risk factors associated with mortality in patients ≥ 80 years with AMI. After full adjustment for potential risk factors, including age > 85 years, sex, current smoking, hypertension, hypercholesterolemia, diabetes, prior myocardial infarction, prior stroke, systolic blood pressure < 100 mmHg, heart rate < 100 beats/min, hemoglobin < 10 g/L, Killip class, chronic kidney disease, Gensini score and revascularization, invasive treatment was found to be independently associated with lower all-cause mortality (HR: 0.48, 95% CI: 0.26-0.89, P = 0.01) (Table 3).
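As an illustration of the multivariate Cox model behind Table 3, the following hedged sketch uses the lifelines Python library; the data frame and its column names are hypothetical stand-ins for the study variables.

# Multivariate Cox proportional hazards model with lifelines.
import pandas as pd
from lifelines import CoxPHFitter

# df: one row per patient, with follow-up time (months), a death indicator,
# the invasive-treatment flag and the adjustment covariates named above.
# df = pd.read_csv("ami_over80.csv")  # hypothetical file

def fit_cox(df):
    cph = CoxPHFitter()
    cph.fit(df, duration_col="followup_months", event_col="death")
    return cph

# usage sketch with hypothetical column names:
# cph = fit_cox(df[["followup_months", "death", "invasive", "age_gt_85",
#                   "sex", "killip", "gensini", "ckd", "hgb_lt_10"]])
# cph.print_summary()  # hazard ratios with 95% CIs, e.g. HR for 'invasive'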
The mortality of the patients in different treatment groups For patients ≥ 80 years with AMI, the difference in the composite endpoint of myocardial infarction, need for urgent revascularization, stroke, or death did not reach statistical significance between the two groups. However, invasive treatment was associated with lower in-hospital mortality (OR: 0.34, 95% CI: 0.15-0.74, P = 0.007). For patients with STEMI, invasive treatment was found to be associated with lower in-hospital mortality (OR: 0.38, 95% CI: 0.16-0.94, P < 0.05), while for patients with NSTEMI, the association between invasive treatment and in-hospital mortality (OR: 0.14, 95% CI: 0.02-1.14, P = 0.066) did not reach statistical significance. As shown in Figure 1, the intermediate-term mortality rate was significantly lower for patients ≥ 80 years who received invasive treatment than for patients who received conservative treatment, in both the STEMI and NSTEMI groups. Figure 2 depicts the Kaplan-Meier survival curves. A higher rate of survival among patients ≥ 80 years who received invasive treatment was noted during the follow-up period in both the entire and the propensity-adjusted cohort (P < 0.001 for both). Discussion The present study, with a median follow-up of 17.4 (IQR: 7.3-32.3) months in the very elderly, showed that invasive treatment (PCI and CABG) was associated with lower intermediate-term all-cause mortality in comparison with a conservative strategy consisting of optimal medical treatment alone. The results were seen in patients with both STEMI and NSTEMI. The China PEACE-Retrospective Acute Myocardial Infarction Study [18] showed that the rates of patients receiving reperfusion therapies in China were much lower than those in the USA or Europe, [19,20] and the rate of primary percutaneous coronary intervention was still low. [18] Furthermore, as the general population is ageing, patients ≥ 80 years account for a growing proportion of the population with AMI. PCI for most patients with STEMI [21,22] or with non-ST-elevation acute coronary syndrome (NSTE-ACS) [22] is recommended by the current guidelines; however, the very elderly with AMI, especially patients ≥ 80 years, seldom receive the guideline-recommended invasive treatment and are treated more conservatively than their younger counterparts, because they are more likely to have atypical symptoms, frequently have more complications, and have a higher rate of death. [23,24] Meanwhile, these patients have been scarcely represented in clinical trials comparing treatment strategies in AMI. [25] Thus, the treatment strategies for elderly patients with AMI, especially patients ≥ 80 years, currently lack evidence. Compared with the ACOS registry [26] and PL-ACS, [27] the present study had lower rates of hypertension, diabetes, previous myocardial infarction and hypercholesterolemia, whereas the rates of these comorbidities in our study were higher than in the After Eighty registry. [28] In addition, the rate of previous stroke was the highest among these studies. In accordance with the results of the present study, the ACOS study [26] and the PL-ACS registry study [27] showed a reduction in in-hospital mortality in the invasive therapy group compared with the conservative treatment group. The After Eighty study showed that an invasive strategy had an advantage over a conservative strategy in reducing composite events, including stroke, MI, need for urgent revascularization and death, in patients aged 80 years or older presenting with NSTEMI or unstable angina. [28] On the contrary, the Italian Elderly ACS study showed that there were no differences between routine invasive therapy and initial medical management at one year, and no statistically significant difference in in-hospital mortality between the two groups. [29]
Discussion

The present study, with a median follow-up of 17.4 (IQR: 7.3-32.3) months in the very elderly, showed that invasive treatment (PCI and CABG) was associated with lower intermediate-term all-cause mortality in comparison with a conservative strategy consisting of optimum medical treatment only. The results were seen in patients with both STEMI and NSTEMI. The China PEACE-Retrospective Acute Myocardial Infarction Study [18] showed that rates of patients receiving reperfusion therapies in China were much lower than those in the USA or Europe, [19,20] and the rate of primary percutaneous coronary intervention was still low. [18] Furthermore, as the general population is ageing, patients ≥ 80 years account for a growing proportion of the population with AMI. PCI for most patients with STEMI [21,22] or with non-ST-elevation acute coronary syndrome (NSTE-ACS) [22] is recommended by the current guidelines; however, the very elderly with AMI, especially patients ≥ 80 years, seldom receive the guideline-recommended invasive treatment and are treated more conservatively than their younger counterparts, because they more often have atypical symptoms, frequently have more complications, and have a higher rate of death. [23,24] Meanwhile, these patients have been scarcely represented in clinical trials comparing treatment strategies in AMI. [25] Thus, the treatment strategies for elderly patients with AMI, especially patients ≥ 80 years, lack evidence at present.

Compared with the ACOS registry [26] and PL-ACS, [27] the present study had lower rates of hypertension, diabetes, previous myocardial infarction and hypercholesterolemia, whereas the rates of these comorbidities in our study were higher than in the After Eighty Registry. [28] Besides, the rate of previous stroke was the highest among them. In accordance with the results of the present study, the ACOS study [26] and the PL-ACS Registry study [27] showed a reduction of in-hospital mortality in the invasive therapy group compared with the conservative treatment group. The After Eighty study showed that an invasive strategy had an advantage over a conservative strategy in reducing composite events, including stroke, MI, need for urgent revascularization and death, for patients aged 80 years or older presenting with NSTEMI or unstable angina. [28] On the contrary, the Italian Elderly ACS study showed that there were no differences between routine invasive therapy and initial medical management at one year, and no statistically significant differences in in-hospital mortality between the two groups. [29] The present study found that invasive treatment was associated with lower intermediate-term mortality for patients aged 80 or older with NSTEMI, but there was no statistical significance between the invasive treatment group and the conservative treatment group for the composite endpoint of myocardial infarction, need for urgent revascularization, stroke, and death. Compared with other trials, the lower rate of invasive treatment (52.34%) and the higher rate of death (18.26%) probably explain these differences.

As for elderly patients with STEMI, several studies have found that the mortality of patients aged 80 or more with STEMI was associated with many risk factors, including age, history of hypertension, [30] history of diabetes mellitus, long-standing ischemic heart disease (prior myocardial infarction), lower heart rate, Killip class ≥ 2, [30] lower hemoglobin, renal dysfunction, and revascularization. Among them, revascularization was one of the most robust predictors of short-term [31,32] and intermediate-term mortality. [33-35] Even patients aged ≥ 85 years with STEMI who underwent invasive management have better short- and long-term outcomes, [36-38] and aggressive treatment is associated with excellent quality of life. [39] Our results are in accordance with the previous studies, showing that better intermediate-term survival and lower in-hospital mortality are associated with invasive treatment of STEMI patients ≥ 80 years. Thus, should all very elderly patients with AMI go through a routine invasive process? The data of the present study indicate that an invasive strategy might be associated with a better prognosis in AMI patients aged 80 or older; certainly, each patient should receive individualized treatment, and randomized controlled trials of the two treatment strategies are needed to verify this hypothesis.

Study limitations

There are some limitations in our study. Firstly, as an observational study, there may be bias from non-random assignment of exposure; unmeasured confounders like frailty, cognitive status and physical performance might influence the choice of treatment in patients with AMI, although selection bias and other significant differences observed between the two groups can be balanced to some extent by propensity-score matching. Furthermore, bleeding events were not registered in evaluating the intermediate-term outcomes of subjects receiving the invasive strategy; however, results of the After Eighty study [28] showed no difference in bleeding complications between the invasive strategy and the conservative strategy. The time to PCI and the type of PCI are also associated with long-term clinical outcome; [40] we did not analyze the time to PCI or the type of PCI (primary PCI, rescue PCI, delayed PCI) in patients who underwent revascularization, which may be a confounder in our study.

Conclusions

This study indicates that an invasive strategy including PCI and CABG is associated with lower intermediate-term mortality compared with a conservative strategy of optimal medical treatment alone for patients older than 80 years presenting with or without ST elevation. Moreover, patients ≥ 80 years with STEMI who received invasive treatment during hospitalization had a lower risk of in-hospital mortality.
2018-12-16T18:46:01.032Z
2018-11-01T00:00:00.000
{ "year": 2018, "sha1": "cd090523c7c83ce46579cdd4a629b2efb283c880", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "cd090523c7c83ce46579cdd4a629b2efb283c880", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
115160682
pes2o/s2orc
v3-fos-license
Winning Approach: Selection Criteria for Competitive Battery Powered Racing Vehicles

Electric vehicles are going to be a game changer in motorsport due to their high power, instant torque, indifference to altitude, superior traction control, and moldability of the drivetrain for aerodynamic advantage or for optimal weight distribution. The main limitation for battery powered vehicles is the mass of the battery, which grows directly with race duration. At the current technology level, battery powered racing vehicles can be top contenders in most races that don't exceed about 10 minutes. The 2015 victory by Rhys Millen in the Drive eO electric car in the legendary and incredibly challenging Pike's Peak International Hill Climb clearly demonstrates the potential. The battery powered KillaJoule being the world's fastest sidecar motorcycle also speaks for itself.

Introduction

Electric vehicles are going to be a game changer in motorsport due to their high power, instant torque, indifference to altitude, superior traction control, and moldability of the drivetrain for aerodynamic advantage or for optimal weight distribution. The main limitation for battery powered vehicles is the mass of the battery, which grows directly with race duration. At the current technology level, battery powered racing vehicles can be top contenders in most races that don't exceed about 10 minutes. The 2015 victory by Rhys Millen in the Drive eO electric car in the legendary and incredibly challenging Pike's Peak International Hill Climb clearly demonstrates the potential. The battery powered KillaJoule being the world's fastest sidecar motorcycle also speaks for itself.

Racing is a great application to demonstrate the capabilities of electric drive and to change the general public's perception of energy efficient technology. It is also a great arena to showcase new battery technology and other drivetrain components. However, to get the maximum benefit from your racing effort, it is important to select a race discipline and race format where battery powered electric vehicles (EVs) can be competitive with internal combustion vehicles (ICEs). This paper will present a systematic approach and criteria for an optimal selection of race discipline, format, and vehicle type in order to develop a battery powered racing vehicle that can compete head-to-head with ICE vehicles. It will also briefly cover budget, overall goals of the racing effort, sponsorship, and publicity strategy.

Assumptions

This paper is based on the assumption that you want to race to be competitive, which we define as setting a record or placing in the top half of the field. Many people have their hearts set on a specific vehicle or race, or just want to participate for the joy of it. If you are one of these, you may still find this paper useful, but you are not in the main target group. This paper is for the individuals and teams that want to showcase a certain component, or want to promote the capabilities of electric vehicles in general, or simply want to win. The purpose of our own racing effort, which will be covered in more detail below, is to showcase the capabilities of EVs. We call it "eco-activism in disguise". Our mission is to show that eco-friendly can be fast and fun, and hopefully make people that otherwise wouldn't be interested in low-emission vehicles aware of their potential. Speed is a great way of showing the potential of battery power, because fast is always in fashion!
Figure 1: Land speed racing provides a perfect race format for battery powered vehicles. The battery powered KillaJoule, piloted by Eva Håkansson, is the world's fastest sidecar motorcycle. Photo courtesy of Scooter Grubb.

Strengths and weaknesses of battery powered vehicles

In the process of selecting a suitable race discipline and vehicle type, the strengths and weaknesses of EVs need to be considered. The list below summarizes the main strengths and weaknesses of EVs in a race application. Some of them are obvious, while some are more subtle.

Strengths
- Very high power drivetrains available.
- Instant torque from zero rpm.
- Reduced driver workload = the driver can focus on the track and the race.
- Greatly reduced number of moving parts = less wear of parts, higher reliability, less maintenance.
- Low maintenance = small crew requirements.
- Silent drivetrain = allows the driver to hear competitors, and can allow for track use during noise abatement hours or for use of locations otherwise not available for racing activities.
- No gain/loss of HP due to barometric pressure, humidity, or air temperature.
- No fuel starvation due to vehicle acceleration, turns, etc.

Weaknesses
- Limited run time per charge due to much lower specific energy in batteries compared to liquid fuels = only competitive with ICEs in race formats of short duration.
- Relatively long time required for recharge/battery swap compared to refueling an ICE.
- Novelty of components and systems = difficult to find experienced crew and suppliers.
- No local spare parts suppliers.
- Possible EMI/RFI issues with controls, instruments, radios, etc.
- Complex electronics.
- Spare battery pack shipping restrictions = logistics for racing at a distant location can be complicated, slow, and expensive.

The current run-time limitation: approximately 10 minutes

The main limitation for EVs in race applications is currently the limited run time per charge. This is solely due to the low specific energy of batteries compared to liquid fuels. This forces the selection of a short-duration race format. Some race disciplines, such as land speed racing and hill climb, have inherently short durations. Others, such as traditional circuit racing, come in many different formats, from short sprint races to 24 hour endurance races. In the latter case, we will show that battery power can only be competitive in the short race format with the current state-of-the-art battery technology.

In order to be competitive in circuit racing and similar competitions with frequent turning and braking, the mass of the EV needs to be similar to the mass of the ICE competitor. While some of the possible strengths of EVs, such as better weight distribution, instant torque, and superior traction control, might make up for a higher mass, the vehicle cannot be much heavier without sacrificing performance.

The current run-time limitation can be calculated by comparing the mass of the EV drivetrain including the battery with the mass of the ICE drivetrain including the fuel. A comparison for a Le Mans style car can be found in Figure 3.
The data comes from the Drayson B12/69EV car, which during its lifetime has been fitted both with an internal combustion engine and an electric drivetrain. The Drayson B12/69EV (Figure 2) is a "Le Mans prototype type 1" (LMP1) car. It is a single-seat, carbon fiber monocoque car, purpose built for racing. It had originally 700 HP when equipped with a V8 engine fueled by E85. The power was increased to 1100 HP with the electric drivetrain. In its ICE version, it participated in the Le Mans 24 h race in 2010. In its EV version, it has set several FIA speed records and successfully participated in hill climb as well as other short duration events with its owner and driver Lord Paul Drayson. It has a top speed of 219.1 mph (352.3 km/h). The mass of the drivetrain, excluding battery/fuel tank, is lower in the EV version than it was in the ICE version. The combined mass of the motors, controllers, wiring and all other components except the battery of the 1100 HP electric drivetrain is about 172 kg. The mass of the original engine, transmission, and all other components except the fuel in the 700 HP drivetrain was 302 kg [1]. Despite the electric drivetrain having an efficiency four times higher than the ICE drivetrain, the mass of the battery pack per unit energy is almost 100 times higher in the EV version: 11 kg/kWh for the battery on a system level versus 0.12 kg/kWh for E85.

The difference in specific energy can be improved by the selection of a higher specific energy battery type. However, unlike fuel, there is a significant tradeoff between power and energy. The battery pack in the Drayson B12/69EV was built for high power, safety, and reliability. This resulted in a relatively low specific energy. The car's duration per charge would be doubled if the specific energy of the battery pack were doubled, but this would come with a tradeoff in power and safety. This tradeoff would have made the car unsuitable for its intended purpose of setting lap and speed records.

Figure 3 illustrates how the total mass of the drivetrain in a car like the Drayson B12/69EV would increase with required race duration. The mass of the EV drivetrain is scaled down to 700 HP to be comparable with the ICE version. The dotted blue line represents an ICE drivetrain fueled with E85. The solid green line represents a drivetrain with a battery pack with 100 Wh/kg on the pack level, while the dotted green line represents a battery pack with twice the specific energy. Advances in battery technology will tilt the lines further in favor of the EVs.

With the currently available technology, we can conclude that an EV circuit racing car of this type can be competitive with an ICE car if the duration of the race does not exceed 6 to 13 minutes. This conclusion is purely based on the mass of the drivetrain. If other factors such as superior traction control and mass distribution, as well as the availability of full torque from zero rpm, are taken into account, EVs can be competitive even if the mass is slightly higher than the ICE equivalent. Conversely, Figure 3 assumes that all energy in the battery pack is used. In reality, some margin is needed. You should design your vehicle with the goal of having at least 10-20% energy left in your battery pack at the end of the race.
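To make the mass argument concrete, the short script below reproduces the spirit of Figure 3. It is a back-of-the-envelope sketch, not the paper's model: the drivetrain masses and specific-energy figures come from the text above, while the efficiencies and the average-to-peak power ratio are assumptions chosen for illustration.

```python
# Back-of-the-envelope: total drivetrain mass (incl. fuel or battery) vs. race
# duration, in the spirit of Figure 3. Values marked "paper" come from the text;
# everything else is an assumption.
HP_TO_KW = 0.7457

P_PEAK_KW   = 700 * HP_TO_KW      # comparison point used in the paper: 700 HP
M_EV_DRIVE  = 172 * 700 / 1100    # EV drivetrain w/o battery, scaled to 700 HP (paper)
M_ICE_DRIVE = 302                 # ICE drivetrain w/o fuel, kg (paper)
BATT_KG_PER_KWH = 1 / 0.100       # 100 Wh/kg pack level (paper's solid green line)
FUEL_KG_PER_KWH = 0.12            # E85 (paper)
ETA_EV, ETA_ICE = 0.90, 0.225     # assumed; paper says EV is roughly 4x ICE efficiency
DUTY = 0.30                       # assumed average/peak power ratio over a race

def masses(minutes):
    """Return (EV, ICE) total drivetrain mass in kg for a race of given length."""
    e_wheel_kwh = DUTY * P_PEAK_KW * minutes / 60.0   # energy delivered at the wheels
    ev  = M_EV_DRIVE  + BATT_KG_PER_KWH * e_wheel_kwh / ETA_EV
    ice = M_ICE_DRIVE + FUEL_KG_PER_KWH * e_wheel_kwh / ETA_ICE
    return ev, ice

for t in (2, 5, 7, 10, 15, 20):
    ev, ice = masses(t)
    tag = "EV lighter" if ev < ice else "ICE lighter"
    print(f"{t:3d} min: EV {ev:5.0f} kg, ICE {ice:5.0f} kg  ({tag})")
```

With these assumed values the crossover lands near 7 minutes, and doubling the pack-level specific energy to 200 Wh/kg pushes it to roughly 14 minutes, bracketing the 6 to 13 minute window quoted above.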
Based on our own experience and these kinds of calculations, we have established the rule of thumb of ~10 minutes as the cut-off point where EVs currently can be competitive with ICEs. As the battery technology improves, this duration window will increase.

3 Selection of race discipline, format, and vehicle type

Race disciplines and formats

There are quite a few options available for race disciplines and formats with a duration of about 10 minutes or less. Some of the most commonly known race disciplines that fit this requirement are the following:

Specific for motorcycles: …

Other types of vehicles:
- Snowmobile drag racing
- Airplane pylon races (<10 min)

Some of the suggested race formats may have heats longer than 10 minutes, but you will need to stay at or below 10 minutes to be competitive. It is also important to remember that racing in snow and sand consumes a lot of energy, and you might have to decrease the duration further to be competitive on these surfaces.

It is obvious that NASCAR, Le Mans 24 h, and endurance racing are not suitable race formats for EVs with the currently available battery technology. Although these types of races can theoretically be finished through quick changes of battery packs, the EV will not be competitive with ICE vehicles.

We should mention that there may be regulatory issues preventing EVs from participating in some of these race disciplines, but that is outside the scope of this paper. That is politics, not technology. In general, the more competitive you are, the better the chance that you will get to race. Racing is always show business, and racing organizers are typically quite generous if you give them a good show. You might have to begin by running exhibition, but when you have demonstrated that your vehicle is competitive (or can "kick ass", as racers would express it), you are typically welcomed in.

Vehicle type

There can be many different reasons to choose a particular vehicle type. A common reason is the love of a specific kind of vehicle, such as motorcycles, cars, boats, or airplanes. No matter what your preference may be, or what vehicle suits your goals, there are some key criteria to keep in mind. The most important is that power is expensive. The motor(s), motor controller(s), and the battery pack(s) are often the most expensive parts of a racing vehicle. In contrast to other parts of the vehicle, such as the frame and bodywork, many of the drivetrain components such as battery cells, motor controllers, and high power motors require highly specialized manufacturing equipment and are almost impossible to manufacture yourself. The second-hand market is also currently close to non-existent. However, the opportunity for sponsorship or discounts can be relatively good if you have an interesting project and a good reputation.

To maximize the performance from the drivetrain you can afford, you want to have the smallest practical vehicle. Let's say that you can only afford a 400 HP drivetrain, but want to set a speed record. In this case, you should consider putting that drive package in a motorcycle rather than in a truck, as that will give you a higher top speed. The cost of support equipment will also grow with the vehicle size, as do shipping costs and team size. In general, it is much less expensive to race a motorcycle than a car, something to keep in mind if you are on a tight budget.
Sanctioning bodies and existing records

If your goal is to set a record, you need to pay particular attention to the available sanctioning bodies. A sanctioning body is the organization that certifies and keeps track of the records. It is also the organization that establishes the class and safety rules. In many cases, but not all, they also organize the actual event. This means that there can be several different "official" records for the same vehicle class, registered by different sanctioning bodies. Confusing? Yes, absolutely, so pay careful attention or you can end up spending a lot of money and time without achieving your goal of an official record.

There are many forms of records, but some of the most commonly recognized records are land speed records. These are typically measured over a so-called flying mile (or kilometer), where your average speed is measured over a mile (or kilometer). In the standing mile (or kilometer), your top speed is measured at the end of the mile (or kilometer). Another form of record is the ¼ mile (or 1/8 mile) records in drag racing, where both the time elapsed (E.T.) to reach the finishing line and your speed at the end of the ¼ mile (or 1/8 mile) track are measured. The EV drag racing records are registered by the National Electric Drag Racing Association (NEDRA) in the United States. Records set in other countries can also be submitted to NEDRA. There are other organizations registering EV drag racing records, but NEDRA has the largest number of active racers. In case you plan for an EV drag racing vehicle, start by studying the existing records to see what your competition is and where there may be records easy to break. The drag racing records are categorized both by vehicle type and battery voltage. There are also special classes for student teams. You will notice that the top records are typically in the highest voltage classes. By choosing a slightly lower voltage class, you may find a record that is much easier (and less expensive) to achieve.

Land speed racing has many more available sanctioning bodies than drag racing. One of the never-ending debates within the land speed racing community is which records count as world records. We can only speak for ourselves, but there appears to be some general consensus that you can call a land speed record a world record only if it is sanctioned by the Federation Internationale de Motocyclisme (FIM) for motorcycles or the Federation Internationale de l'Automobile (FIA) for cars. However, if you want to call your vehicle the world's fastest of a certain kind, your record has to exceed all other records (world records, international records, and national records) for this kind of vehicle. The KillaJoule is the world's fastest electric motorcycle with a national AMA (American Motorcyclist Association) record of 240.726 mph (387.4 km/h) set in August 2014. Its official world record sanctioned by FIM is lower. If your racing goal just says "land speed record", then you can, by careful choices, find a suitable class with a low or open record. An open record refers to a class where there are no previous records. This means that if you pass the technical inspection, comply with the class rules, and make two complete runs, you will be the new record holder! That's cool, isn't it? An official record, no matter if it is national or international, always makes it easier to attract new sponsors (and keep your existing sponsors happy).

3.4 Make sure your vehicle actually fits in a class!

It may sound surprising, but it is extremely easy to build a racing vehicle that won't fit in any competition class or record category. Your very first purchase (or download, if you are lucky) should be the rulebook for the racing discipline and sanctioning body that you are considering. Many organizations charge for the rule book, but at US$ 10 or so, it is worth every penny. You won't be happy, and your sponsors certainly won't be happy, when you discover at the technical inspection that your vehicle doesn't pass the rules and you have to go home without having made a single run on the track. (It is surprising how often this happens.)

4 Resources and "business plan"

Even if racing efforts rarely make money, or at least not directly, you do need a "business plan". The business plan will help you narrow down your options for racing disciplines, race formats, and vehicle types. You will also need it to help attract sponsors. Even if it initially may be written on the back of an old envelope, your business plan needs to include at least the following items:

- Passion/knowledge/expertise/team (what you already have).
- Overall goal (what you want to accomplish)
  o Showcase a specific component?
  o Personal goal, like beating somebody? Set a record?
  o Be the first to do X?
- Budget and other resources, including existing or potential sponsors. (There is a limit to every budget, no matter how large of a team you have.)
- Time frame (for example, certain races only go once a year).

All these items are at a similar level in the decision tree, and they have to be considered simultaneously. However, certain items such as budget and time frame may set firm limitations for your options. As much as we wish it wasn't the case, racing costs! And it can cost a lot. There is no limit to how much you can spend on a racing vehicle. Well, there is, and that is the over-draft limit of your checking account. In order to fulfill your electric racing dream, you have to aim for a goal that is in line with your budget. No point starting to build a 5,000 HP streamliner car with US$ 1,000 in your budget. You won't even get a set of tires for that amount of money.
When you have decided how much you are willing to spend, at least double it. Our experience shows that a racing effort will end up costing twice as much as you predicted, even if you take into consideration that it will be twice as expensive and you try to stay on budget. You can either set your budget and then cut it in half, deciding that you are only allowed to spend half of it on your vehicle, or you can simply double your budget. It is purely your choice, but you should count on 100% contingency. This number may be higher if your budget is really low. Even if you can buy a US$ 1,000 motorcycle on Craigslist and convert it using a forklift motor and a starter battery, you will still need a truck and trailer, fuel, hotel, food, helmet, racing suit and a gazillion other things. Your US$ 1,000 motorcycle will quickly turn into a US$ 10,000 adventure if you don't watch your pennies very carefully. On the other hand, if you build a million dollar streamliner car, the cost of your transport vehicle, fuel, and accommodations may be "in the noise", so to speak. Just to pick an example of unexpected expenses: the driver's personal racing gear. The requirements for personal safety equipment increase with your speed, and to some extent with the risk of certain vehicle types. Land speed racing vehicles typically have much higher safety gear requirements than drag racing. If you want to build a car to set a land speed record over 200 mph, count on spending an absolute minimum of US$ 2,000 on your personal safety equipment alone (suit, helmet, boots, gloves, neck protector, fire extinguisher, etc.). The sum is similar for a motorcycle racer if you like a set of high-quality leathers and a lightweight helmet. For example, Eva's personal safety gear to ride the KillaJoule streamliner has cost more than twice that, and we have chosen far from the most expensive brands.

Junior Johnson said it best: "The way to make a small fortune in racing is to start with a large one." Electric racing is no different. It is still racing and is still expensive.

Sponsorship

We often say that finding sponsorship is like convincing your neighbor to buy you a big screen TV. Why on Earth would he do that?! You have to convince him that you have really cool friends, and if you have a big screen TV, your friends will come to your house, and your neighbor will be invited to the party as well and get to meet your cool friends. Your neighbor will immediately reply that he will buy the TV for himself, and your friends will come to his house instead. You have to politely explain to your neighbor that he isn't cool enough, and your friends will only come to your house. The situation with racing sponsors is very similar. Why would they pay you to build a race car, when they can just build themselves a race car? You have to convince them that you are much better at this, and that your race car will make sure their component or logo is seen by lots of important people.
It is fairly easy to make a case to an experienced sponsor if you compete in a well-known race series such as NASCAR. There are lots of statistics on the return on investment, and everybody knows about NASCAR. In electric racing, you have neither of these luxuries. Electric racing is still fairly unknown, and your potential sponsors will likely be "virgins". Companies that should have an interest in electric racing, such as a printed circuit board manufacturer, have typically never sponsored racing before. They don't understand the value of racing sponsorship to their business. They incorrectly assume that they will assume liability, etc. No market research or statistics on return on investment exist for electric racing. Many don't even understand how to go about sponsoring a team. Do we write a check? Create a contract? What should we tell our accountant?! It is easy to get "stuff", because it can just be registered as a free product sample in the accounting, but it is very difficult to get cash funds. The truth is that sponsorship agreements can be arranged in many different ways. It can be anything from a company (or person) sending you a check together with some vinyl stickers in an envelope, to elaborate high dollar endorsement deals with long negotiations.

Because of the difficulty in getting cash sponsorships, you are typically better off going for in-kind sponsorships as a new racing team. The cost of the components in an electric racing vehicle is not insignificant. In-kind sponsorships can also be things like machine shop time, which can be worth a lot.

Make your own artwork

What is a race car without artwork? It is just an ordinary car. A problem that we hadn't expected was the difficulty of getting stickers with the sponsor logos. We had sponsors, but the sponsors had no vinyl stickers to put on the bike. In the few cases they had stickers, they were often either way too large or way too small for the contribution. One of our best investments was a computer controlled cutting plotter for vinyl film. We bought it online for US$ 500, and we have used it to create almost all the artwork on the KillaJoule streamliner. It takes a couple of days' work to create all the logos, names, and numbers for a typical racing vehicle; a job that a sign company would have charged thousands of dollars to perform. The access to a cutting plotter makes it easy to add new sponsors to the vehicle at any time.

Publicity

Unless you can afford to hire a PR person, you have to become your own PR machine. Eva takes care of all the PR for the KillaJoule team. She is apparently so successful that many people think that we are a factory team with people on staff. They are shocked when they learn that the entire streamliner is built as a hobby project in our two-car garage behind our modest home, and that we both work regular day jobs.

The secret to media coverage: offer free material

If you thought you were poorly paid, you probably still make drastically more than the typical freelance writer. The budgets for even the largest magazines and newspapers are slashed, and to make a living as a writer you have to work really fast. This is something that you can take advantage of. All journalists are hurting for interesting stories that can be quickly put together. They also need photos, and having to pay for photos or send out a photographer doesn't result in a happy editor.
Figure 5: This was a completely spontaneous photo shoot wearing a dress that a spectator had brought to Bonneville. Eva didn't even have any suitable shoes, but it is one of the most published photos of the KillaJoule.

If you offer high-quality photos to the media for free, and if you write up your background story as well as short press releases (less than one page) when you set a record or do something else of media value, chances are very good that one or several writers will pick up your story. The key is to do the work for the writer, like including facts and quotes that can be copied and pasted into an article. If you provide this, the likelihood of getting media coverage increases exponentially. At the bare minimum, you have to have a fact sheet with all the key records, specifications, and sponsor names listed. This is so members of the press will get the facts straight. Print a few copies of the fact sheet and bring them to the track. Don't forget to include contact information.

Examples of successful battery powered racing vehicles

There are several recent examples of racing EVs that are competitive head-to-head with ICEs. The Drayson B12/69EV mentioned above is one example. Last year's overall winner at Pike's Peak is another example. Two of our own electric racing motorcycles, the KillaCycle and the KillaJoule, also belong to this group.

Our own success in electric drag racing and land speed racing, with countless world records, has been possible through a careful selection of race formats and vehicle design. We found that drag racing and land speed racing offered the optimal combination of short duration, high power demand, and simple regulations. Both communities were welcoming to innovative approaches and open to creating classes for EVs. We found it particularly easy to communicate results and records in land speed racing to sponsors and to the general public. High speeds always impress, and we found it much easier to find sponsors in land speed racing than for any other racing discipline.

By adopting an approach where we primarily designed what we could build using our own relatively small machine shop in our two-car garage, we could keep the budget to a minimum. By choosing to build motorcycles instead of cars, we could get the maximum performance from the drivetrains that we could afford. It also kept support equipment, such as the trailer, small and affordable.

The KillaCycle - the world's quickest EV

The KillaCycle is still the world's quickest electric vehicle at 0-60 mph in less than 1 second. Its ¼ mile time of 7.62 sec @ 169 mph (272 km/h) was its best E.T., and its top speed record was 174 mph. It held the EV world record for the ¼ mile until just a few years ago. We have finally retired the KillaCycle after many years of racing and setting countless records (we are looking for a suitable museum for it now). The KillaCycle was the first electric motorcycle to break the 10 second barrier in the ¼ mile, and then the 9 second barrier, then the 8's, and it was then the first electric vehicle of any kind in the 7's. It held the electric ¼ mile speed record for many years.

The original KillaCycle was built in 1999, and used state-of-the-art thin film lead-acid batteries. At the time, these batteries had the highest available power-to-weight ratio, but the low specific energy did not allow for any racing application other than drag racing.
The KillaCycle was one of the first race vehicles to try Li-Ion batteries, in 2003. We convinced A123 Systems to take a chance and to sponsor the KillaCycle with their high specific power, LiFePO4 cordless tool cells. It set a new world record during the first test session at the track, as we suspected it might. It was the breakthrough period for EV racing and for Li-Ion cells.

The KillaJoule - the world's fastest sidecar motorcycle

The KillaJoule is the world's fastest electric motorcycle, but also the world's fastest sidecar motorcycle, with an official record of 240.726 mph (387 km/h) and a registered top speed of 270.224 mph (434 km/h). The KillaJoule has bested all ICE sidecar motorcycle land speed records by a large margin. This is the first time in over a century that a battery powered vehicle has taken the overall record for a vehicle type. In 1899 the world's fastest car was battery powered; since then, ICEs have completely dominated racing, until now. If we also include the two-wheelers, less than 10 motorcycles in the world are faster than the KillaJoule. It is just a matter of time (and money) before an electric motorcycle takes the overall speed record. The top speed of over 270 mph also made Eva the world's fastest female motorcycle rider.

The decision to build a motorcycle rather than a car was based on the smaller frontal area, the lower component cost, and the less complex design. In the international land speed racing rules, a car is defined as a vehicle with four or more wheels, while a motorcycle is defined as a vehicle with two or three wheels. A sidecar motorcycle is defined as a vehicle that has two wheels in line, and one wheel offset. The sidecar motorcycle streamliner configuration offered the smallest possible frontal area, but would at the same time provide stability and safety similar to a car. With only one steering wheel and one driving wheel, the complexity of the chassis was drastically reduced compared to a car. The simple design allowed the KillaJoule to be built on a minimal budget in our two-car garage. The results have shown that this was a successful strategy. At "merely" 400 HP and a combined budget of around US$ 250,000 (of which about half is in-kind sponsorships), the KillaJoule is the world's 3rd fastest battery powered EV. Only the multi-million dollar efforts of the Buckeye Bullet 1 and 2.5 from Ohio State University have achieved higher speeds. About 80% of the KillaJoule was fabricated by Eva, the rest by family and friends. Working on evenings and weekends alone, it took 18 months from the start of the build to the first world speed record. The KillaJoule is now 6 years old, and has continuously been upgraded for higher power and higher speed. The current drivetrain, with battery cells from A123 Systems, a motor from GKN/Evo Electric, and motor controllers from Rinehart Motion Systems, is capable of approximately 400 HP.

The 2015 racing season completely rained out at Bonneville Salt Flats. If the Bonneville Salt Flats dries up normally this year and gives us a decent track to race on, and the planets align, we are hoping to break 300 mph this fall with the KillaJoule.

EVs dominate at Pike's Peak Hill Climb

2015 was a ground-breaking year in the almost 100 year history of the legendary Pike's Peak International Hill Climb race [5]. Rhys Millen took the overall victory in the Drive eO PP03 electric car with a staggering time of 9 minutes 7.222 seconds, followed by Nobuhiro "Monster" Tajima in his Rimac-engineered E-Runner Concept_One at 9 minutes 32.401 seconds [5]. Never before had an EV taken the overall victory. Tim Eckert is an electric pioneer at Pike's Peak, driving his Li-Ion powered ER2 up the mountain in 2002 and setting the bar for EVs. In 2003, the ER3, built by Tim and driven by Jeri Unser, pared the electric record down to 14 minutes. The electric record has steadily been going down as batteries improve and, frankly, as EV race team budgets have gone up, to the current point where electric is neck and neck with internal combustion. These results confirm that hill climb is one of the most suitable race disciplines for EVs. The duration of a typical hill climb race is of the order of a few minutes. The Pike's Peak race, with the top competitors finishing in around 9 minutes, is one of the longest hill climb races in the world, but still within the perfect range for EVs. The Pike's Peak Hill Climb also has other features that cater to EVs. The 12.42 mile (19.99 km) course starts at 9,390 ft. (2,862 m) elevation and finishes at 14,115 ft. (4,300 m). Towards the end of the 4,720 ft. (1,440 m) climb, the ICEs have lost up to 30% of their power due to the thin air [5], while the EVs are essentially unaffected.

The four wheel independent regenerative braking possible in EVs, along with near instantaneous traction control and four wheel independent drive, provides an insurmountable advantage in handling and traction. EVs are very much at home on the twisty 156 turn Pike's Peak course.

The insane power emanating from the current state-of-the-art motors, controllers, and batteries was also reflected in the 2015 top contenders. The winner Drive eO PP03 has a reported 1368 HP [6] with a curb weight of 1150 kg [7], while the runner-up Tajima Rimac E-Runner Concept_One has a staggering 1496 HP with a curb weight of 1500 kg [8].

Conclusions

EVs are already a game changer in certain racing disciplines such as hill climb and land speed racing, which both offer short race formats perfect for EVs. It is just a matter of time before overall records will be set by EVs. EVs can also be competitive in other disciplines if the vehicle type and race format are chosen wisely. For those of you that want to enter the world of electric racing, now is the time to act. The technology is here to be competitive, the battery technology is being continuously improved, and sponsors are beginning to become aware of the competitiveness of EVs. However, most of the current race teams have not discovered the advantages of EVs, and few teams have the experience and knowledge necessary to build a competitive electric vehicle. If you already have the background in electric vehicles, you can relatively quickly learn what you need to know about racing, and you will have a huge advantage compared to other racing teams.

The introduction of EVs in racing also brings in new potential sponsors that would never consider traditional racing. However, attracting sponsors is always very difficult, and you will need a clear "business plan" as well as reasonable goals to attract and, perhaps even more importantly, to keep sponsors. We prefer to under-promise and over-deliver, which has been a successful strategy for us. You also need a publicity strategy. It doesn't need to involve a large PR team, but you need to provide free material to the press such as photos, fact sheets, and press releases. Publicity is what attracts sponsorship.

While EVs can be highly superior to ICEs in power, acceleration, handling, traction control, and reliability, it is important to not "over innovate" in electric racing and lose that vital reliability edge. It is always tempting to be "Oh so clever", but you are far better off building a traditional vehicle that changes only the components that are to your competitive advantage to change. We always say, "To finish first, one must first finish".

More resources

More about the authors: www.HakanssonDube.com and www.EvaHakanssonRacing.com
Drayson Racing Technologies: www.draysontechnologies.com
National Electric Drag Racing Association: www.NEDRA.com
Bonneville Motorcycle Speed Trials: www.bonnevillespeedtrials.com
Southern California Racing Association (sanctions land speed racing): www.scta-bni.org

Figure 2: The Drayson B12/69EV, an example of an EV competitive with ICEs. Photo courtesy of Drayson Racing Technologies.

Figure 3: Mass of drivetrain including fuel/battery versus race duration for EVs and ICE of LMP type. Sources: [1-4]

Figure 4: Left) Prominently displaying your sponsors is instrumental to a successful racing effort. Right) A die-cut set of names like this can cost you several hundred US$ when ordered from a local supplier. If you make them yourself using a cutting plotter, it will cost you a few dollars for the vinyl and a few hours of work.

Figure 8: Left) The overall winner of the 2015 Pike's Peak International Hill Climb, Rhys Millen, with the Drive eO PP03. Right) The runner-up Nobuhiro "Monster" Tajima with the Tajima Rimac E-Runner Concept_One. Photo courtesy of Alastair Ritchie/Red Bull Content Pool and Team APEV.
2018-12-08T17:36:23.156Z
2016-03-25T00:00:00.000
{ "year": 2016, "sha1": "979fa3a2a59f8c6fce9350c038db9e072a9ed66a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2032-6653/8/1/160/pdf?version=1526291554", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "979fa3a2a59f8c6fce9350c038db9e072a9ed66a", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Computer Science" ] }
266698484
pes2o/s2orc
v3-fos-license
Activation of Ustilaginoidin Biosynthesis Gene uvpks1 in Villosiclava virens Albino Strain LN02 Influences Development, Stress Responses, and Inhibition of Rice Seed Germination

Villosiclava virens (anamorph: Ustilaginoidea virens) is the pathogen of rice false smut (RFS), which is a destructive rice fungal disease. The albino strain LN02 is a natural white-phenotype mutant of V. virens due to its incapability to produce toxic ustilaginoidins. In this study, three strains including the normal strain P1, albino strain LN02, and complemented strain uvpks1C-1 of the LN02 strain were employed to investigate the activation of the ustilaginoidin biosynthesis gene uvpks1 in the albino strain LN02 and its influence on sporulation, conidia germination, pigment production, stress responses, and the inhibition of rice seed germination. The activation of the ustilaginoidin biosynthesis gene uvpks1 increased fungal tolerances to NaCl-induced osmotic stress, Congo-red-induced cell wall stress, SDS-induced cell membrane stress, and H2O2-induced oxidative stress. The activation of uvpks1 also increased sporulation, conidia germination, pigment production, and the inhibition of rice seed germination. In addition, the activation of uvpks1 was able to increase the mycelial growth of the V. virens albino strain LN02 at 23 °C and a pH from 5.5 to 7.5. The findings help in understanding the effects of the activation of uvpks1 in albino strain LN02 on development, pigment production, stress responses, and the inhibition of rice seed germination by controlling ustilaginoidin biosynthesis.

Introduction

Rice false smut (RFS), caused by Villosiclava virens (anamorph: Ustilaginoidea virens), is a destructive rice disease in rice-producing areas worldwide. Moreover, the RFS pathogen (V. virens) can produce mycotoxins that decrease the yield and quality of grains [1-4]. White rice false smut was first reported in Japan in 1991 [5]. A symptom of RFS-infested grains is white instead of normal yellowish-green smut balls growing on the panicles. It was later reported in China [6,7] and the United States [8]. Jin et al. described the isolation and biological features of two albinotic isolates of V. virens, wherein the RFS balls remained completely white on the infected panicles, and the surfaces of the chlamydospores were less verrucose than those of the normal chlamydospores of V. virens [7].

The secondary metabolites are crucial players in fungal development and interactions with other organisms [9,10]. It is speculated that the mycotoxins produced by the RFS pathogen have many physiological and ecological functions, such as influences on sporulation, mycelial growth and pathogenesis, and protection against environmental stresses. In our previous study [11], four sorbicillinoids (i.e., trichotetronine, demethyltrichodimerol, dihydrotrichodimer ether A, and bisorbicillinol) were isolated as the main mycotoxins from the albino strain LN02 of V. virens. However, the toxic ustilaginoidins were not found, due to a four-base deletion in the promoter sequence of the polyketide synthase (PKS)-encoding gene uvpks1 in strain LN02. The mutation of the uvpks1 promoter led to the silencing of the ustilaginoidin biosynthetic gene cluster (BGC) and the elimination of colored ustilaginoidin production. The normal uvpks1 complemented mutant of strain LN02 could restore the expression of the uvpks1 gene and the ability to synthesize ustilaginoidins. Further investigation showed that V. virens is a non-melanin-producing fungus. The deficiency of ustilaginoidin biosynthesis was proven to be the cause of albinism in the albino strain LN02 [11].
In this study, the ustilaginoidin biosynthesis gene uvpks1 in the albino strain LN02 was activated via complementation to describe its effects on development, pigment production, and resistance to environmental stresses, as well as on the inhibition of rice seed germination, in order to reveal the biological functions of the uvpks1 gene and ustilaginoidins in the RFS pathogen V. virens.

Fungal Strains

The albino strain LN02 of V. virens was isolated from white RFS balls [11]. The normal strain P1 of V. virens was kindly provided by Prof. Wenxian Sun from the Department of Plant Pathology, China Agricultural University. Three complemented strains, uvpks1C-1, uvpks1C-2, and uvpks1C-3, were obtained by cloning the genomic fragments, including the native promoter from strain P1 plus the coding sequence (CDS) of uvpks1 from strain LN02, into the pCBHT binary vector, which was then introduced into the protoplasts of strain LN02. As all three complemented strains displayed very similar phenotypes, only the uvpks1C-1 strain was selected for this study. The transgenic strains were grown in an incubator at 28 °C. The pCBHT vector was kindly provided by Prof. Jin-Rong Xu from the Department of Botany and Plant Pathology, Purdue University. The detailed methods are described in the Supplementary Materials and our previous study [11]. The primers used in this study are shown in Table S1. The LN02, P1, and uvpks1C-1 strains were transferred to potato dextrose broth (PDB) containing 50% glycerol and were stored at −80 °C at China Agricultural University.

Assessment of Sporulation and Conidia Germination

To assess the effects of the activation of uvpks1 on sporulation and conidia germination, the P1, LN02, and uvpks1C-1 strains of V. virens were cultured in potato sucrose broth (PSB, comprising 200 g/L of potato and 20 g/L of sucrose). The conidia were obtained via filtration of the culture broths after the fungal strains were cultured in PSB medium at 28 °C for 7 days. The conidia quantity and germination rates were determined with an optical microscope and a hemocytometer according to previously described methods [12-14].
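The hemocytometer arithmetic behind the conidia concentrations reported in the results is standard, and a tiny sketch may help readers reproduce it. This is an illustration only: the counts and dilution factor are made up, and the 10^4 factor assumes a standard Neubauer-type chamber in which one 1 mm × 1 mm large square under the coverslip holds 0.1 µL.

```python
# Hemocytometer arithmetic for conidia concentration and germination rate.
# One large 1 mm x 1 mm square at 0.1 mm chamber depth holds 0.1 uL, so
# conidia/mL = mean count per large square * 1e4 * dilution factor.
def conidia_per_ml(counts, dilution=1):
    return (sum(counts) / len(counts)) * 1e4 * dilution

def germination_rate_pct(germinated, total):
    return 100.0 * germinated / total

# Hypothetical counts from four large squares of a 100x-diluted suspension:
print(f"{conidia_per_ml([198, 205, 190, 207], dilution=100):.2e} conidia/mL")
print(f"{germination_rate_pct(87, 120):.1f} % germination")
```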
Transient Expression Assay

The transient expression assay was performed according to the method previously described in [15]. Briefly, the green fluorescent protein (GFP)-encoding regions under the control of the promoters from the P1 and LN02 strains in the pCBHT-GFP vector were introduced into the leaf cells of Nicotiana tabacum with the T-DNA from Agrobacterium tumefaciens. The treated tobacco plants were kept in a greenhouse at 25 °C. GFP fluorescent signals were detected using a confocal laser scanning microscope (Zeiss LSM710 3-channel, Zeiss Company, Jena, Germany).

Effects of Environmental Stresses on Mycelial Growth of Fungal Strains

An amount of 1 µL of a spore suspension (1 × 10^6 spores/mL) of V. virens was inoculated in the center of each plate. Mycelial growth was assayed after incubation at 28 °C and pH 6.5 for 18 days on plates of potato sucrose agar (PSA, containing 200 g/L of potato, 20 g/L of sucrose, and 20 g/L of agar), with the PSA treated with 0.1-0.4 M sodium chloride (NaCl); 2 mg/mL or 4 mg/mL Congo red (CR); 0.025% or 0.050% sodium dodecyl sulfate (SDS, w/v); or 0.02% or 0.04% hydrogen peroxide (H2O2, v/v). The colony diameters were measured, and the mycelial growth of the treated strains was compared with that of non-treated controls [16]. The inhibition ratio was calculated using the following equation: [(average colony diameter of each strain on PSA without treatment − average colony diameter of each strain on PSA with treatment)/average colony diameter of each strain on PSA without treatment] × 100%. All experiments were repeated three times with three replicates each time.

Effects of Fungal Extracts on Germination of Rice Seeds

The fungal strains P1, LN02, and uvpks1C-1 were separately grown on PDA medium in Petri dishes at 28 °C for 7 days. Then, three agar plugs (0.5 cm × 0.5 cm) with mycelia were added into a 1000 mL Erlenmeyer flask containing 500 mL of PDB under aseptic conditions and incubated at 28 °C in darkness for 30 days in a rotary shaker at 180 rpm.

The liquid cultures of the fungal strains were extracted with ethyl acetate (EtOAc) at room temperature three times. The EtOAc extract was concentrated under vacuum at 42 °C in a rotary evaporator. The obtained brownish residue was then diluted with dimethyl sulfoxide (DMSO) for the assessment of the radicle and plumule elongation inhibitory effects on the seed germination of cultivated rice varieties 9311 and Zhonghua 17. The rice seeds were kindly provided by Prof. Zejian Guo from the Department of Plant Pathology, China Agricultural University. The assay was performed in a 24-well plate using the method described previously in [17]. Briefly, the three-day-germinated rice seeds of each rice variety were sown directly into each well containing 200 µL of a working solution in a 24-well plate. The DMSO solutions containing the EtOAc extracts of the fungal cultures were added to sterile distilled water to give a DMSO concentration of 2.5% and an EtOAc extract concentration of 10 µg/mL. The 2.5% DMSO in distilled water was used as the negative control. Three replicates were used for each treatment. The plates were incubated in a moist chamber at 25 °C in darkness. The length of each radicle or plumule was measured after a treatment period of 48 h. The inhibition rate of radicle or plumule elongation was calculated as follows: inhibition rate = [(Lc − Lt)/Lc] × 100%, where Lc is the radicle or plumule length of the non-treated group, and Lt is that of the treated group.

Statistical Analysis

All experiments were designed with three independent biological replicates. Five replicates were performed for each treatment. The treated samples were analyzed at the same growth stage. Extreme individual samples were excluded. All statistical analyses were conducted using SPSS version 17.0 (SPSS, Inc., Chicago, IL, USA). Comparisons were tested for statistical significance with Student's t-test. The data are expressed as the means ± standard error (SE). Differences at p < 0.05 or p < 0.01 were considered statistically significant or extremely significant, respectively.
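Both percentage formulas above, and the Student's t-test mentioned in the statistical analysis, are easy to pin down in code. The sketch below is illustrative only; the replicate values are made up.

```python
# Inhibition ratios as defined in the text, plus Student's t-test (scipy).
import numpy as np
from scipy import stats

def inhibition_pct(control, treated):
    """(mean control - mean treated) / mean control * 100, per the text."""
    c, t = np.mean(control), np.mean(treated)
    return 100.0 * (c - t) / c

# Hypothetical colony diameters (cm) on PSA, three replicates each:
untreated = [5.2, 5.0, 5.4]
nacl_0p3M = [3.1, 3.3, 3.0]
print(f"mycelial growth inhibition: {inhibition_pct(untreated, nacl_0p3M):.1f} %")

# Hypothetical radicle lengths (mm): Lc = non-treated, Lt = extract-treated
Lc, Lt = [14.1, 13.5, 14.8], [8.2, 7.9, 8.6]
print(f"radicle elongation inhibition: {inhibition_pct(Lc, Lt):.1f} %")

t_stat, p = stats.ttest_ind(untreated, nacl_0p3M)  # Student's t-test
print(f"t = {t_stat:.2f}, p = {p:.4f}")
```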
Activation of uvpks1 Increased Sporulation and Conidia Germination

Secondary metabolites can influence fungal development, such as mycelial growth, sporulation, conidia germination, and sexual reproduction [18,19]. In order to investigate the functions of uvpks1, the P1, LN02, and uvpks1C-1 strains of V. virens were cultured in PSB medium. It was found that the conidia concentration (1.96 × 10^8 conidia/mL) of the complemented strain uvpks1C-1 was restored to the same level as that (2.00 × 10^8 conidia/mL) of the P1 strain after 7 days of fermentation in PSB medium. The conidia concentration (0.23 × 10^8 conidia/mL) decreased by approximately 10-fold in the LN02 strain compared with the P1 strain or the uvpks1C-1 strain (Figure 1A).

The conidia of the three strains were obtained via filtration of the culture broths after they were cultured in PSB medium for 7 days. The green color of the conidia surface, and especially inside the conidia, of the P1 strain was clearly observed. However, the conidia of the LN02 strain appeared white under the optical microscope. It seemed that light-colored compounds had accumulated inside the conidia of the complemented strain uvpks1C-1. The red arrowheads indicate the typical spores in Figure 1B. Meanwhile, the conidia germination rates of the three strains were compared over different cultivation periods from 12 h to 48 h. The germination rate of the LN02 strain spores was significantly lower than that of the P1 strain or uvpks1C-1 strain at each incubation time (Figure 1C). The results indicate that the activation of uvpks1 could increase the sporulation and conidia germination rate in the uvpks1C-1 strain.

Furthermore, we performed a transient expression assay on the promoter fragments of uvpks1 from either the LN02 strain or the P1 strain in the leaf cells of Nicotiana tabacum to test the effects of the promoter region. A green fluorescent protein (GFP) fusion construct driven by the tested promoter was generated, which contained either the normal native uvpks1 promoter or the mutated uvpks1 promoter in front of the GFP-encoding gene. The constructs were named normal uvpks1 promoter-GFP and LN02 mutated uvpks1 promoter-GFP. Compared with the expression activity of the normal uvpks1 promoter, no expression activity of the deletion-mutated promoter from the LN02 strain was observed. This indicates that the promoter fragment of uvpks1 from the LN02 strain did not have any activity in the leaf cells of N. tabacum (Figure 2), which confirmed that the inactive promoter of uvpks1 in the LN02 strain led to the silencing of uvpks1 expression in the albino strain.
Activation of uvpks1 Influenced Fungal Resistance to Environmental Stresses

Stress tolerance is one of the most important mechanisms for the survival of fungi. Environmental stresses, such as temperature, pH, UV, and hyperosmotic stress, have especially critical effects on fungal growth, cellular composition, and metabolism, which have roles in virulence, pathogenicity, adaptation to the environment, colonization, and interactions with host plants and other organisms [20-23].
Activation of uvpks1 Increased Mycelial Growth at Relatively Low Temperature Fungi are able to grow and survive at relatively low temperatures. They usually evolve various strategies, such as the production of secondary metabolites, in response to low temperatures [24]. When the fungal strains P1, LN02, and uvpks1 C -1 were exposed for 21 days after inoculation to temperatures of 23 °C, 28 °C, and 32 °C, respectively, their growth trends were similar: as the temperature was increased, fungal growth was accelerated (Figure 3). The LN02 strain produced fewer pigments and grew relatively slowly at 23 °C. However, the uvpks1 C -1 strain produced more pigments (Figure 3A) and restored fungal growth based on colony expansion and mycelial dry weight (Figure 3B,C). This indicates that the normal uvpks1 in V. virens played a role in producing more pigments in response to low-temperature stress (i.e., 23 °C). The pigments were previously identified mainly as bis-naphtho-γ-pyrones, including ustilaginoidins E, K, and D, and isochaetochromin B2, in the uvpks1 C -1 and P1 strains [11]. Activation of uvpks1 Increased Fungal Growth at pH Values from 5.5 to 7.5 In many fungal pathogens, the PacC transcription factor regulates environmental adaptation, secondary metabolism, and virulence [25]. A low pH often favors the production of mycotoxins, such as deoxynivalenol production by Fusarium graminearum [26]. The PSA medium was buffered with Tris-HCl, with its pH adjusted to 5.5, 6.5, and 7.5, respectively, and the P1, LN02, and uvpks1 C -1 strains were subjected to pH stress. After 21 days of culture, the colony expansion diameters of the fungal strains showed significant differences (Figure 4). The growth of the LN02 strain was more strongly inhibited by pH stress than that of the P1 strain or the uvpks1 C -1 strain (Figure 4C). The results show that the activation of uvpks1 could increase fungal growth and pigment formation at pH values from 5.5 to 7.5 (Figure 4A).
Activation of uvpks1 Increased Fungal Resistance to NaCl-Induced Osmotic Stress The production of secondary metabolites is usually increased in fungi in response to increasing osmotic stress [27]. When the fungal strains P1, LN02, and uvpks1 C -1 were exposed to hyperosmotic stress induced by NaCl at 0.1-0.4 M in PSA medium, the growth rates of all strains decreased as the concentration of NaCl was increased (Figure 5). However, the LN02 strain displayed more sensitivity and produced fewer pigments under salt stress compared with the P1 and uvpks1 C -1 strains (Figure 5A). These results indicate that the activation of uvpks1 could increase fungal resistance to osmotic stress induced by NaCl. Activation of uvpks1 Increased Fungal Resistance to Congo Red-Induced Cell Wall Stress Secondary metabolites can increase fungal resistance to cell wall impairment, and the production of some secondary metabolites can be induced in response to cell wall stress [28]. The P1, LN02, and uvpks1 C -1 strains were subjected to Congo red (CR)-induced cell wall stress. When the fungal strains were cultured in medium containing CR, the growth and pigment production of strain LN02 were more strongly inhibited than those of strains P1 and uvpks1 C -1. These results show that the activation of uvpks1 could increase fungal resistance to cell wall stress induced by CR (Figure 6).
Activation of uvpks1 Increased Resistance to SDS-Induced Cell Membrane Stress Secondary metabolites can also increase fungal resistance to cell membrane damage [29]. Sodium dodecyl sulfate (SDS) can perturb membrane integrity. The strains P1, LN02, and uvpks1 C -1 were subjected to cell membrane stress induced by SDS at concentrations of 0.025% and 0.050% in PSA medium, respectively. The growth of strain LN02 was completely inhibited at 0.050% SDS, while those of strains P1 and uvpks1 C -1 were only moderately inhibited (Figure 7). These results indicate that the activation of uvpks1 could increase fungal resistance to cell membrane stress induced by SDS. Activation of uvpks1 Increased Fungal Resistance to H2O2-Induced Oxidative Stress
Secondary metabolites are usually formed in fungi in response to oxidative stress induced by hydrogen peroxide (H2O2) [30]. When the fungal strains were cultured in medium containing 0.02-0.04% H2O2, the mycelial growth and pigment production of strain LN02 were more strongly inhibited than those of strains P1 and uvpks1 C -1 (Figure 8A). Strain LN02 could not grow in medium containing 0.04% H2O2 (Figure 8). The LN02 strain showed reduced tolerance to H2O2 compared with P1, while the complemented strain uvpks1 C -1 showed statistically the same tolerance to stress responses as the wild type. These results indicate that the function of uvpks1 was restored in the complemented strain uvpks1 C -1. Activation of uvpks1 Increased Inhibition by the Fungal Extracts of Germination of Rice Seeds The EtOAc crude extracts of the fermentation cultures of strains P1, LN02, and uvpks1 C -1 were evaluated for their inhibition of the seed germination of two rice varieties, 9311 and Zhonghua 17. The inhibitions of radicle and plumule elongation are shown in Figure 9. Both the radicle (Figure 9B) and plumule (Figure 9C) inhibition rates of the extracts from the P1 and uvpks1 C -1 cultures were markedly higher than those of strain LN02.
Many phytotoxic metabolites produced by plant pathogenic fungi inhibit seed germination; they are usually considered pathogenic factors to host plants [31] and exhibit bioherbicidal potential in agriculture [32-34]. In our previous report, four main bis-naphtho-γ-pyrones, including ustilaginoidins E, K, and D, and isochaetochromin B2, were identified in the fermentation cultures of strains P1 and uvpks1 C [11]. These compounds were also reported as the main bis-naphtho-γ-pyrones in the fermentation cultures of normal V. virens. At the same time, ustilaginoidin E and isochaetochromin B2 were screened and shown to inhibit the radicle elongation of rice seeds [17]. It was speculated that the high content of phytotoxic ustilaginoidins in the P1 and uvpks1 C -1 strains may contribute to their inhibition of the germination of rice seeds. Discussion The capacity of fungi to respond to stresses influences their development and virulence, as well as their adaptation to environmental stresses. Fungal secondary metabolites may play crucial roles in controlling morphological differentiation, environmental fitness, and interactions with other organisms [18,35-40]. In our previous study [11], it was found that the polyketide synthase (PKS)-encoding gene uvpks1 for ustilaginoidin biosynthesis in strain LN02 was inactivated due to the deletion of four bases in the promoter sequence of uvpks1. Therefore, the albino strain LN02 was considered a white-phenotype mutant, incapable of synthesizing the pigments, which are mainly ustilaginoidins. In addition, four main bis-naphtho-γ-pyrones, including ustilaginoidins E, K, and D, and isochaetochromin B2, were identified in the fermentation cultures of the normal strain P1 [55] and the complemented strain uvpks1 C [11]. In this study, the LN02 strain showed a reduced tolerance to stresses compared with the normal strain P1, while the complemented strain uvpks1 C -1 showed statistically the same tolerance to stress responses as the normal strain P1. The activation of uvpks1 in strain LN02 could increase sporulation, conidia germination, fungal resistance to a series of stresses, and the inhibition of rice seed germination. Both ustilaginoidin E and isochaetochromin B2 were screened and shown to be phytotoxic to rice seedlings [17]. These two compounds were also identified as the main ustilaginoidins in the EtOAc extract from the uvpks1 C strain [11], which showed increased sporulation, conidia germination, and inhibition of rice seed germination. It is plausible that ustilaginoidins might act as virulence factors in the interactions of V. virens with rice plants. Pathogenic fungi are capable of living in environments with a wide pH range. When living in symbiosis with plants, the rhizospheric environment is mostly neutral or acidic, which is beneficial for fungal secondary metabolism and infection [56-61]. In this study, the growth of the albino strain LN02 was more strongly inhibited at pH 5.5 than that of the P1 strain or the uvpks1 C -1 strain (Figure 4). This indicates that the uvpks1 gene might play a positive role in the pH response for the growth of V. virens.
It was speculated that the activation of the ustilaginoidin biosynthesis gene uvpks1 in the albino strain LN02 led to the production of ustilaginoidins in V. virens, which might have contributed to the increase in their resistance to environmental stresses. It also contributed to the increased sporulation, conidia germination, and inhibition of rice seed germination. In addition, at a relatively low temperature (i.e., 23 °C) and at pH values from 5.5 to 7.5, the activation of the ustilaginoidin biosynthesis gene uvpks1 could increase the mycelial growth of V. virens. Therefore, ustilaginoidins are considered to play important physiological and ecological functions in fungal development, environmental fitness, and interactions with other organisms. However, the albino V. virens cannot synthesize ustilaginoidins. This may be the reason why the white RFS rarely occurs in rice fields, which is worthy of further investigation. Conclusions In summary, three strains, including the normal strain P1, albino strain LN02, and complemented strain uvpks1 C -1 of V. virens, were used to investigate how the activation of the ustilaginoidin biosynthesis gene uvpks1 in the albino strain LN02 influences fungal sporulation, conidia germination, pigment production, stress responses, and the inhibition of rice seed germination. The activation of the ustilaginoidin biosynthesis gene uvpks1 in the albino strain LN02 increased fungal resistance to NaCl-induced osmotic stress, Congo red-induced cell wall stress, SDS-induced cell membrane stress, and H2O2-induced oxidative stress.
Figure 1. Comparison of the sporulation and conidia germination between fungal strains. (A) The conidia concentrations in PSB medium at 28 °C, pH 6.5, and 180 rpm over 7 days. (B) The conidia morphology observed under an optical microscope. The conidia were collected via the filtration of the culture broth after the strains were cultured in PSB medium at 28 °C and 180 rpm for 7 days. The photos in (Bb,Bd,Bf) are the enlarged views of the photos of the red squares in (Ba,Bc,Be), respectively. (C) The germination rates of conidia collected from PSB medium at 28 °C and 180 rpm after 12, 24, 36, and 48 h, respectively. The data are indicated as means ± SD (from at least three independent samples) and were compared with the P1 strain using Student's t-test (* p < 0.05; ** p < 0.01).
Figure 2. Transient expression of the promoters in the leaf cells of Nicotiana tabacum. Transient expression of promoters: normal uvpks1 promoter-GFP (a-c) and LN02 mutated uvpks1 promoter-GFP (d-f) fusion proteins located in the leaf cells of N. tabacum, respectively. Scale bars in the photos: 20 µm.
Figure 3. Effects of temperature on growth of fungal strains P1, LN02, and uvpks1 C -1. (A) The colonies of fungal strains growing on PSA medium at pH 6.5 for 21 days after inoculation at 23 °C, 28 °C, and 32 °C, respectively. (B) The colony expansion diameters of fungal strains. (C) The mycelial dry weight per Petri dish. *, p < 0.05.
Figure 4. Effects of pH values in medium on growth of fungal strains P1, LN02, and uvpks1 C -1. (A) The colonies of fungal strains grown for 21 days after inoculation at pH values of 5.5, 6.5, and 7.5 in PSA medium at 28 °C, respectively. The photos of the three strains at pH 6.5 are the same as those at 28 °C and pH 6.5 in Figure 3A. (B) The colony expansion diameters of fungal strains. (C) The mycelial dry weight per Petri dish. *, p < 0.05.
Figure 5. Effects of NaCl in medium on growth of fungal strains P1, LN02, and uvpks1 C -1. (A) The colonies of fungal strains grown for 21 days treated with NaCl at concentrations of 0.1, 0.2, and 0.4 M in PSA medium, respectively. (B) Inhibition of the colony expansion diameters of fungal strains. **, p < 0.01.
Figure 6. Effects of Congo red in medium on growth of fungal strains P1, LN02, and uvpks1 C -1. (A) The colonies of fungal strains grown for 21 days treated with Congo red at concentrations of 2 mg/mL and 4 mg/mL in PSA medium, respectively. The CK (0 mg/mL CR) photos of the three strains are the same as those (0.0 M NaCl) in Figure 5A. (B) Inhibition of the colony expansion diameters of fungal strains. **, p < 0.01.
Figure 7. Effects of SDS in medium on growth of fungal strains P1, LN02, and uvpks1 C -1. (A) The colonies of fungal strains grown for 21 days after inoculation with SDS at concentrations of 0.025% and 0.050% in PSA medium, respectively. The CK (0.000% SDS) photos of the three strains are the same as those (0.0 M NaCl) in Figure 5A. (B) Inhibition of the colony expansion diameters of fungal strains. **, p < 0.01.
Figure 8. Effects of H2O2 in medium on growth of fungal strains P1, LN02, and uvpks1 C -1. (A) The colonies of fungal strains grown for 21 days after inoculation with H2O2 at concentrations of 0.02% and 0.04% in PSA medium, respectively. The CK (0.00% H2O2) photos of the three strains are the same as those (0.0 M NaCl) in Figure 5A. (B) Inhibition of the colony expansion diameters of fungal strains. *, p < 0.05; **, p < 0.01.
2024-01-02T16:03:52.905Z
2023-12-31T00:00:00.000
{ "year": 2023, "sha1": "41df604b5ad15d54a8920b9705dc723b4cd1587c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2309-608X/10/1/31/pdf?version=1704013588", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b4fb5d4c53096d6fe1bd09e6d746116c7e6b6c03", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
40182336
pes2o/s2orc
v3-fos-license
Cardiovascular Risks of Androgen Deprivation Therapy
Abstract Prostate adenocarcinoma is the most common cancer type in the male sex after skin cancer. Among the several types of treatment for prostate cancer, androgen deprivation therapy has been highly recommended in patients with metastatic or locally advanced disease, in whom it probably results in increased survival. However, androgen deprivation is the cause of several adverse effects. Complications such as osteoporosis, sexual dysfunction, gynecomastia, anemia and body composition alterations are well-known effects of the therapy. Recently, a number of metabolic complications have been described, such as increase in the abdominal circumference, insulin resistance, hyperglycemia, diabetes, dyslipidemia and metabolic syndrome, with a consequent increase in the risk of coronary events and cardiovascular mortality in this specific population. This update article presents a literature review carried out at the MEDLINE database of all literature published in English from 1966 to June 2009, using the following key words: androgen deprivation therapy, androgen suppression therapy, hormone treatment, prostate cancer, metabolic syndrome and cardiovascular disease, with the objective of analyzing the actual cardiovascular risks of androgen deprivation therapy, also called androgen suppression, in patients with prostate cancer.
The incidence of prostate cancer has been increasing in the last years, a fact attributed mainly to the routine measurement of prostate-specific antigen (PSA) in males aged 45 years and older. This type of cancer presents the highest correlation with age and it is believed, in the USA, that one in every six men will be diagnosed with prostate cancer in his lifetime [3]. Androgen deprivation therapy was first used by Huggins and Hodges in 1941 [5]. Its effect is based on the fact that prostatic neoplastic cells present a large number of androgen receptors on their surface and that their growth depends on the stimulation of these receptors. In brief, testosterone is the main circulating androgen and most of it is produced by Leydig cells in the testes, after central stimulation by the gonadotropin-releasing hormone (GnRH) and the luteinizing hormone (LH), which are secreted by the hypothalamus and the pituitary, respectively. After entering the prostate, testosterone is converted by 5α-reductase into dihydrotestosterone and binds to a cytoplasmic receptor, forming a complex that modulates nuclear transcription and, consequently, all cell activity [6]. Figure 1 illustrates this hormonal axis and the specific medications used to block it. Androgen deprivation can be attained through GnRH agonists, steroidal and nonsteroidal antiandrogens, estrogens or bilateral orchiectomy. This review will cover only GnRH agonists and orchiectomy, the modalities considered to be the most efficient ones. GnRH agonists such as leuprolide and goserelin result in a central deprivation of testosterone secretion by suppressing the physiological pulsatility of GnRH secretion, with a consequent negative regulation of the pituitary receptors and lower LH secretion. These are long-acting medications administered by depot injections. Orchiectomy is another form of inhibiting androgen activity
and is considered a relatively simple procedure with few risks; however, it is seldom used, due to its psychological effects on the patient [7]. A meta-analysis of ten studies did not show any difference in global survival, with similar mortality rates between the two therapeutic options [8]. Epidemiological aspects of prostate cancer Initially, androgen deprivation was used only in patients with advanced (metastatic) disease, in whom it has been shown to improve quality of life, including decreases in bone pain, pathological fractures, spinal cord compression and urethral obstruction. More recently, studies have demonstrated an increase in survival in patients with locally advanced disease (extracapsular involvement or high-risk local disease: PSA > 20, Gleason > 8 or stage T2c) submitted to androgen deprivation after local treatment with radiotherapy or prostatectomy. It has also been indicated, although more controversially, for patients with PSA elevation after local treatment, even without evidence of metastatic disease [7]. Therefore, the use of androgen deprivation therapy has increased considerably in the last decade. Adverse effects of androgen deprivation In spite of the benefits of deprivation therapy and the often dramatic and sustained responses presented by many patients, this type of treatment also exposes patients to several adverse effects that have long been acknowledged, such as skeletal complications, loss of muscular strength, loss of libido, erectile dysfunction, hot flashes, anemia and gynecomastia. However, it was only in 1990, after a small cross-sectional study by Tayek et al [9], that the first evidence of the deleterious cardiovascular effects of this type of treatment was disclosed. This study demonstrated, during a 12-month follow-up, the onset of metabolic and nutritional alterations that comprised increases in body weight, body fat and total cholesterol levels. Several publications followed this first study [10,11], with similar populations, confirming the weight gain, loss of lean mass, and increase in body fat percentage, mainly at the cost of fat deposits in the subcutaneous tissue. Other studies have shown a decrease in arterial compliance [14], as well as significant metabolic alterations [11,12,13]: increases in the levels of total cholesterol, high-density lipoprotein (HDL-cholesterol) and triglycerides, and increases in insulin resistance and glycemia. An increase in the incidence of diabetes after deprivation therapy has also been demonstrated [14]. Metabolic syndrome secondary to hormonal deprivation In recent years, it has been demonstrated that hypogonadism is an independent risk factor for the development of metabolic syndrome [15,16] and that androgen deprivation is nothing more than a purposely produced model of hypogonadism, induced either surgically or pharmacologically. Currently, this syndrome is defined as a set of multiple metabolic risk factors that are directly related to the development of atherosclerotic cardiovascular disease [17].
The prevalence of metabolic syndrome [18,19] after androgen deprivation was recently studied by Braga-Basaria et al [20], who published a pioneering cross-sectional study demonstrating an increase in the prevalence of metabolic syndrome after one year of androgen deprivation (22% in the group without deprivation vs 55% in the group with deprivation, p < 0.03). It is noteworthy that the metabolic syndrome presented by patients submitted to androgen deprivation has some special features that differ from the classically described form, such as the predominant accumulation of subcutaneous fat, rather than visceral fat accumulation, and the concomitant increase in HDL-cholesterol and LDL-cholesterol levels. It is possible that the metabolic syndrome, as usually described, encompasses different patient profiles and that the alterations seen in patients submitted to androgen deprivation constitute a specific subgroup [21]. Moreover, the metabolic syndrome in these patients seems to develop early in the course of the treatment. Aiming at the analysis of these metabolic alterations in the Brazilian population with prostate cancer, a preliminary joint study was carried out by the Instituto do Coração (The Heart Institute) and the Division of Urology of Hospital das Clínicas of Faculdade de Medicina da Universidade de São Paulo, enrolling patients with a diagnosis of prostate cancer submitted to androgen deprivation. This was a prevalence study of 54 patients divided into two groups: recent deprivation (less than three months of treatment) and chronic deprivation (one year of treatment). The prevalence of metabolic syndrome in the recent deprivation group was 26%, whereas the chronic deprivation group presented a prevalence of 48%. Therefore, in our population, there is an increase in the prevalence of metabolic syndrome in patients submitted to androgen deprivation [22,23]. The interaction of androgen deprivation, metabolic syndrome and cardiovascular disease in patients with prostate cancer Considering all these alterations, several doubts have surfaced regarding the safety and potential cardiovascular risks inherent to deprivation therapy. Statistical data from the last decade already showed that cardiovascular diseases are the main cause of death in patients with prostate cancer submitted to hormonal deprivation and that these rates are higher than those found in the general population [24]. Recently, three published studies, which will be analyzed next, have strongly suggested an increase in cardiovascular mortality, as well as an increase in the frequency of nonfatal myocardial infarction (MI), in this population.
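Before turning to those studies, a brief aside on the prevalence comparison reported above. The sketch below shows how a two-group prevalence difference of this kind could be tested; the group sizes are hypothetical, since only the percentages (22% vs 55%) are quoted in this review.

```python
# A minimal sketch (not from the original study): comparing metabolic syndrome
# prevalence between non-deprived and deprived groups with a chi-squared test.
# The group sizes below are hypothetical; only the percentages appear above.
from scipy.stats import chi2_contingency

n_control, n_adt = 18, 20               # assumed group sizes (hypothetical)
ms_control = round(0.22 * n_control)    # ~22% with metabolic syndrome
ms_adt = round(0.55 * n_adt)            # ~55% with metabolic syndrome

table = [
    [ms_control, n_control - ms_control],
    [ms_adt, n_adt - ms_adt],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # small samples: Fisher's exact test may be preferable
```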
The first study, carried out by Keating et al [25], was a population-based observational study that comprised more than 73,000 patients with a diagnosis of localized prostate cancer. Of this total, 36% were submitted to hormonal deprivation with GnRH agonists and 6.9% to orchiectomy. After a mean follow-up of 4.6 years, it was observed that the use of GnRH agonists was associated with a 44% increase in the risk of developing diabetes, an 11% increase in the risk of myocardial infarction and a 16% increase in the risk of sudden death. Orchiectomy, on the other hand, was associated with a 34% increase in the risk of developing diabetes and was not associated with an increase in the risk of cardiovascular diseases. Another study, published by Tsai et al [26], analyzed more than 1,000 patients submitted to androgen deprivation and observed a cumulative five-year incidence of cardiovascular death of 5.5% in those older than 65 years. This incidence was significantly lower for patients without deprivation, with a 2.0% risk for those older than 65 and 3.6% for those younger than 65 years. It is important to mention that these studies present limitations, mainly the fact that they are retrospective and thus lack the capacity to control for other cardiovascular risk factors. Nevertheless, the difference in mortality between the groups is very significant and suggests a role of deprivation in this difference. The third and more recent study was published by D'Amico et al [27], who analyzed the influence of deprivation on the frequency and time of development of fatal MI. This study was based on a combined retrospective analysis of the results of three randomized trials with androgen deprivation and radiotherapy, published in Australia, Canada and the United States. An increase in the cumulative incidence of fatal MI was observed in patients older than 65 years who were submitted to deprivation, in comparison with those who were not. Patients submitted to only three months of deprivation therapy presented an incidence of MI similar to that observed in those submitted to a six-month therapy, suggesting that a three-month treatment period is enough to cause deleterious cardiovascular effects. Moreover, the occurrence of fatal MI in patients submitted to deprivation had an earlier onset than in those without deprivation therapy. Conclusion In spite of the potential limitation of the present review, which restricted the bibliography to the MEDLINE database, it is increasingly evident that this modality of treatment results in several important side effects, such as diabetes, dyslipidemia, metabolic syndrome and coronary artery disease, including an increase in the rate of fatal infarctions and in cardiovascular mortality. Therefore, although it is effective in the treatment of specific subgroups of patients with prostate cancer, the indication must always be judicious and individualized for each patient, aiming at minimizing the cardiologic impact as well as optimizing the oncologic benefit. It is also worth mentioning that these patients must be monitored by both the urologist and the cardiologist and must be routinely evaluated, with the objective of attaining the early diagnosis and treatment of the potentially adverse cardiovascular effects. Potential Conflict of Interest No potential conflict of interest relevant to this article was reported. Sources of Funding There were no external funding sources for this study.
Study Association This study is not associated with any post-graduation program.
2017-09-28T09:53:31.091Z
2010-09-01T00:00:00.000
{ "year": 2010, "sha1": "3cadb8d1b690d545c5673803bdb47d7b05dbc25e", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/abc/a/WVQjK3wMyrRqd9dLHj7LsjQ/?format=pdf&lang=pt", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3cadb8d1b690d545c5673803bdb47d7b05dbc25e", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
247457599
pes2o/s2orc
v3-fos-license
Benchmarking of the Conductance Increment Method and its improved versions
To increase the performance of a photovoltaic (PV) system, a circuit using MPPT (Maximum Power Point Tracking) technology must be used. Several algorithms have been proposed in the literature, and they need to be compared in order to select the best-performing MPPT technique for a specific application or to make recommendations for future MPPT research. This article presents a benchmark of the most widely used MPPT algorithms, namely "INC_C" (Classical Incremental Conductance) and "INC_AM" (Modified Adaptive Incremental Conductance). The comparative study presented in this work confirms that "INC_AM" is the best MPPT technique to improve the efficiency of a PV system.
INTRODUCTION
The PV energy conversion system must operate near the maximum power point (MPP) in order to increase solar system efficiency. It is therefore necessary to use MPPT, which plays an important role in tracking maximum power points, maximizing the efficiency of the PV system under given conditions and minimizing the overall cost of a PV installation [1,2]. Various MPPT techniques have been developed and used to track the MPP of PV systems, such as the P&O technique [3], which is based on iterative algorithms and is easy to implement but suffers from an inevitable oscillation problem; the Incremental Conductance technique [4]; and the Fuzzy Logic Control (FLC) search technique [5], which has been used very successfully in MPP tracking. Among these existing MPPT control methods, the P&O algorithm is widely used in many PV systems [6]. The P&O method works well when solar irradiation and temperature do not vary rapidly over time [7], but it cannot quickly keep up with the MPP, and the output power therefore oscillates around the MPP. The examination of the Incremental Conductance (INC) approach and its modifications is the subject of this work. This document is structured as follows: after the introduction, section 2 presents the mathematical model of the PV cell; section 3 deals with the MPPT technique used with a boost converter; section 4 presents the simulation results and their discussion to evaluate the algorithms developed; finally, we conclude our work.
SOLAR CELL MATHEMATICAL MODEL
Photovoltaic models with one or two diodes have been widely used to model the I-V output characteristic of a photovoltaic cell or panel [8]. The single-diode model is the simplest one. It is improved by the incorporation of a series resistor Rs [9]. However, despite its simplicity, it has relatively significant errors compared to experimental data when exposed to high temperature variations. Model optimization is achieved by including an additional shunt resistor Rsh [10]. Even so, its accuracy decreases at low irradiance, especially in the open-circuit voltage range Voc. The electrical current of the PV panel for the one-diode model is given by [11,12]:

I_pv = I_ph - I_D - I_sh

So, the following relation describes the current supplied by a solar cell in the single-diode model [13,14]:

I_pv = I_ph - I_0 * [exp((V_pv + Rs * I_pv) / (a * Vt)) - 1] - (V_pv + Rs * I_pv) / Rsh

with: Iph the photo-generated current (A), Ipv the solar cell terminal current (A), I_0 the reverse saturation current of the diode in the conventional model (A), a the diode ideality factor, Vt the thermal voltage, and Rs and Rsh the series and shunt resistances.
MPPT TECHNIQUE
The characteristics of the generator I(V) depend on the illumination and the temperature. The maximum power point fluctuates as a result of these climatic variations.
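Since the single-diode relation above is implicit in I_pv, a short numerical sketch may help. This is an illustrative implementation, not the authors' Matlab/Simulink model, and all parameter values below are hypothetical (the thermal voltage is scaled for 22 cells in series, matching the module described later).

```python
# Illustrative sketch: solving the implicit single-diode equation
#   I = Iph - I0*(exp((V + Rs*I)/(a*Vt)) - 1) - (V + Rs*I)/Rsh
# for I at a given terminal voltage V. Parameter values are hypothetical.
from scipy.optimize import brentq
import math

Iph, I0 = 8.2, 1e-9        # photo-generated and saturation currents (A)
Rs, Rsh = 0.2, 300.0       # series and shunt resistances (ohm)
a, Vt = 1.3, 22 * 0.0257   # ideality factor; Vt scaled for 22 cells in series

def f(I, V):
    """Residual of the single-diode equation; zero at the operating point."""
    return Iph - I0 * (math.exp((V + Rs * I) / (a * Vt)) - 1) \
               - (V + Rs * I) / Rsh - I

def current(V):
    # The residual is monotonically decreasing in I, so the root is
    # bracketed between a slightly negative current and just above Iph.
    return brentq(f, -1.0, Iph + 1.0, args=(V,))

for V in (0.0, 5.0, 10.0, 12.0):
    I = current(V)
    print(f"V = {V:5.1f} V  I = {I:6.3f} A  P = {V * I:7.2f} W")
```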
Due to these fluctuations, one or more controlled static converters are often inserted to follow the maximum power point. These controls are called MPPT and are associated with the chopper control to ensure the coupling between the PV generator and the load by forcing the former to provide its maximum power.
Figure 2: Synoptic diagram of the PV system studied.
As illustrated in Figure 2, the proposed system comprises a PV panel, a boost DC-DC converter equipped with its MPPT control block, and a battery. The MPPT control block controls the conduction and blocking of the MOSFET by changing the duty cycle α of the PWM signal driving this MOSFET. Among the PV panel specifications, the number of PV cells per module is Ncell = 22. The specifications of the boost chopper converter used are summarized in Table 2, and Table 3 lists the specifications of the battery used as the load at the converter's output.
Method "Classical Incremental Conductance" (INC_C)
The incremental conductance method is based on observation of the characteristic curve P(V). This algorithm was developed in 1993 and was intended to overcome some of the drawbacks of the P&O algorithm [16]. The method relies on the conductance of the PV module and on the position of the operating point relative to the maximum power point (MPP). Accordingly, the conductance Gc of the PV module is determined from the relation between the output current and the output voltage of the PV module, Gc = Ipv / Vpv. Let Vpv be the voltage at the PV module's output and Vmpp be the voltage at the PV module's maximum power point [17,18]. The derivative dP/dV can then meet the following conditions:
• If dP/dV > 0, the operating point is to the left of the MPP, Vmpp > Vpv.
• If dP/dV < 0, the operating point is to the right of the MPP, Vmpp < Vpv.
• If dP/dV = 0, the operating point is on the MPP, Vmpp = Vpv.
The results of Figure 7 show that the PV model with the "INC_C" algorithm converges to the desired optimal values, with oscillations around the MPP (Zoom 2); the tracking performance of the algorithm is solid. The response to a rapid variation (Zoom 1) is smooth, taking around 0.075 s for the passage of the irradiation from 100 W/m2 to 1000 W/m2, and the MPP is tracked satisfactorily under different lighting conditions.
Method "Modified Adaptive Incremental Conductance" (INC_AM)
In this technique, the processing is separated into two parts, each of which is important for tracking the target MPP. The first part addresses the trade-off between tracking precision and convergence speed. The second part removes the influence of drift during a rapid change in irradiation.
• Part One: Given the trade-off between tracking accuracy and convergence speed in "INC_C", the first part of our improved adaptive algorithm, "INC_AM", uses a variable-step method. The main principle is to choose a step large enough to speed up the search when the operating point is far from the MPP, and a step small enough to favor precision when the operating point approaches the MPP. As illustrated in Figure 8, the P(V) curve is separated into three zones (1, 2 and 3). Let k = dP/dV be the slope at a point of this curve; in zone (1), k is positive, and in zone (3) it is negative. In addition, the absolute value of k in zones (1) and (2) is greater than its absolute value in zone (3).
The operating point can be determined according to the sign of k. When k > 0, essentially in zone (1), the perturbation step can be set larger (d1). When k < 0 (zone 3), a small perturbation step (d2) can be chosen. When |k| < ε while tending towards 0, the system operates in region (2), corresponding to operation around the MPP, i.e. ε|dV| - |dP| > 0. In this case, only a small step (d3) needs to be applied. The flowchart of the first part of the "INC_AM" algorithm in Figure 9 illustrates this principle of action. The PV module's maximum usable power is determined by the ambient temperature and the amount of solar irradiation. The INC_C algorithm exploits the slope of the P(V) curve to detect the MPP. If the algorithm finds that the operating point is at the top of the P(V) curve, corresponding to a zero slope, and equation (6) is satisfied, then the duty cycle α of the DC-DC converter is fixed and no oscillation occurs during this step until changes in the slope occur. In practice, the zero-slope condition is rarely reached exactly. Generally, the INC_C algorithm fails to make a good decision when the irradiation is suddenly increased [20], as shown in Figure 11. Indeed, when the irradiation is at 0.3 kW/m², the MPPT algorithm adjusts the duty cycle so that the PV system operates on load line 2 and the MPP (point B) is tracked. After some time, solar irradiation may increase to 1 kW/m², but the duty cycle is still that of load line 2. Therefore, point M is reached on load line 2 on the I(V) curve, corresponding to the power at point C on the P(V) curve. The INC_C algorithm then calculates the slope between point C and point B, which is positive. However, load line 2 detects power at point C, which is coupled with a negative slope between point C and point A, the genuine MPP. As a result, the INC_C algorithm increases the output voltage of the PV module without detecting the anomaly, causing the PV module's power to drift away from the real MPP.

A < 0.06 (6)

When the condition of equation (6) is met, the system operates at the MPP. The algorithm therefore sets b to 1 and then switches to the improved algorithm. In the improved adaptive algorithm, the program continues to check the state of equation (6). If solar radiation and load remain constant, the duty cycle α does not change. When solar radiation changes, the algorithm sets the variable b to zero. The program then analyses the fluctuations in the PV module's voltage and output current. If the algorithm detects an increase in current or voltage, the duty cycle is increased as well. Under the conditions mentioned at the beginning of this work, the results of the simulations of the "INC_AM" algorithm under Matlab/Simulink are illustrated in Figure 13, which is composed of three graphs over a time interval of 0.3 s: the first represents the PV output voltage Vmpp_INC_AM, the second the PV panel output current Impp_INC_AM, and the third the power Pmpp_INC_AM produced by the PV module. The results of Figure 13 show that, for a rapid variation in the level of solar irradiation, the "INC_AM" algorithm responds better than the "INC_C" algorithm; moreover, when the MPP is reached, the steady-state oscillations no longer occur.
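As an illustration of the decision rules just described, the sketch below implements one update step of a variable-step incremental conductance controller in Python. It is a simplified reading of the INC_C/INC_AM logic, not the authors' Matlab/Simulink implementation; the step sizes, the threshold ε, and the hold-at-MPP behavior are all assumptions.

```python
# Illustrative sketch of one variable-step INC update (not the authors' code).
# The slope k = dP/dV drives both the direction and the size of the duty-cycle
# perturbation: a large step d1 in zone (1), a smaller step d2 in zone (3),
# and a hold around the MPP in zone (2). All constants are hypothetical.

D1, D2 = 0.01, 0.005   # perturbation steps for zones (1) and (3)
EPS = 0.05             # |k| threshold defining zone (2), around the MPP

def inc_am_step(v, i, v_prev, i_prev, duty):
    """Return the updated duty cycle from two successive (V, I) samples."""
    dv, di = v - v_prev, i - i_prev
    dp = v * i - v_prev * i_prev
    if abs(dv) < 1e-9:                       # voltage unchanged: use dI alone
        if abs(di) < 1e-9:
            return duty                      # no change detected: hold
        return duty - D2 if di > 0 else duty + D2
    k = dp / dv                              # slope of the P(V) curve
    if abs(k) < EPS:                         # zone (2): operating at the MPP
        return duty                          # hold, as in the eq. (6) condition
    step = D1 if k > 0 else D2               # zone (1) vs zone (3)
    # For a boost converter, raising the duty cycle lowers the PV voltage;
    # k > 0 means V < Vmpp, so the voltage must rise (duty must fall).
    return duty - step if k > 0 else duty + step
```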
COMPARISON OF THE PERFORMANCE OF THE STUDIED MPPT ALGORITHMS
In order to provide a sufficient comparative study of the MPPT algorithms treated, we evaluated certain performance parameters, namely the convergence time τc and the oscillation deviation εo for an irradiation step from 100 to 1000 W/m2, for each of the two algorithms. The calculation results are reported in the following table.
CONCLUSION
In this work, dedicated to the study of MPPT methods, we began by presenting the most commonly used method, "classical incremental conductance" (INC_C), and its improved version, "modified adaptive incremental conductance" (INC_AM). These techniques have easy-to-implement algorithms for controlling boost-type DC-DC converters. Generally speaking, the traditional algorithm (INC_C) gives good results but has significant disadvantages. In order to overcome these problems, the improved version of this algorithm was studied. For the purpose of evaluating the performance of the two algorithms "INC_C" and "INC_AM", we compared their convergence times τc as well as the oscillation deviations εo induced by their use. This allows us to understand and analyze the pros and cons of each of the two methods. It can then be concluded that, compared to the "INC_C" method, the "INC_AM" method is the better one, with a lower convergence time τc (16 ms against 31 ms) and a much smaller oscillation deviation εo (0.01 W against 1.18 W).
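As a postscript to the comparison above, the two benchmark metrics can be made precise with a small helper. The sketch below shows one plausible way to extract τc and εo from a simulated power trace; the tolerance band and both definitions are assumptions, since the paper does not spell out how the table values were computed.

```python
# Hypothetical post-processing of a simulated power trace P(t):
#   - tau_c: first time the power enters and stays within a tolerance band
#            around the final steady-state power,
#   - eps_o: peak-to-peak oscillation of the power in steady state.
# Both definitions are assumptions, not taken from the paper.
import numpy as np

def mppt_metrics(t, p, tol=0.02, tail_frac=0.2):
    tail = p[int(len(p) * (1 - tail_frac)):]   # assumed steady-state window
    p_ss = tail.mean()
    inside = np.abs(p - p_ss) <= tol * p_ss    # within the tolerance band
    # The last sample outside the band marks the end of the transient.
    outside = np.flatnonzero(~inside)
    if outside.size and outside[-1] + 1 < len(t):
        tau_c = t[outside[-1] + 1]
    else:
        tau_c = t[0]                           # trace never left the band
    eps_o = tail.max() - tail.min()            # steady-state ripple (W)
    return tau_c, eps_o

# Example with a synthetic first-order trace settling to 60 W:
t = np.linspace(0, 0.3, 3000)
p = 60 * (1 - np.exp(-t / 0.01)) + 0.005 * np.sin(2 * np.pi * 5e3 * t)
print(mppt_metrics(t, p))
```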
2022-03-16T15:31:50.420Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "22214efe34f6c0b7bad24fac13c405ddba5b79db", "oa_license": "CCBY", "oa_url": "https://www.itm-conferences.org/articles/itmconf/pdf/2022/03/itmconf_icaie2022_01011.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2fc6d9c60726afd4f1c3c72e263e340f743c5fb1", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
4853783
pes2o/s2orc
v3-fos-license
The Seroprevalence of Hepatitis C Antibodies in Immigrants and Refugees from Intermediate and High Endemic Countries: A Systematic Review and Meta-Analysis
Background & Aims Hepatitis C virus (HCV) infection is a significant global health issue that leads to 350,000 preventable deaths annually due to associated cirrhosis and hepatocellular carcinoma (HCC). Immigrants and refugees (migrants) originating from intermediate/high HCV endemic countries are likely at increased risk for HCV infection due to HCV exposure in their countries of origin. The aim of this study was to estimate the HCV seroprevalence of the migrant population living in low HCV prevalence countries. Methods Four electronic databases were searched from database inception until June 17, 2014 for studies reporting the prevalence of HCV antibodies among migrants. Seroprevalence estimates were pooled with a random-effects model and were stratified by age group, region of origin and migration status, and a meta-regression was modeled to explore heterogeneity. Results Data from 50 studies representing 38,635 migrants from all world regions were included. The overall anti-HCV prevalence (representing previous and current infections) was 1.9% (95% CI, 1.4-2.7%; I² = 96.1%). Older age and region of origin, particularly Sub-Saharan Africa, Asia, and Eastern Europe, were the strongest predictors of HCV seroprevalence. The estimated HCV seroprevalence of migrants from these regions was >2% and is higher than that reported for most host populations. Conclusion Adult migrants originating from Asia, Sub-Saharan Africa and Eastern Europe are at increased risk for HCV and may benefit from targeted HCV screening. Introduction Hepatitis C virus (HCV) infection is a serious global health threat, with an estimated 150-170 million individuals chronically infected worldwide, resulting in 350,000 deaths each year due to associated cirrhosis and hepatocellular carcinoma (HCC) [1-3]. Mortality due to HCC has increased over the past four decades in many countries, in part due to chronic HCV [4,5]. Chronic HCV has also resulted in an enormous economic burden and lost productivity [6]. HCV-infected individuals often remain asymptomatic for 30 years or more, until liver disease is advanced [7]. Early detection is therefore critical, as treatment usually leads to viral eradication, prevents progression of liver disease, and decreases all-cause mortality [8]. The recent development of safer, more tolerable and highly effective direct-acting antiviral combinations offers the real possibility of cure for all HCV-infected patients [8,9]. This provides a clear and compelling rationale for identifying and screening groups at risk to avert the projected individual and economic burden of HCV. The traditional approach to HCV control in most low prevalence countries is to screen groups with behavioral risk factors for exposure to infected blood, such as through intravenous drug use or receipt of blood products prior to routine screening. In spite of these programs, the majority of individuals with HCV (45-80%) in these countries remain undiagnosed and unaware of their infection until they develop chronic liver disease [10,11]. To address this issue in the US, the Centers for Disease Control and Prevention (CDC) and the U.S. Preventive Services Task Force (USPSTF) recently recommended a one-time HCV birth-cohort screening program (Baby Boomers born between 1945 and 1965) in addition to risk-factor-based screening programs [10,12].
Migrants born in intermediate and high HCV prevalence countries who live in low HCV prevalence countries are likely to be at increased risk for HCV due to exposure in their countries of origin [13]. Unlike low HCV prevalence countries, where the primary mode of transmission is through intravenous drug use, most infections in intermediate and high HCV endemic countries are acquired iatrogenically through contaminated needles, medical procedures or receipt of unscreened contaminated blood products [7,14]. Most migrants are therefore unlikely to be detected in current HCV screening programs. Furthermore, they have not been identified as a group that should be targeted for HCV screening, with the exception of recent UK and Canadian guidelines [13,15]. This is primarily because the HCV burden in this population has not been adequately quantified. To address this knowledge gap, we carried out a systematic review and meta-analysis on the seroprevalence of HCV in migrants living in several different low HCV prevalence, high migrant-receiving host countries. Data sources and searches This article was prepared and reported according to PRISMA guidelines (S1 Appendix) [16]. Four electronic databases, including Medline, Medline In-Process, EMBASE, and the Cochrane Database of Systematic Reviews, were searched from inception until June 17, 2014. The search strategy was developed by a medical librarian, and the strategy and search terms for MEDLINE are listed in the supporting information (S2 Appendix). In summary, search terms included those for hepatitis C and the population of interest (migrants, foreign born, immigrants, refugees, asylum seekers), using a combination of text words and subject headings appropriate to each database. No limits by date or language were applied to the search. Additional studies were identified by examining the bibliographies of eligible studies and review articles. Study selection and quality assessment Original studies that reported data on the anti-HCV prevalence in migrants originating from low/intermediate income, intermediate/high HCV prevalence countries and arriving in high income, low/intermediate HCV prevalence countries were included. Conference abstracts and proceedings were not included due to concerns regarding the inability to determine the quality of the methods. We adapted the GRADE method to assess the quality of the body of evidence and assessed the risk of bias, inconsistency, indirectness, and imprecision of the data [17,18] (S3 Appendix). We did not assess small study effects (e.g. publication bias) as this is not reliable for seroprevalence studies [19]. We only included studies judged to have a low to moderate risk of selection or detection bias. Studies were included if the entire population, or a random sample of individuals, captured in particular settings such as clinics conducting immigrant and refugee screening, general primary care or medical clinics, and prenatal clinics over a set period of time was enrolled and all participants were offered HCV blood testing. Studies were excluded if: 1) a non-random sample of individuals from a site was recruited, or only selected individuals had HCV serologic testing performed; 2) <65% of eligible persons agreed to participate in the study; or 3) the study population focused on migrant populations judged to be either at lower risk (for example, healthy blood donors [20-22]) or at increased risk for HCV due to specific lifestyles or environmental conditions (i.e.
sex workers, intravenous drug users, incarcerated migrants, men who have sex with men (MSM), or individuals being treated for chronic liver disease).

Data extraction
Two authors screened the titles and abstracts of all identified articles, reviewed and selected full-text articles, and independently extracted and entered data from each included study into a database. Data on study design, recruitment method, serologic screening test used, confirmatory test used, study duration, country of landing, mean or median age, gender, migrants' region of origin, and migration status were extracted. The primary study outcome was the proportion of subjects with the presence of HCV antibody (anti-HCV) detected by an enzyme immunoassay (EIA), with or without reported confirmatory testing (previous and current infection). Categories of HCV antibody seroprevalence for individual countries were defined as: very low (<1%), low (1-<2%), intermediate (2-<3%), high (3-<5%), and very high (≥5%). Age was defined as the mean or median age of the study population, categorized into the following groups: ≤18 years, 19-29 years, and ≥30 years. Studies on children were classified as those where all study subjects were ≤18 years of age. Region of origin and income level were defined according to the World Bank classifications [23]. Regions of origin were categorized as: Latin America and the Caribbean, Eastern Europe and Central Asia, Middle East and North Africa, Sub-Saharan Africa, South Asia, and East Asia and the Pacific. Categories for 'Combined Africa' and 'Combined Asia' were added to the region of origin variable so that studies that only classified migrants as originating from Africa or Asia could be included. Studies with participants from several regions of origin were described as "Mixed"; however, anti-HCV prevalence by region of origin was estimated if the data were reported in this manner. Migration status was classified as: 'immigrants' (immigrants and adopted children), 'refugees' (refugees and asylum seekers), and 'other' (mixed populations or migrant status not mentioned).

Data synthesis and analysis
For each study, the prevalence of HCV antibodies (including previous and current infection) and of viremic HCV infections was calculated as the reported number of subjects positive for anti-HCV or HCV PCR, respectively, divided by the total number of subjects screened for each of these markers. Proportions were transformed with the logit transformation and pooled using a random-effects model [24,25]. The logit transformation was used to prevent studies with few events from being weighed too heavily in the random-effects model, and the multivariate analysis, which used a random-effects logistic regression model, is also based on the logit transformation [26]. The logit transformation was compared with the arcsine and Freeman-Tukey methods, two other transformations used in meta-analyses of proportions, in a sensitivity analysis. Overall heterogeneity among studies was assessed using the I² statistic, and estimates were stratified by age group, region of origin and migration status, as these variables were believed a priori to be important predictors of chronic HCV infection [27]. The meta-analysis was performed using the metaprop command of the meta package (3.6-0) in R (version 3.1.3) [28]. A random-effects meta-regression, using the glmer command of the lme4 package (version 1.0-4) to fit generalized linear mixed models in R, was used to further explore heterogeneity.
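As a concrete sketch of this pooling and meta-regression machinery (the meta and lme4 packages are those named above; the data frame, its counts and its region labels are hypothetical placeholders, not study data):

    library(meta)   # provides metaprop()
    library(lme4)   # provides glmer()

    # Hypothetical per-study counts: pos = anti-HCV positive, n = screened
    d <- data.frame(
      study  = paste0("S", 1:8),
      pos    = c(12, 9, 3, 5, 40, 28, 7, 4),
      n      = c(500, 420, 210, 260, 1600, 900, 350, 300),
      region = rep(c("SSA", "EEurope", "Asia", "LAC"), each = 2)
    )

    # Random-effects pooling of logit-transformed proportions (sm = "PLOGIT")
    m <- metaprop(event = pos, n = n, studlab = study, data = d, sm = "PLOGIT")
    plogis(m$TE.random)   # back-transform the pooled logit to a proportion

    # Random-effects logistic meta-regression: region as a fixed effect,
    # study as a random intercept (the glmer model named in the text)
    fit <- glmer(cbind(pos, n - pos) ~ region + (1 | study),
                 data = d, family = binomial)
    summary(fit)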
Unadjusted models for each of age group (reference: ≤18 years), migrant status (reference: immigrants), and region of origin (reference: Latin America & Caribbean), with anti-HCV prevalence as the outcome, were run (see Table 1 for a description of each categorical variable). Two adjusted meta-regression models were run: 1) age and migrant status, and 2) migrant status and region of origin. It was not possible to create a model adjusting for age by region of origin because the age of migrants was not stratified by region of origin in the included studies [29]. We created maps of country-specific anti-HCV prevalence with data from the WHO [30] or more recent estimates [31][32][33][34][35] using ArcGIS-9 software (ESRI Data and Maps 9.3, 2008; USA). Weighted regional HCV seroprevalence estimates (categorized by World Bank classification) were calculated from these same country-specific seroprevalence data, weighted by the 2009 country-specific population estimates from the World Bank (Tables A and B in S4 Appendix) [23]. Regional seroprevalence estimates from our meta-analysis were superimposed on the same map for comparison.

Results
A total of 973 titles and abstracts were screened, 173 full-text articles were reviewed, and 50 studies were included (Fig 1). Included studies reported on 38,635 participants from all world regions who arrived in Australia, Canada, Europe, Israel and the United States (Table 1). Across all studies, there was a low to moderate risk of bias, moderate to high inconsistency, moderate imprecision, and low indirectness. We therefore considered the overall quality of the body of data to be low to moderate (S3 Appendix). Although all studies measured HCV antibodies, only 44% (N = 22) of studies reported confirmatory testing. The prevalence of antibodies was similar in studies in which a confirmatory test was or was not reported [2.1% (95% CI, 1.2-3.6) vs. 1.8% (95% CI, 1.1-2.8%), respectively] (data not shown). All studies were therefore included in the meta-analysis, resulting in an overall anti-HCV prevalence of 1.9% (95% CI, 1.3-2.7) (Table 2). A sensitivity analysis compared the overall anti-HCV prevalence estimates obtained using the logit, arcsine and Freeman-Tukey transformations. Older age was strongly associated with seroprevalence: an adjusted meta-regression model found an 8.6-fold (95% CI, 3.0-24.7) increase in the odds of being HCV positive after adjusting for migration status (Table 2). In addition, the anti-HCV prevalence in studies that only included children was significantly lower than the seroprevalence in the general population [0.6% (95% CI, 0.3-1.3) vs. 2.3% (95% CI, 1.6-3.5)] (Table 2, Fig 2). When studies that only included children were removed from the analysis, the overall anti-HCV prevalence increased from 1.9% (95% CI, 1.4-2.7) to 2.2% (95% CI, 1.6-3.2). Anti-HCV prevalence also varied significantly by region of origin, and was high (>3%) in migrants from South Asia, Sub-Saharan Africa and combined Africa (Fig 3). Significant differences in the seroprevalence between regions of origin remained in the meta-regression model even after adjusting for migration status (data not shown). The regional HCV seroprevalence estimates for migrants from our meta-analysis were similar to or slightly lower than the corresponding WHO HCV seroprevalence estimates for the general population in their regions of origin, with the exception of Sub-Saharan Africa and South Asia (Table 2, Fig 4).
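The region-weighting behind this Fig 4 comparison (described in the Methods above) amounts to a population-weighted mean of country-level seroprevalence; a minimal sketch, with illustrative placeholder numbers only (Egypt's >14% is taken from the Discussion; the other values are invented):

    countries <- data.frame(
      country = c("Egypt", "Djibouti", "Morocco"),
      region  = c("MENA", "MENA", "MENA"),
      prev    = c(0.145, 0.009, 0.019),   # country anti-HCV seroprevalence
      pop2009 = c(76e6, 0.8e6, 31e6)      # 2009 population estimates
    )
    sapply(split(countries, countries$region),
           function(g) weighted.mean(g$prev, g$pop2009))   # regional estimate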
Calendar time did not affect our results, as anti-HCV prevalence was similar across all decades, and heterogeneity within groups remained high.

Discussion
The findings of this study suggest that migrants from intermediate or high HCV prevalence countries represent an important risk group for HCV infection. The overall pooled anti-HCV seroprevalence in migrants in this study was 1.9% (1.4%-2.7%). Those from Sub-Saharan Africa, Asia and Eastern Europe and older age groups had the highest risk, with anti-HCV prevalence ranging from 2.2% to 5.6%. The regional HCV seroprevalence estimates from our study were slightly lower or higher than, but fell within a similar range and pattern to, published estimates from migrant source countries [1][2][3][32][33][34][35]. Globally, anti-HCV prevalence is highest in Asia, Africa, Eastern Europe, and North Africa & the Middle East, ranging from 2-4%, whereas in North America, Latin America & the Caribbean, most Western European countries and Australia, the anti-HCV prevalence is less than 1.5% [3,[30][31][32][33][34][35]]. We also found that anti-HCV seroprevalence increased with age and was higher in adults than in children. The similar age trend and the pattern and rank of regional estimates provide external validity to our results. Despite similarities to published regional anti-HCV seroprevalence, the heterogeneity of the overall pooled estimate from our study was high and remained moderate even after stratification by important HCV predictive variables such as age and region of origin. The high overall and residual heterogeneity after stratification may reflect: 1) the true variability of HCV epidemiology within countries and regions, 2) characteristics of the migrants included in our study, and/or 3) the inability to adequately adjust for age and country of origin because the data were not stratified by both variables in the included studies. HCV epidemiology is highly variable both within and between countries and regions of origin. HCV is unevenly distributed in most countries, is usually concentrated in certain risk groups or birth cohorts, and may change over time as affected cohorts grow older. The prevalence of HCV in any given country is determined by the calendar time the virus was introduced into the country, when or if blood screening is performed, and the ongoing risk of HCV exposure [2,36]. Age is also an important predictor of HCV infection, which increases with age in most settings [2]. Regional HCV prevalence may be heterogeneous due to the wide range of anti-HCV prevalence of individual countries within the same region [34,35,37]. The most striking example is the Middle East & North Africa, which contains both Egypt, with very high anti-HCV prevalence (>14%), and Djibouti, which has a low prevalence (<1%). Our regional estimates may therefore have been influenced by the number of migrants from high burden countries such as Egypt in the North African region or Pakistan in the South Asian region. We could neither adjust for this nor estimate how much this may have affected the study results. The regional anti-HCV estimates in our study were slightly lower or higher than those reported in source countries. This may be because the migrants included in our review may have differed from the populations in their countries of origin in several ways.
First, the risk profile of the two populations may be different. The majority of country-level HCV estimates are based on data extrapolated or modeled from data obtained from adult groups that are either at low or high HCV risk, whereas these populations were excluded from our study [10, 31-35, 37, 38]. Second, the inclusion of children in our study may have decreased our overall rate compared to reported rates. When children were excluded from our sample, the overall prevalence increased from 1.9% to 2.2%, an estimate closer to global estimates [1][2][3]. Finally, migrants may have truly lower HCV rates compared to those living in source countries due to the "healthy immigrant" effect. Many migrants, excluding refugees, are self-selected and have a higher socio-economic status (SES) and higher levels of education as compared to those living in their countries of origin [39]. The data included in this review have a number of limitations. Age was missing for 50% of study subjects, and data in each region of origin were not stratified by age. We were therefore unable to adjust for both age and region of origin in the meta-regression, and this may explain some of the residual heterogeneity despite stratification by these variables. There were very few study subjects from certain geographic regions, such as South Asia, leading to low precision in this group. Measurement bias is possible but is unlikely to have had a large impact on the accuracy of our results, because the majority of studies (82.6%) used a third-generation EIA and prevalence estimates were similar for studies with and without confirmatory testing. Sources of grey literature were not systematically searched, and thus some data may have been missed. Finally, given the small number of studies reporting HCV PCR (N = 6, accounting for only 13% of the study population), we were not able to estimate the number of viremic HCV cases in migrants that could benefit from antiviral therapy. Despite these limitations and the uncertainty regarding all the sources of the heterogeneity of our study results, it is both plausible and likely that immigrants from intermediate and high HCV endemic countries have an HCV prevalence that is similar to or slightly lower than that in their source countries. The strength of this study is that it included a large number of migrants over four decades arriving in all major immigrant-receiving regions. Untreated HCV is an enormous health and economic burden that is projected to increase over the next decade unless asymptomatic infected individuals are detected and treated prior to developing end-stage liver disease [6]. Eradication of HCV is now within reach given the recent availability of highly effective, well tolerated, short-course, direct-acting antiviral treatments [9,40]. Identifying and treating all individuals with HCV infection is therefore more important than ever in order to decrease the rising economic and individual burden from HCV. The majority of migrants (>70%) received in Canada and most European countries originate from intermediate and high HCV prevalence countries. Recent European studies show that migrants make up a disproportionate number of HCV cases and have a higher prevalence of HCV compared to host populations [32,37]. Similarly, in Canada, migrants were estimated to account for 20% of HCV cases in 2002 [63].
Migrants generally lack traditional HCV risk behaviors, given that they are most likely to have been infected in their countries of origin through contaminated needles, procedures and blood products [64]. Neither the risk-factor-based program, which detects <50% of HCV cases, nor the new CDC birth cohort screening program will detect HCV in most migrants [10]. Despite the well-known differences in the global distribution of HCV and the potential increased risk of HCV in many migrants, only the UK has recently recommended HCV screening for migrants who were born in countries with intermediate or high seroprevalence of HCV antibodies (2% or greater). Adult migrants originating from intermediate and high HCV prevalence regions such as Africa, Asia and Eastern Europe are an ideal additional group to consider for targeted HCV screening. Given the limitations of our study, screening would serve, at least, to provide robust seroprevalence and viremia data at the country level to identify which migrants have the highest HCV infection rates. Several studies have shown that it is cost-effective to screen the general population at an anti-HCV prevalence of 3% with older interferon-based therapies [65,66]. The cost-effectiveness of screening and treatment with the new highly effective but costly medications is still to be determined [67,68]. Cost-effectiveness analyses of HCV screening in migrants would help guide the decision as to which migrants would benefit most from screening. The recent revolution in HCV treatment offers the promise of curing all infected patients. Recent studies have shown that HCV could be eliminated in the next 15-20 years with focused strategies to screen for and cure current infection as well as prevent new infections [40,69]. Additional groups at increased HCV risk will need to be identified and treated if this potential is to be realized. In this context, the results of our study suggest that adult migrants originating from intermediate or high endemic countries are at high risk for HCV and may benefit from targeted HCV screening.
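For reference, the seroprevalence categories defined in the Methods, and the 2% screening threshold discussed here, can be encoded in a small helper (our sketch, not code from the study):

    hcv_category <- function(prev) {
      cut(prev,
          breaks = c(-Inf, 0.01, 0.02, 0.03, 0.05, Inf),
          labels = c("very low", "low", "intermediate", "high", "very high"),
          right  = FALSE)
    }
    hcv_category(c(0.006, 0.019, 0.022, 0.056))
    # -> very low, low, intermediate, very high; the 2% screening threshold
    #    corresponds to the "intermediate" category and above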
A single variant sequencing method for sensitive and quantitative detection of HIV-1 minority variants

HIV drug resistance is a major threat to achieving long-term viral suppression in HIV-positive individuals. Drug-resistant HIV variants, including minority variants, can compromise response to antiretroviral therapy. Many studies have investigated the clinical relevance of drug-resistant minority variants, but the level at which minority variants become clinically relevant remains unclear. A combination of Primer-ID and deep sequencing is a promising approach that may quantify minority variants more accurately compared to standard deep sequencing. However, most studies that used the Primer-ID method have analyzed clinical samples directly. Thus, its sensitivity and quantitative accuracy have not been adequately validated using known controls. Here, we constructed defined proportions of artificial RNA and virus quasispecies and measured their relative proportions using the Primer-ID based, quantitative single-variant sequencing (qSVS) assay. Our results showed that minority variants present at 1% of quasispecies were detected reproducibly with minimal variation between technical replicates. In addition, the measured frequencies were comparable to the expected frequencies. These data validate the accuracy and reproducibility of the qSVS assay in quantifying authentic HIV minority variants, and support the use of this approach to examine the impacts of minority HIV variants on virologic response and clinical outcome.

With the Primer-ID approach, a consensus sequence for each starting RNA template is generated. Since each starting RNA template is tagged with a unique Primer-ID barcode, errors originating from subsequent steps, such as nucleotide misincorporation and deep sequencing, allelic skewing from differential amplification, and template resampling, are corrected using this approach. Thus, the resulting population of consensus sequences should accurately represent the original quasispecies population, and the approach holds great promise for quantifying minority variants in clinical samples. However, most published studies have applied the Primer-ID method directly to clinical samples 27,[32][33][34][35][36][37]. The sensitivity, reproducibility, and quantitative accuracy of the approach have not been adequately validated using quasispecies of known proportions. In the present study, we constructed artificial RNA pools and artificial viral quasispecies and determined the frequency of minority HIV variants using a Primer-ID based, quantitative single variant sequencing assay. The goal was to validate, using known controls, that the assay can detect minority variants reproducibly at or above 1% of the HIV-1 quasispecies with high precision.

Results
Determination of the background error rate. The single variant sequencing (SVS) method leveraged the Primer-ID approach and Illumina sequencing to quantify the abundance of different variants in viral quasispecies. We first determined the background error rate for the SVS method by selecting three plasmids containing the entire HIV-1 PR gene and three plasmids containing a partial RT gene segment (codons 1-237), and subjecting each individual plasmid to in vitro transcription, PCR amplification, and deep sequencing using the SVS procedure (Fig 2). We then selected three cell culture-derived HIV stocks and subjected each virus stock to the same SVS analysis. SVS analysis was performed for the PR and RT genes for plasmids and the PR gene for viruses.
Amino acids called erroneously were identified and their frequency determined at each position (see Supplemental Tables ST1 and ST2). The mean of the error frequencies at each position of the given sequence was calculated and reported as the background error rate per position for that particular gene. For both plasmid-derived RNA and viral RNA, the background error rate per position was found to be less than 0.1% in all samples (Fig 2).

Sensitive and quantitative detection of individual variants in plasmid-derived RNA quasispecies. To determine the threshold at which individual minority variants could be detected consistently, we generated artificial pools of RNA quasispecies derived from plasmids of known sequences. The concentration of each linearized plasmid was determined, and plasmids were combined in different ratios to generate three plasmid populations with defined proportions (Supplemental Table ST4). Two of the three populations included minority species at 1% mean abundance (i.e. pools B and C for PR; pools E and F for RT). RNA quasispecies were then generated by in vitro transcription of the plasmid pools. To evaluate the contribution of pipetting variation during the construction of the artificial pools, each pool was prepared in quadruplicate both by manual pipetting and by a liquid handling robot. The abundance of individual variants in each PR or RT artificial RNA pool is shown in Fig 3. The expected (theoretical) frequency of individual variants in each pool was calculated based on the measured concentrations of the linearized plasmids. The SVS analysis revealed that minority variants present at a mean 1% abundance of the quasispecies populations (i.e. Protease pool B p50V, Protease pool C p84V, Reverse Transcriptase pool E p82A, and Reverse Transcriptase pool F p151M) were detected in all replicates, although the measured mean abundance deviated from the expected abundance in some cases (observed mean abundances were 0.14% for Protease pool B p50V, 1.70% for Protease pool C p84V, 0.08% for Reverse Transcriptase pool E p82A, and 0.33% for Reverse Transcriptase pool F p151M). We observed minimal differences between technical replicates, and also between the two pipetting methods (liquid handling robot vs. manual pipetting). These results indicate that the SVS method reproducibly detects minority variants present at or above 1% abundance of the quasispecies.

Sensitive and quantitative detection of individual variants in virus-derived RNA quasispecies. To demonstrate that the SVS method can be applied to viral populations, we selected three cell culture-derived viruses that differed at two positions (82 and 84) in the PR gene and constructed four artificial viral mixtures with defined proportions (Supplemental Table ST5). Each artificial pool consisted of two minority variants ranging from 0.8-12.7% and one majority variant (>84.1%). The abundance of each virus was determined using SVS and compared with the expected (theoretical) abundance calculated based on p24 titers. The analysis showed that minority variants (i.e. the control virus) present at expected frequencies of approximately 1 to 3% were detected reproducibly, at frequencies comparable to the expected ones (4.49% vs. 3.21% expected, 3.23% vs. 2.38% expected, 2.41% vs. 1.57% expected, 0.84% vs. 0.77% expected; mix 1 to mix 4, respectively) (Fig 4). In addition, variation in measured frequencies between technical replicates was small. These results demonstrate that the SVS procedure reproducibly detected minority variants at or above ~1% of the viral quasispecies.
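A back-of-envelope sampling check (our addition, not a computation from the paper) illustrates why the number of tagged templates bounds sensitivity at the 1% level:

    p <- 0.01                       # minority variant at 1% of the quasispecies
    for (N in c(100, 300, 1000))    # N = consensus (Primer-ID) templates
      cat(N, "templates:", 1 - pbinom(0, size = N, prob = p), "\n")
    # 100 templates: ~0.63; 300: ~0.95; 1000: >0.9999 probability that the
    # variant is sampled at least once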
To further confirm the high sensitivity and accuracy of the SVS assay, the frequencies of authentic minority variants in virus Mix 4 and Mix 3 were compared to the frequencies of reads that were called erroneously (Fig 5). The frequency of authentic minority variants at positions 82 and 84 called correctly (green bars; 0.77% and 2.41% in virus Mix 4 and Mix 3, respectively) was significantly higher than the frequencies of erroneous calls (the variant with the highest frequency had a mean abundance of 0.143% for Mix 4 and 0.158% for Mix 3; p < 0.00001). Overall, >99.6% of the reads were called correctly in all four mixes. These results indicate that authentic minority variants present at as low as 0.8% frequency in the viral quasispecies could be distinguished accurately from the background errors.

Discussion
HIV-1 drug-resistant minority variants can compromise response to antiretroviral therapy. Although many studies have investigated the clinical impact of minority variants, the clinical significance of HIV-1 drug-resistant minority variants continues to be debated 27,29,30,37-40. Furthermore, the level at which drug-resistant minority variants become clinically relevant remains unclear. The Primer-ID method is one approach that can reduce the accumulation of erroneous variants 41. Correctly calling minority variants while simultaneously reducing erroneous base calls is paramount for accurate determination of minority drug-resistance mutations and studies of their impacts on treatment response. Here, we leveraged the Primer-ID approach and Illumina deep sequencing and report the validation of a quantitative Single Variant Sequencing (SVS) method for accurate and sensitive detection of minority HIV-1 variants. We constructed artificial RNA quasispecies of defined proportions by in vitro transcription of plasmids and by direct extraction of cell culture-derived viruses, then interrogated the proportions of the RNA quasispecies using the SVS method. To accurately identify and quantify authentic HIV-1 minority variants, it is critical that an assay can reliably distinguish authentic low frequency variants from errors generated during nucleic acid amplification and the deep sequencing process. We first mixed plasmids carrying HIV-1 PR and RT gene segments with known mutations, then generated RNA quasispecies by in vitro transcription of the plasmid pools. Expected proportions of the different plasmids were calculated based on plasmid concentrations to include one or two minority variants (less than 20%) and one or two majority variants (20% or higher). Our results showed that the assay was highly sensitive, accurately detecting minority variants at as low as 1% of the quasispecies population, with a low background error rate of <0.1%. The low error rate is consistent with a recent report by Howison et al. that also used the Primer-ID method 41. Importantly, our data demonstrated a high level of reproducibility among technical replicates. In some cases, the abundance of variants (both majority and minority variants) deviated from the expected abundance calculated based on measured plasmid concentrations. The deviations for individual quasispecies were consistent among technical replicates in all quasispecies (i.e. reproducibly higher or lower than the expected values and with a similar magnitude) (Fig 3).
We speculate that the observed differences were a result of the concentration measurements of the linearized plasmids and/or of stochastic events during in vitro transcription (i.e. differences in the amount of RNA transcribed in vitro from different plasmids). Seifert et al. 42 combined Primer-ID with MiSeq sequencing to study heterogeneous HIV-1 populations by mixing five viruses in equal proportions (20% each), calculated based on RNA copy numbers. However, they observed proportions ranging between 6% and 38%. Similar deviations were reported by Howison et al. 41 in their analysis of artificial quasispecies based on RT-PCR or plasmid DNA measurements. Taken together, these data suggest that in vitro plasmid- or RNA-based quasispecies may be adequate for the evaluation of assay sensitivity, but may not be sufficiently robust for determining the quantitative accuracy of variant populations in quasispecies. Going forward, an improved method for constructing accurate proportions of viral quasispecies is essential for validating the quantitative accuracy of the SVS assay. To demonstrate that the SVS method is also applicable to viral populations, we generated defined mixtures of HIV-1 virus quasispecies. To minimize the impact of concentration measurements on the accuracy of the proportions of each viral population, we first combined two variants at a 1:4 ratio, and then spiked the mixture into the third virus variant to generate four viral quasispecies, each consisting of two minority variants in a fixed ratio (1:4) ranging from 0.8% to 12.7% abundance and one majority variant at >84% abundance. This approach was expected to generate different quasispecies with a fixed ratio between the two minority variants. Indeed, our data (Fig 4) demonstrated a consistent ratio between the two minority variants across all four viral quasispecies, with the observed ratio (1:2) deviating slightly from the expected value (1:4). This high level of consistency across the four quasispecies suggests that the differences likely resulted from an over- or underestimation of viral RNA copy number based on p24 measurements. Overall, the SVS assay demonstrated high reproducibility in identifying and quantifying both majority and minority variants across different sample types, with minor errors that do not impact data interpretation. Importantly, these results demonstrate that the SVS method can consistently quantify authentic minority variants present at as low as 0.8% of the viral quasispecies. This study has several limitations. First, the initial Primer-IDs label individual RNA templates during reverse transcription. The SuperScript IV Reverse Transcriptase possesses no proofreading activity and may have introduced errors during first-strand cDNA synthesis that could not be corrected in subsequent steps 43. Second, PCR amplification following cDNA synthesis may have introduced additional errors in the Primer-ID barcode sequence, thereby generating offspring Primer-IDs, the effect of which could be minimized using a cutoff model proposed by Zhou et al. 44. However, the use of the high-fidelity Platinum SuperFi DNA Polymerase should reduce the likelihood of additional errors introduced during the PCR step. In summary, we have validated the SVS method using known controls, and showed that the assay consistently detects minority variants at or above the 1% level with high precision.
These results support the use of the quantitative single variant sequencing assay to examine the impacts of minority HIV-1 drug-resistant variants on virologic response and clinical outcome.

Methods
Construction of artificial RNA pools with defined proportions. Plasmids (p8E5, p50V, p84V, p82A and p151M) with unique amino acid polymorphisms (Supplemental Table ST7) were obtained from the NIH AIDS Research and Reference Reagent Program. These constructs carry a 1060-bp fragment spanning from the gag gene to codon 237 of the reverse transcriptase gene (nucleotides 2201 through 3261) of the HIV-1 genome (GenBank accession no. K03455) in the pCR 2.1-TOPO vector (Invitrogen). To confirm the sequence of the insert, plasmids were transformed into E. coli (One Shot TOP10 Competent Cells, ThermoFisher, Cat # C404010), single colonies were selected and grown in Luria Broth with 25 μg/ml kanamycin, and plasmid DNA was isolated using the NucleoSpin Plasmid (NoLid) kit (Macherey-Nagel, Cat # 740499). The sequence of the PR and RT gene segments in each plasmid was confirmed by Sanger sequencing using the following primers: PRC (forward): 5'-CTCCCCCTCAGAAGCAGGAGCCGATAGACAAGGAACTGTATCC and RT3 (reverse): 5'-TATCAGGATGGAGTTCATAAC. Next, plasmids were linearized using the BamHI-HF restriction enzyme (New England Biolabs); the length of each linearized plasmid was verified by agarose gel electrophoresis, and the DNA concentration was measured using the Qubit dsDNA HS Assay Kit on a Qubit 4 fluorometer (Invitrogen). Purified linearized plasmids were mixed to create 12 pools with defined proportions based on the measured DNA concentrations (Supplemental Table ST4; Fig 1a). Six pools were prepared by hand pipetting, and six identical pools were constructed by robot pipetting using an Eppendorf epMotion 5070 liquid handling robot. To generate artificial RNA pools with defined proportions (Supplemental Table ST4; Fig 1a), individual plasmids or plasmid pools were transcribed in vitro using the T7 RiboMAX Express Large Scale RNA Production System (Promega, Cat # P1320), and the transcribed RNA was purified using the NucleoSpin RNA Clean-up Kit (Macherey-Nagel, Cat # 740948). The RNA concentration was measured using a Nanodrop 2000 (Thermo Fisher) and adjusted to 100,000 copies/μL for subsequent reverse transcription and PCR.

Single variant sequencing (SVS). A schematic of the SVS procedure is shown in Fig 1b; it includes reverse transcription, PCR amplification, pooling, and Illumina sequencing. The SVS procedure utilizes a unique design of reverse transcription (RT) primers for priming first-strand cDNA synthesis. Each RT primer molecule includes a 14-nucleotide random sequence constituting a unique Primer-ID tag, which is flanked by a sequence at the 3' end that anneals to the RNA template and a sequence at the 5' end that serves as the annealing site for PCR primers (Supplemental Table ST6). The RNA from in vitro transcription, or RNA isolated from virus particles (Fig 1a), serves as the template for cDNA synthesis using SuperScript IV Reverse Transcriptase (SuperScript IV First-Strand Synthesis System, Invitrogen) and amplicon-specific (protease or reverse transcriptase) RT primers that contain the Primer-ID tag. Each reaction contained a 10²-10³-fold molar excess of primers to ensure that each RNA template was reverse transcribed to generate cDNA labeled with a unique Primer-ID tag.
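A quick capacity check (our sketch; the template count is an assumed round number) shows why a 14-nucleotide tag is large enough to label each template essentially uniquely:

    tags <- 4^14                    # ~2.7e8 possible 14-nt Primer-IDs
    n    <- 1e4                     # assumed number of templates tagged
    1 - exp(-n * (n - 1) / (2 * tags))
    # ~0.17: the (birthday-approximation) chance that any two of 10,000
    # templates draw the same tag; the expected number of such collisions,
    # n*(n-1)/(2*tags), is only about 0.19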
The resulting cDNA was purified using NucleoSpin Gel and PCR Clean-up (Macherey-Nagel, Cat# 740609), then amplified using Platinum SuperFi DNA polymerase (Invitrogen) with forward and reverse PCR primers that included variable-length (4-8 nucleotide) barcode sequences specific to each sample. The PCR products were separated by agarose gel electrophoresis, purified using NucleoSpin Gel and PCR Clean-up (Macherey-Nagel, Cat# 740609), and tailed with the index sequences required for Illumina sequencing. Amplified DNA was combined into an equimolar pool, gel purified and quantified by qPCR (KAPA Library Quantification Kit for Illumina sequencing platforms, Kapa Biosystems), then sequenced on the Illumina MiSeq using the v3 600-cycle kit and a 20% PhiX spike-in. The protease (PR) gene segment (amino acids 8-99) and reverse transcriptase (RT) gene segment (amino acids 11-133) were sequenced. SVS analysis was performed for the PR and RT genes for plasmids and the PR gene for viruses.

Bioinformatics. Raw Illumina MiSeq 301 bp × 2 reads were de-multiplexed into individual samples according to the unique variable-length barcode combination (4-8 bp in length) on each end. Additional filtering criteria included an exact match to the PCR primer sequences, an average quality score of 30 or higher (<0.001 error rate in raw reads), and a minimum length of 270 bp for each paired-end read. Paired-end reads were joined using FLASh (http://ccb.jhu.edu/software/FLASH/) with a minimum 10-base overlap. For each sample, the joined reads were grouped by the unique 19-bp tags (including 5 bp of control bases) introduced during reverse transcription. Consensus sequences were called for each unique tag via alignment using MAFFT (http://mafft.cbrc.jp/alignment/software/) when three or more reads shared the same tag. Consensus sequences that contained ties at certain positions, resulting in degenerate bases, were excluded due to ambiguity. The resulting consensus sequences were aligned against the corresponding reference sequences and manually inspected and corrected for artificial gaps introduced by the sequencing process. Translation of codons and summarization of mutations were carried out using custom scripts in R (https://www.r-project.org/) with the Biostrings package (http://bioconductor.org/packages/release/bioc/html/Biostrings.html).
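The grouping-and-consensus step just described can be sketched as follows (a minimal sketch with an assumed data layout, not the authors' scripts; reads are taken as pre-aligned to equal length, which MAFFT handles in the actual pipeline):

    call_consensus <- function(reads, tags, min_reads = 3) {
      out <- list()
      for (tag in unique(tags)) {
        grp <- reads[tags == tag]
        if (length(grp) < min_reads) next          # require >= 3 reads per tag
        mat  <- do.call(rbind, strsplit(grp, ""))  # positions as columns
        cons <- apply(mat, 2, function(col) {
          tb <- sort(table(col), decreasing = TRUE)
          if (length(tb) > 1 && tb[1] == tb[2]) NA else names(tb)[1]  # tie = ambiguous
        })
        if (!anyNA(cons)) out[[tag]] <- paste(cons, collapse = "")  # drop degenerate calls
      }
      out
    }

    reads <- c("ACGT", "ACGT", "ACGA", "ACGT", "TTTT")
    tags  <- c("t1",   "t1",   "t1",   "t1",   "t2")
    call_consensus(reads, tags)   # t1 -> "ACGT"; t2 dropped (fewer than 3 reads)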
Metagenomic next-generation sequencing of samples from pediatric febrile illness in Tororo, Uganda

Febrile illness is a major burden in African children, and non-malarial causes of fever are uncertain. In this retrospective exploratory study, we used metagenomic next-generation sequencing (mNGS) to evaluate serum, nasopharyngeal, and stool specimens from 94 children (aged 2-54 months) with febrile illness admitted to Tororo District Hospital, Uganda. The most common microbes identified were Plasmodium falciparum (51.1% of samples) and parvovirus B19 (4.4%) from serum; human rhinoviruses A and C (40%), respiratory syncytial virus (10%), and human herpesvirus 5 (10%) from nasopharyngeal swabs; and rotavirus A (50% of those with diarrhea) from stool. We also report the near complete genome of a highly divergent orthobunyavirus, tentatively named Nyangole virus, identified from the serum of a child diagnosed with malaria and pneumonia, a Bwamba orthobunyavirus in the nasopharynx of a child with rash and sepsis, and the genomes of two novel human rhinovirus C species. In this retrospective exploratory study, mNGS identified multiple potential pathogens, including 3 new viral species, associated with fever in Ugandan children.

Introduction
The evaluation of children with fever is challenging, particularly in low- and middle-income countries (LMIC). A febrile child in sub-Saharan Africa may have a mild self-resolving viral infection or may be suffering from bacterial sepsis or malaria, major causes of disability and death [1,2]. Historically, febrile illness in much of Africa has been treated empirically as malaria due to the limited availability of diagnostics and the risk of untreated malaria progressing to life-threatening illness. This strategy changed in 2010 following revised guidelines from the World Health Organization (WHO), which recommended limiting malaria therapy to those with a confirmed diagnosis [3]. However, standard recommendations for management of febrile children who do not have malaria are lacking. Increased knowledge about the prevalence of non-malarial pathogens associated with fever is needed to inform management strategies for febrile children [2], especially in low resource settings. Advances in genome sequencing hold promise for addressing global infectious disease challenges by enabling unbiased detection of microbial pathogens, which can be used to design directed diagnostics and improve surveillance in LMIC [4][5]. The unbiased detection offered by sequence-based diagnostics has led to the successful identification of pathogens in some rare or complex cases where traditional methods have failed [6][7][8][9][10]. Sequence-based diagnostics are complementary to serological assays and may contribute to a better understanding of pathogen landscapes in LMIC. Towards this aim, we conducted an exploratory retrospective mNGS analysis on samples available from a cohort of children hospitalized in rural Uganda with febrile illnesses to characterize potential pathogens associated with fever. The results, which include the detection of 3 novel viral species, suggest that mNGS will likely be a valuable tool in the arsenal of assays used to understand the microbial landscape of human infections.

Metagenomic sequencing findings
mNGS was performed on RNA extracted from 90 serum, 90 NP swab, and 10 stool samples following library preparation.
A mean of 11.5 million (IQR 6.4-15.2 million) paired-end reads were obtained per sample; sequencing statistics are in S2 Data. For one batch of serum samples, only a single read, rather than paired-end reads, was produced. Bioinformatic analysis was performed using the IDseq pipeline (see Methods).

[Table 1. Overview of the patients enrolled in the study, by clinical category (number of patients in category), age (mean, months), and gender. The "other" category includes: unknown (10), urinary infection (1), meningitis (2), hepatitis (1), and fever (1). Information on gender was missing for three children and on age for two.]

In this section, we discuss the mNGS results identified in each sample type. Detailed findings on the microbes identified in every patient, along with the total reads per million (rpM), are reported in Table B and Table C in S1 Data, respectively.

mNGS of serum
At least one microbial species was detected in 60 (66.7%) of the serum samples; more than one microbe was detected in 11 (12.2%) samples (Fig 1A). No microbial species were identified in the serum of 30 (33.3%) individuals. The most commonly identified microbe was Plasmodium falciparum (51.1% of samples) (Fig 1A). Multiple viruses were detected from serum in patients with Plasmodium infections (10 of 46 (21.7%) samples; S1A Table). Three of the four identified parvovirus B19 cases were associated with P. falciparum. Additionally, GB virus C and torque teno virus (TTV), which are of unknown clinical significance [18,19], were identified in the serum of 25 (27.8%) and 37 (41.1%) children, respectively. There have been reports of associations between immunosuppression and TTV abundance [20,21]. Interestingly, a prior study has reported a higher abundance of TTV in children with fevers [22].

mNGS of stool
Among the 10 stool samples collected and sequenced, potential non-bacterial pathogens were detected in 9/10 samples. The three most common microbes identified were rotavirus A (50%), Cryptosporidium (40%), and human parechovirus (40%). Our sequencing did not provide enough information to type the identified human parechovirus. Seven children had additional microbes identified: two had rotavirus A and human parechovirus, and one each had rotavirus A and Cryptosporidium, rotavirus A and enterovirus, Cryptosporidium and human parechovirus, Cryptosporidium and Norwalk virus, and Giardia and human parechovirus. Five samples also had Blastocystis hominis, a protozoan of uncertain pathogenicity.

Genomic characterization of viruses
Representative genomes for all viruses identified in this study were assembled and deposited in GenBank (Accession numbers: MH685676-MH685701, MH685703-MH685719, MH684286-MH684293, MH684298-MH684334). In this section, we describe in detail a novel orthobunyavirus identified in the serum of one individual, two novel HRV-C species identified in two individuals, and the diversity of influenza B viruses assembled from the nasopharynx of five individuals. We studied the genomic diversity of influenza B in detail, given its potential implications for vaccine design. We did not include influenza A in this analysis, as we were able to assemble a complete genome sequence from only one individual.

Orthobunyaviruses
Serum from one patient admitted with a clinical diagnosis of malaria and pneumonia contained a novel orthobunyavirus in addition to P. falciparum. Assembly of a near-complete genome and comparison with existing orthobunyavirus genomes indicated that this sequence includes 97.5%, 100% and 91% of the L, M, and S coding regions, respectively (Fig 2).
Average read coverage across the segments was 86-fold. Phylogenetic comparison showed that the novel virus was significantly divergent from known orthobunyaviruses, sharing 44.9-55.1% amino acid identity with the closest known relatives, Calchaqui virus, Kaeng Khoi virus, and Anopheles A virus (Figs 2, S2, S3 and S4). The virus was isolated from a patient from Nyangole village, Tororo District; hence, we propose the name "Nyangole virus", consistent with nomenclature guidelines for the family Bunyaviridae. In addition, a second orthobunyavirus, Bwamba virus, was identified in the NP swab sample from a patient admitted with rash, sepsis, and diarrhea. Insufficient sample and sequencing reads precluded genome assembly of this virus.

[Fig 1. Microbes identified per febrile child. Each column represents a febrile child, and the color bars represent the total reads per million (rpM) of a particular microbe present in the sample. Bacterial species were not considered in Fig 1B; results for GB virus C and torque teno virus, which are of uncertain clinical significance, are not included.]

Human rhinoviruses
Within the rhinovirus species, we assembled de novo a total of 13 HRV-C (mean coverage: 39-fold) and 13 HRV-A (mean coverage: 268-fold) genomes (>500 bp). Of these, 10 HRV-A and nine HRV-C genomes had complete coverage of the VP1 region, which is used to define enterovirus types [28]. Unique HRV types are defined by <73% similarity in the VP1 gene. As such, we found three HRV-A and eight HRV-C types in this cohort. One individual harbored two distinct HRV-A types (genome pairwise identity = 75.3%, VP1 pairwise identity = 67.1%). Additionally, we assembled two novel HRV-C species from two patients, one admitted with gastroenteritis (patient ID: EOFI-014) and one with pneumonia, malaria and diarrhea (patient ID: EOFI-133); these shared 70.1% and 70.7% nucleotide sequence identity at VP1 with the closest known HRV-C (Accessions JQ245968 and KF688606, respectively) (Fig 3). The Picornavirus Working Group has established that novel HRV-Cs should exhibit at least 13% nucleotide sequence divergence in the VP1 gene [29], qualifying these two as novel.

Influenza B virus
We assembled influenza B genome segments (>500 bp, mean coverage: 4-fold) from six of seven samples containing influenza B virus (one sample had insufficient sequencing reads).

Discussion
A better understanding of the microbial agents causing fever in African children is needed to inform the development of better diagnostic algorithms, therapeutic guidelines and public health strategies. We performed an exploratory retrospective study with unbiased mNGS on various tissue types to determine whether this technology has potential to contribute to our understanding of the etiologies of fever in African children. In this limited sample set, mNGS identified a wide range of potential pathogens, including three novel viral species. Other studies evaluating causes of febrile illness in African children have focused on a limited number of pathogens [30][31][32][33]. In a study of febrile children in Tanzania utilizing serologic, culture, and molecular assays, viruses accounted for 51% of lower respiratory infections, 78% of systemic infections, and 100% of upper respiratory infections [34]. Additionally, in the above study, 9% of the children had malaria and 4.2% had bacteremia.
In febrile children in Kenya, reported pathogens were spotted fever group Rickettsiae (22.4%), influenza (22.4%), adenovirus (10.5%), parainfluenza virus 1-3 (10.1%), Q fever (8.9%), RSV (5.3%), malaria (5.2%), scrub typhus (3.6%), human metapneumovirus (3.2%), group A Streptococcus (2.3%) and typhus group Rickettsiae (1.0%) [35,36]. Another study reported bacteremia in 19.1% of children admitted to a referral hospital in Uganda [37]. Additionally, in patients (across all age groups) with severe febrile illness, bacteremia was detected in 10.1% in North Africa, 10.4% in East Africa, and 12.4% in West Africa [38]. In this small study, Plasmodium falciparum was identified in the serum of 51.1% of the children, human rhinoviruses A and C dominated in the nasopharyngeal swabs of 40% of the children, and rotavirus A was identified in the stool samples of 50% of the children studied. For 20% of NP swabs and 33.3% of serum samples, no microbial species met our thresholds for detection. These proportions are consistent with previous reports [32,34,35,39]. Unbiased sequencing approaches are designed to identify all potential pathogens but have also been limited by high cost and infrastructure needs. Given the exploratory nature of this mNGS study, we cannot ascertain population-level incidence or prevalence of particular infections. As expected, in the serum samples, P. falciparum was most commonly identified [40]. Some discrepancies were seen compared to blood smear readings, with false positive smears probably due to errors in slide reading, a common problem in under-resourced clinics [41], and false negative smears due to the expected greater sensitivity of mNGS for identification of P. falciparum. In children with only sub-microscopic parasitemia, it is uncertain whether fevers can be ascribed to malaria, and in fact many children had both P. falciparum and additional microbes identified. Interestingly, three of the four cases of parvovirus B19 were found in association with P. falciparum; this co-infection has been associated with severe anemia with life-threatening consequences [42][43][44]. For NP and stool samples, given that the nasopharynx and intestines are normally colonized with commensal bacteria [45][46][47][48], and the lack of samples from healthy Ugandan controls, we focused on non-bacterial species. HRV was the most commonly identified virus in NP swab samples, consistent with findings previously reported in sub-Saharan Africa and developed countries [49][50][51][52][53]. HRV-C was most frequently encountered (54.1%), followed by HRV-A (43.2%) and HRV-B (2.7%), similar to the distribution of HRVs previously reported in Kenya [49]. We identified two novel HRV-C species, which were approximately 70% identical to the most closely related previously described HRV-C species [29]. Overall, we detected at least three HRV-A and eight HRV-C types co-circulating in Tororo District. Of note, during the same collection period, a lethal HRV-C outbreak was reported in chimpanzees in Kibale National Park, in western Uganda [54]; that HRV-C was modestly related to an isolate observed in our study (74% nucleotide identity; 81% amino acid identity) (Fig 3) [54]. Our results confirm that a wide spectrum of HRVs infects Ugandan children. In addition to HRV, we detected a number of other known respiratory viruses, including RSV, human parainfluenza viruses, human coronaviruses, and adenovirus. Diarrheal disease is one of the leading causes of death in children in Africa [55].
Approximately 48% of febrile children in our study presented with diarrhea, but due to logistical constraints stool specimens were available for only 10 cases. Rotavirus A, the leading cause of pediatric diarrhea worldwide [56], was the most commonly identified microbe in this cohort. Rotavirus vaccination, known to be highly effective, is yet to be implemented in Uganda, but the need is clear [56]. In addition to rotavirus A, we detected Cryptosporidium, norovirus, Giardia, B. hominis and several enteroviruses in stool specimens. Enteroviruses, HRV-C, and mamastrovirus were also identified in the serum of three children with clinical diagnoses of gastroenteritis or diarrhea. Unbiased inspection of microbial sequences from sera revealed a novel member of the orthobunyavirus genus, tentatively named Nyangole virus, which was identified along with P. falciparum in a child with clinical diagnoses of malaria and pneumonia. The virus was surprisingly divergent from known viruses, with an average amino acid similarity of 51.6% to its nearest known relatives, including Calchaqui, Anopheles A and Kaeng Khoi viruses. Mosquitoes have been proposed as a vector for Calchaqui and Anopheles A viruses; Kaeng Khoi virus has been isolated from bedbugs [57][58][59]. Antibodies to these viruses have been detected in human sera, but their role as human pathogens is uncertain [46][47][48][49]. However, other orthobunyaviruses are responsible for severe human illnesses (e.g., Oropouche, Bunyamwera virus, California encephalitis virus, La Crosse virus, Jamestown Canyon virus, and Cache Valley virus) [60]. While the coverage depth of the assembled Nyangole virus genome in our patient suggests significant viremia, it is unknown whether the identified virus was responsible for the patient's febrile illness. NP swab analysis identified another orthobunyavirus, Bwamba virus, in a child admitted with rash, sepsis and diarrhea. This virus has previously been described as a cause of fever in Uganda [61]. Our identification of two orthobunyaviruses, including one novel virus, in a small sample of febrile Ugandan children suggests that the landscape of previously unidentified viruses that infect African children and potentially cause febrile illness is significantly underexplored. In addition to pathogen identification, the capacity of mNGS to provide viral strain resolution suggests its utility for monitoring vaccine efficacy by assessing the prevalence of vaccine-targeted versus non-targeted strains. In the case of influenza B virus, the WHO-recommended vaccine for 2013/2014 was highly conserved relative to the virus present in Uganda during that season. Our exploratory pilot study had important limitations. First, our samples were not collected randomly, but rather were a retrospective convenience sample due to logistical constraints; as such, the results are not necessarily representative of pathogens infecting Ugandan children. In particular, the lack of identification of bacteremia in study subjects may have been due to a relative paucity of severe illness compared to that in other studies. Second, the samples were collected only over a period of three months (October-December); hence, we are unable to comment on seasonal trends in identified pathogens. Third, clinical evaluation of children followed the standards of a rural African hospital, so diagnostic evaluation was limited to physical examination and malaria blood smears. This study was not designed to compare mNGS to other clinical or laboratory assays.
It is clear that more will be learned by linking rigorous clinical evaluation with mNGS results, thereby more comprehensively assessing associations between clinical syndromes and specific pathogens. Fourth, healthy controls from the same population were not recruited in this study; hence, we were unable to include them in the background model to filter out commensal microbial species specific to the Ugandan microbiome. Fifth, we were unable to use orthogonal techniques such as PCR to confirm the microbial species identified by mNGS due to lack of sample availability. Given these limitations, we hesitate to integrate all the clinical specimens on a per-sample basis, and rather present a portrait of all the microbes identified in febrile children. For readers interested in a breakdown of all microbial species from all samples collected per child, Table B in S1 Data contains this information. Future metagenomic studies should include rigorous clinical and microbiological phenotyping, along with samples collected from healthy individuals. This would facilitate the design of an appropriate background model to identify potential pathogens, with confirmation using orthogonal techniques. Despite these limitations, our study provides an important snapshot of causes of fever in African children that could not be identified by available diagnostics, and suggests that mNGS will be an important tool for future investigations. Given the yield of novel species in this small study alone, it is likely that expanded use of this approach will continue to yield an increasingly rich portrait of microbial diversity associated with disease in this region.

Methods
This study was approved by the Makerere University Research and Ethics Committee, the Uganda National Council of Science and Technology, and the University of California, San Francisco Committee on Human Research. Written informed consent was obtained from the parent or guardian on the child's behalf for all child participants enrolled in this study.

Enrollment of study subjects
We studied children admitted to Tororo District Hospital, Tororo, Uganda, with febrile illnesses. Potential subjects were identified by clinic staff, who notified study personnel, who subsequently evaluated the children for study eligibility. Inclusion criteria were: 1) age 2-60 months; 2) admission to Tororo District Hospital for acute illness; 3) documentation of axillary temperature >38.0°C on admission or within 24 hours of admission; and 4) provision of informed consent from the parent or guardian for study procedures. The only exclusion criterion was unwillingness or inability of parents/guardians to provide consent.

Sample collection
Serum and nasopharyngeal (NP) swab samples were collected from 90 children each; for four children, only one of the two sample types was successfully collected. Although 45 (47.9%) of the children had a presenting symptom of diarrhea, stool samples were available for only 10 due to logistical constraints. All samples that were collected were processed and included in the analysis.

Study specimens
NP swabs and serum were collected from each enrolled subject within 24 hours of hospital admission. Approximately 5 ml of blood was collected by phlebotomy, the sample was centrifuged at room temperature, and the serum was then stored at -80°C. NP swab samples collected with FLOQSwabs (COPAN) were placed into cryovials with Trizol (Invitrogen) and stored at -80°C within ~5 min of collection.
For subjects with acute diarrhea (≥3 loose or watery stools in 24 hours), stool was collected into clean plastic containers and stored at -80°C within ~5 min of collection. Samples were stored at -80°C until shipment on dry ice to UCSF for sequencing.

Clinical data

Clinical information was obtained from interviews with parents or guardians, with specific data entered onto a standardized case record form that included admission diagnosis and physical examination as well as malaria blood smear results. For malaria diagnosis, thick blood smears were Giemsa stained and evaluated by Tororo District Hospital laboratory personnel following routine standard-of-care practices. No efforts were made to improve on routine practice, so malaria smear readings represent routine standard-of-care rather than optimal quality-controlled reads.

Metagenomic next-generation sequencing (mNGS)

After shipment to the University of California, San Francisco, RNA was extracted from clinical samples as well as positive (HeLa cells) and negative (water) controls, and unbiased cDNA libraries were generated using previously described methods (see sections "Sequencing library preparation" and "Metagenomic Library Preparation", respectively) [62,5]. Barcoded samples were pooled, size selected (Blue Pippin), and run on an Illumina HiSeq2500 to obtain 135 base pair (bp) paired-end reads.

Bioinformatic analysis and pathogen identification

Microbial pathogens were identified from raw sequencing reads using the IDseq (v1.6) Portal (https://idseq.net), a cloud-based, open-source bioinformatics platform designed for detection of microbes from metagenomic data. IDseq is a scalable, cloud-based implementation of a previously published pipeline (see Fig 1 in [11]). In brief, initial host read filtering is performed using the Spliced Transcripts Alignment to a Reference (STAR) algorithm [17], followed by removal of duplicate, low-quality, and low-complexity sequences [14,16]. Next, reads are aligned once again to the host genome of interest using bowtie2 [12] to remove any remaining host reads. The non-human reads are then aligned to the NCBI nucleotide and protein databases using GSNAPL and RAPSearch, respectively [13,15]. Additionally, reads that were identified as HHV-5 were assessed individually using BLAST to verify specificity to this virus. All IDseq scripts and user instructions are available at https://github.com/chanzuckerberg/idseq-dag and the graphical user interface web application for sample upload is available at https://github.com/chanzuckerberg/idseq-web. To distinguish potential pathogens from ubiquitous environmental agents, including laboratory reagent contaminants and skin commensal flora, a Z-score was calculated for both nucleic acid and protein alignments for each genus, relative to a background of non-templated ("water only") controls in addition to a previously published set of uninfected clinical mNGS samples [11]. CSF samples acquired through lumbar puncture from uninfected controls are included in this background model because they offer extensive representation of the skin microbiome, given the need to puncture the skin. Only human papillomavirus was identified in the positive (HeLa cell) controls. For this study we report species with greater than 0 reads per million (rpM) and a Z-score > 0 (for both nucleic acid and protein alignments) detected in the serum, stool, and NP samples.
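To make these reporting thresholds concrete, the following is a minimal Python sketch of genus-level background filtering of the kind described above. The table layout, column names, and the use of a simple (mean, standard deviation) Z-score are illustrative assumptions on our part; the authoritative implementation lives in the IDseq repositories linked above.

```python
import pandas as pd

# Illustrative rpM tables: rows = genera, columns = samples.
# IDseq computes Z-scores separately for nucleotide (NT) and
# protein (NR) alignments; one table is shown here for brevity.
background = pd.DataFrame(
    {"water_1": [0.0, 120.0], "water_2": [0.1, 90.0]},
    index=["Rotavirus", "Cutibacterium"],
)
sample = pd.Series({"Rotavirus": 2500.0, "Cutibacterium": 100.0})

# Per-genus background statistics from the non-templated controls.
bg_mean = background.mean(axis=1)
bg_std = background.std(axis=1).replace(0.0, 1e-6)  # avoid divide-by-zero

# Z-score of the patient sample relative to the background model.
z = (sample - bg_mean) / bg_std

# Reporting rule used in this study: rpM > 0 and Z-score > 0.
reported = sample[(sample > 0) & (z > 0)]
print(reported)  # the contaminant-like genus is filtered out
```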
We chose to report all species satisfying these criteria, rather than restricting to particular species, to offer an unbiased representation of microbes present in the sample. IDseq uses the CD-HIT-DUP tool to compress duplicate reads; hence, the final rpM assignment in a given sample represents coverage across the genome, rather than a single portion. Consistent with previous studies, low levels of "index bleed-through" or "barcode hopping" (assignment of sequencing reads to the wrong barcode/index) were observed within the non-templated water control samples [63]. To account for barcode mis-assignment, when a microbe was found in more than one sample, it was reported only when present at levels at least four times the level of mis-assigned reads observed in the control samples. Given the extremely high levels of rotavirus found in stool samples, these samples were run in duplicate, and only microbes identified in both replicates and present at levels at least four times the number of reads mis-assigned in the control samples were reported. If the reads identified for a given microbe were not species-specific, we reported the corresponding genus. For NP and stool samples, because the nasopharynx and intestines are normally colonized with commensal bacteria [45-48], and because of a lack of healthy Ugandan NP and stool samples to serve as controls, only non-bacterial species were reported, though we did analyze NP microbiome diversity (see below).

Genome assembly, annotation and phylogenetic analysis

To more comprehensively characterize the genomes of identified microbes, the paired-read iterative contig extension (PRICE) assembler [16] and the St. Petersburg genome assembler (SPAdes) [64] were used to de novo assemble short read sequences into larger contiguous sequences (contigs). Assembled contigs were queried against the National Center for Biotechnology Information (NCBI) nucleotide (nt) database using the basic local alignment search tool (BLAST) to identify the closest related microbes. GenBank annotation files from genome sequence records corresponding to the highest scoring alignments were used to identify potential features within the de novo assembled genomes. Geneious v10.3.2 was used to annotate newly assembled genomes. Reference genomes for multiple sequence alignments and phylogenetic analyses were downloaded from NCBI. Multiple sequence (nucleotide) alignments were generated using the default settings in MUSCLE v3.8.1551 [65], and ModelTest-NG v0.1.5 was used to identify the best-fitting evolutionary model. Using the best-fitting model of evolution, we reconstructed a maximum-likelihood phylogeny with RAxML-NG v0.6.0 using default settings [66]. Annotation of protein domains in the novel orthobunyavirus was performed using the InterPro webserver [67] as well as direct alignment against previously known orthobunyaviruses. The TOPCONS webserver [68] was used for the identification of transmembrane regions and signal peptides, and the NetNGlyc 1.0 Server (http://www.cbs.dtu.dk/services/NetNGlyc/) for the identification of glycosylation sites.

Evaluation of NP microbiome diversity

We applied the Shannon diversity index (SDI) to evaluate alpha diversity of microbes identified in NP samples. For this analysis, patients were stratified into two categories based on clinical assignment: respiratory infections (admitting diagnosis of pneumonia, respiratory tract infection, or bronchiolitis; n = 52) and all other syndromes (n = 39); cases with unknown admitting diagnosis were excluded.
SDI was calculated in R using the vegan v2.4.4 package on genus-level reads-per-million values for all microbes, including bacteria. A Wilcoxon rank-sum test was used to evaluate differences in SDI between patients in the two categories.
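As a rough illustration of this analysis, the sketch below re-implements the comparison in Python (the study itself used R's vegan package). The two small rpM matrices are fabricated for the example, and the Shannon index is computed directly from genus proportions; the Wilcoxon rank-sum test is the Mann-Whitney U test in SciPy.

```python
import numpy as np
from scipy.stats import mannwhitneyu  # Wilcoxon rank-sum test

def shannon_index(counts):
    """Shannon diversity index from genus-level rpM values for one sample."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

# Fabricated genus-level rpM profiles: rows = samples, columns = genera.
respiratory = [[800, 100, 5], [950, 30, 20], [700, 250, 50]]
other = [[300, 300, 300], [250, 400, 350], [200, 500, 300]]

sdi_resp = [shannon_index(s) for s in respiratory]
sdi_other = [shannon_index(s) for s in other]

# Two-sided Wilcoxon rank-sum comparison of SDI between the two groups.
stat, p_value = mannwhitneyu(sdi_resp, sdi_other, alternative="two-sided")
print(f"median SDI resp={np.median(sdi_resp):.2f}, "
      f"other={np.median(sdi_other):.2f}, p={p_value:.3f}")
```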
Pregnancy diet and associated outcomes in the Avon Longitudinal Study of Parents and Children

All publications covering diet during pregnancy that stemmed from the Avon Longitudinal Study of Parents and Children were reviewed. Diet was assessed using a food frequency questionnaire. Socioeconomic background, maternal mental health, and the health and development of the offspring were assessed using a variety of methods, such as direct measurement, self-completion questionnaires, and assays of biological samples. Differences in diet, including specific food and nutrient intakes and dietary patterns, were associated with maternal educational attainment, smoking habits, and financial difficulty. There were marginal intakes, compared with recommendations, of the key nutrients iron, magnesium, potassium, and folate. Maternal diet during pregnancy was predictive of offspring diet during childhood. There were independent associations between prenatal fish consumption and lower frequency of maternal depressive and anxiety symptoms, as well as lower frequency of intrauterine growth retardation. Consistent evidence that fish consumption during pregnancy benefited the neurocognitive development of the child was also found. Two constituents of fish, n-3 polyunsaturated fatty acids and iodine, were associated with these benefits in children. The findings from the Avon Longitudinal Study of Parents and Children strengthen the recommendation to eat fish regularly during pregnancy.

INTRODUCTION

Maternal nutrition around conception and during pregnancy is important for the health, growth, and development of the fetus and the newborn infant. There is considerable interest in nutrition during pregnancy because of the fetal origins theory of adult disease. This theory hypothesizes that term infants who are small for gestational age have an increased risk of cardiovascular disease and type II diabetes in adulthood and that this is due to undernutrition of the fetus. The original work was based on the follow-up of historic cohorts for which records of birth weight and early growth were available, but there was no information about the nutritional status of the mothers during pregnancy in these studies. 1 There has been extensive research following from the original hypothesis in the area of "early nutrition programming." 2 This "programming" suggests that an under- or oversupply of a particular nutrient or nutrients at a critical or sensitive period of development may have long-term effects on the structure or function of specific organs or systems in the offspring. The diet may also contain toxic substances that may affect development (e.g., mercury). The sensitive period could be at various stages in utero or in early or later postnatal development and is likely to vary according to the structure being developed. The fetus may be most susceptible to poor nutrition during the first trimester, as this is the period of rapid cell differentiation and development of embryonic systems and organs. 3 In fact, the period of greatest risk for most birth defects is in the first few weeks after conception, when a mother may not be aware that she is pregnant. For example, it has been found that excessive intakes of vitamin A or low intakes of folate can be teratogenic at this critical period during the early weeks of pregnancy, when the first embryonic processes occur, resulting in an assortment of birth defects.
This has led to dietary recommendations for pregnancy that limit concentrated sources of vitamin A and stress the importance of periconceptional folic acid supplementation, which has been shown to be protective against neural tube defects. 4 There is evidence that optimal fetal neurodevelopment is dependent on specific nutrients supplied by the mother mainly from dietary sources; these may include long-chain polyunsaturated fatty acids (LC-PUFAs) such as docosahexaenoic acid (DHA), an n-3 fatty acid, and arachidonic acid, an n-6 fatty acid. These fatty acids are building blocks for fetal retina and brain cells. The fatty acid composition of the cell membrane influences the stability, fluidity, and function of many cell types through effects on gene expression and tissue differentiation. 5 The major source of n-3 PUFAs in human diets is seafood. It is also possible that fish intake during pregnancy influences neurodevelopment via a mechanism not related to its fatty acid content; other active substances include its vitamin D and iodine content or its possible contamination with mercury. The evidence is in favor of an important in utero programming effect on brain development. 2 The in utero supply of nutrients or toxins may particularly affect epigenetic factors such as DNA methylation, which are associated with gene expression and thus with the intimate development of the organism. 6 This may be the mechanism responsible for prenatal programming.

In the developed world, malnutrition is not usually a problem, but there is the question as to the best diet to recommend to expectant mothers during pregnancy in order to maximize the health and development of themselves and their offspring. These recommendations should be evidence-based, and it is becoming widely recognized that birth cohort studies that start in or before pregnancy and then follow the offspring and the wider family can make an important contribution to the evidence. 7 The Avon Longitudinal Study of Parents and Children (ALSPAC) is one of the few such studies in the world that has followed a population cohort from the pregnancy of the mother through to the adulthood of the offspring. It is unique in collecting dietary information from mothers, their partners, and their offspring. This paper reviews the publications that have used ALSPAC data to report on diet during pregnancy relative to the growth and development of the offspring, as well as to some maternal outcomes.

Literature

All articles using ALSPAC data that were published up until the end of 2013 and reported on relationships with mothers' diets during pregnancy were reviewed. This narrative review includes 38 articles (listed in Table 1): 11 with maternal outcomes, 6 with fetal or birth outcomes, and 26 with childhood outcomes.

Subjects

The ALSPAC was set up to investigate the ways in which genes and the environment interact to affect health, behavior, and development as children grow from the time of gestation through infancy and childhood to adulthood. 46,47 It was designed as an observational birth cohort study, recruiting pregnant women residing in 3 Bristol-based health districts of the county of Avon, in the southwest of England, with an expected delivery date between April 1991 and December 1992 (n = 14 541 pregnancies, of which 14 015 survived to the third trimester). Most women were recruited before 12 weeks of pregnancy but some presented later in pregnancy.
The recruitment area covered the city of Bristol and surrounding urban and rural areas, including towns and villages with some industrial areas and farming communities (total population in 1991, approximately 0.9 million). Ethical approval for the study was obtained from the ALSPAC Law and Ethics Committee and the local research ethics committees. From the outset, one of the focuses for data collection during pregnancy concerned the different features of the mother's diet. With advice from David Horrobin, 48 it was decided that it would be important to measure the different types of fat consumed by the mother and to ensure that the women consuming different amounts of n-3 LC-PUFA could be identified in the study. Because fish, particularly fatty fish, are a major source of these fatty acids and consumption is likely to be periodic (i.e., less than daily), it was decided that a food frequency questionnaire (FFQ) would be the most appropriate methodology to use. Furthermore, it was recognized that it would be important also to measure the diet in as much detail as possible so that intakes of various nutrients and other foods could be assessed. The indicators of socioeconomic background (SEB) of the family at recruitment (such as highest maternal educational attainment, age, and housing tenure), together with ethnicity, smoking status, and prepregnancy body mass index (BMI) of the mothers, are shown in Table 2. The cohort was population based and broadly representative of the population of women with children aged <1 year in Avon in 1991, as covered in the national census. 46 There was a relatively low proportion of ethnic minority women recruited (2.2% in ALSPAC, whereas in the United Kingdom as a whole the prevalence of nonwhite women with children aged <1 y was 7.6% and in Avon in particular it was 4.1%). The children in ALSPAC (n = 14 062 at birth; n = 13 988 alive at 1 y) have been followed using parental and teacher self-completion questionnaires, medical records, health service data, and educational records and by hands-on assessment at dedicated research clinics. Table 2 shows the SEB of the mothers who completed the dietary assessment during pregnancy and those of children who completed the intelligence quotient (IQ) assessment at age 8 years in comparison with the originally recruited mothers. The retained mothers have higher educational attainment, are older, and have more favorable health indicators than the mothers who did not continue in the study or those whose children did not take part in the IQ assessment.

Table 1 Characteristics of the ALSPAC articles included in the present review, in the order of presentation in the results
Reference | Dietary input | Focus of paper | Age/timing of outcome
8 | Whole diet | Nutrients, food groups | Pregnancy
Northstone et al. (2008) 9 | Dietary patterns | SEB | Pregnancy
Golding et al. (2013) 10 | Whole diet | Blood mercury levels | Pregnancy
Taylor et al. (2013) 11 | Whole diet | Blood lead levels | Pregnancy
Golding et al. (2009) 12 | Fish/seafood intake | Depressive symptoms | Mother during pregnancy and postnatally
Vaz et al. (2013) 13 | Fish/seafood intake; dietary patterns | Anxiety symptoms | Mother during pregnancy and postnatally
Micali et al. (2012) 14 | Pregnancy diet | Eating disorders | Pregnancy
... | ... | ... | ...
45 | Whole diet | Childhood diet | Child 10 y
Abbreviations: BMI, body mass index; CVD, cardiovascular disease; FADS, fatty acid desaturase; IQ, intelligence quotient; IUGR, intrauterine growth retardation; MTHFR, methyltetrahydrofolate reductase; SEB, socioeconomic background.
A proportion of children born in the last 6 months of the recruitment phase (equivalent to 10% of the whole cohort) were selected to take part in a substudy known as Children in Focus. These parents were invited to bring their child to research clinics 10 times between the ages of 4 months and 5 years (n = 1432 ever attended).

Dietary assessment

Food frequency questionnaire during pregnancy. Maternal diet was assessed using an unquantified FFQ completed by the women at 32 weeks gestation. 8 The questionnaire covered the types of foods/drinks typically consumed in the United Kingdom in the early 1990s and was based on a questionnaire used previously in a neighboring area and on weighed intake data collected from women (nonpregnant) in the local area. 8 The FFQ was not validated prior to use due to lack of funding; however, the questions about fish consumption were later shown to be associated with both n-3 LC-PUFA 26 and mercury concentrations in maternal blood. 27 The FFQ contained questions about the weekly frequency of consumption of 43 different foods and food groups. The women were asked to tick one of the following options for each food as consumed "nowadays": never or rarely, once in 2 weeks, 1-3 times per week, 4-7 times per week, more than once a day. More detailed questions were asked about the consumption of a further 8 foods usually consumed daily (bread, spreading and cooking fat, milk, coffee, tea, soft drinks, and sugar). There were also questions about the types of some foods (bread, cooking and spreading fats, milk, and soft drinks) and about the ways in which foods were prepared and eaten (whether some or all of the fat was cut off meat, how often food was fried, and how many of the slices of bread eaten in a day were spread with fat). No questions were asked about portion sizes; therefore, standard portions were used for the nutrient estimations (see below). Women were asked to indicate whether they considered themselves to be "vegetarian" or "vegan" or not. Data were collected on alcohol and caffeine intake in the women, but the papers concentrating on these topics are not considered in this review.

Assessment of nutrients. The FFQ was used to calculate an approximate daily nutrient intake for each mother. 8 Each food group question was assigned a composition, based on consideration of how commonly various foods included in that food group were likely to be consumed and using an amount equivalent to one portion of that food group suitable for an adult woman. For example, the foods included in calculation of the nutrient content of one portion of leafy green vegetables were 0.4 portions of cabbage, 0.4 portions of Brussels sprouts, and 0.2 portions of spring greens. When a question on food consumption had not been answered, it was assumed that the food was rarely or never eaten.
The approximate weekly intake was calculated by multiplying the weekly frequency of consumption of a food/food group by the nutrient content of a portion of that food (obtained from the 5th edition of McCance and Widdowson's The Composition of Foods and its supplements [49-58]) and summing this over all the foods consumed. The weekly frequencies of consumption assumed for each of the options in the questionnaire were: "never or rarely" = 0, "once in 2 weeks" = 0.5, "1-3 times per week" = 2, "4-7 times per week" = 5.5, and "more than once a day" = 10. The question on bread consumption asked how many pieces of bread, rolls, or chapattis were eaten on a normal day. Milk consumption was calculated by summing the likely amount of milk consumed in tea and coffee, with breakfast cereals, on its own, and as flavored milk drinks. The nutrient values obtained were then divided by 7 to convert them to a daily intake. Cut-offs for energy intake were applied based on inspection of a histogram of the intakes, thus eliminating extremely low or high intakes (n = 413 subjects [3.3%] were excluded). Approximate daily intakes were calculated for energy, protein, carbohydrate, fat, saturated fat, monounsaturated fat, polyunsaturated fat, total sugar, free (added) sugars, starch, nonstarch polysaccharide (a measure of fiber), n-3 fatty acids (from fish sources only), 12 vitamins, and 9 minerals. The adequacy of the nutrient intakes was assessed against dietary reference values for pregnant women in the United Kingdom. 59 The reference nutrient intake (RNI) is the amount at which 97.5% of the population are assumed to have an adequate intake.

FFQ used at other times. The pregnancy FFQ was adapted for use in the children and parents at later times throughout the study (partners, usually fathers, were contacted via the mother, not directly). Modifications included the addition of questions about foods often consumed by children (e.g., fish fingers and sweets) and foods becoming more commonly consumed (e.g., vegetarian pies). The frequency categories and method of calculating the nutrients remained the same, but the portion sizes used differed as appropriate to the group being assessed.

Assessment of fruit and vegetable intake and diet variety. Fruit and vegetable intake was assessed for the mother and child using FFQs and summing all of the frequencies of fruits and vegetables. 43 The comparison groups were >3 servings/day vs ≤3 servings/day for fruit and vegetables combined. Consumption of a varied diet was assessed in mother and child by creating a healthy plate variety score 44 using FFQ data based on the food groups and the number of servings as recommended in the food plate model (former pyramid model) promoted by the US Department of Agriculture. 60 The purpose of a variety score is to measure variety both within and between food groups.

Omega-3 long-chain polyunsaturated fatty acid assessment (from fish and seafood only). In the FFQ there were 3 questions about fish consumption: "How many times nowadays do you eat: 1) white fish (cod, haddock, plaice, fish fingers, etc.); 2) dark or oily fish (tuna, sardines, pilchards, mackerel, herring, kippers, trout, salmon, etc.); 3) shellfish (prawns, crabs, cockles, mussels, etc.)." Portion sizes were based on typical consumption patterns in the United Kingdom, and fatty acid compositions were based on profiles of typical species found in UK waters. 61
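A minimal sketch of this frequency-to-nutrient calculation is given below. The food list, per-portion nutrient values, and dictionary layout are invented for illustration; only the frequency weights, the treatment of unanswered questions, and the divide-by-7 step come from the description above.

```python
# Weekly frequency weights assigned to each FFQ response option.
FREQ_WEIGHTS = {
    "never or rarely": 0.0,
    "once in 2 weeks": 0.5,
    "1-3 times per week": 2.0,
    "4-7 times per week": 5.5,
    "more than once a day": 10.0,
}

# Hypothetical per-portion nutrient content (iron, mg) for two food groups.
PORTION_IRON_MG = {"leafy green vegetables": 1.5, "red meat": 2.3}

def daily_iron_intake(responses):
    """Approximate daily iron intake (mg) from FFQ responses.

    responses maps food group -> chosen frequency option; unanswered
    questions are treated as "never or rarely", as in the study.
    """
    weekly = 0.0
    for food, portion_iron in PORTION_IRON_MG.items():
        option = responses.get(food, "never or rarely")
        weekly += FREQ_WEIGHTS[option] * portion_iron
    return weekly / 7.0  # convert the weekly total to a daily intake

print(daily_iron_intake({"leafy green vegetables": "4-7 times per week",
                         "red meat": "1-3 times per week"}))
# (5.5 * 1.5 + 2 * 2.3) / 7 ~ 1.84 mg/day
```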
Oily fish consumption was validated by comparison with the erythrocyte fatty acid composition of blood samples obtained during pregnancy. 26 Intakes of the n-3 LC-PUFAs eicosapentaenoic acid and DHA from fish sources only were calculated.

Dietary pattern assessment. Principal component analysis was used to identify dietary patterns in the pregnancy diet. 9 Five components described the underlying dietary patterns of the women. Dietary patterns have the advantage of using the frequencies of foods eaten directly and are not reliant on estimating the portion size and composition of foods and food groups, as is the case for nutrient estimations. 9

Food records at age 10 years. When the ALSPAC children were aged 10 years (in 2002-2003), their diet was assessed using three 1-day food records completed by the child with parental help. 62 The databank used for nutrient analysis was the same as that used for the FFQ during pregnancy. 49-58 The average daily nutrient intakes and amounts of various food groups were calculated. 62

Misreporting of dietary intake. Underreporting of energy intake in mothers was defined as a reported daily energy intake <120% of the mother's estimated basal metabolic rate. 63 Maternal basal metabolic rate was estimated by using Schofield's equations for adults, described in the UK Department of Health Report on Health and Social Subjects, 59 and based on age and body mass index (BMI; kg/m²). Self-reported prepregnancy weight and height were collected via questionnaire during the pregnancy and used to estimate the woman's prepregnancy BMI and hence her basal metabolic rate. Child misreporting of energy intake was determined using a method that allows for moderate physical activity and uses standardized equations that account for the age, sex, and weight of the child. 62,64

Socioeconomic and anthropometric factors

Maternal age at delivery was calculated by subtracting the mother's date of birth from the child's date of birth. Offspring birth weight was obtained from the medical records, and supine length was measured with a Harpenden neonatometer (Holtain, Dyfed, UK) soon after birth by a trained and validated member of the ALSPAC study team. Length of gestation was assessed from the mother's last menstrual period, unless the ultrasound estimate differed by 2 weeks or more, in which case the ultrasound estimate was used. The following data were collected using questionnaires during pregnancy: parity, measured as the number of previous pregnancies resulting in a live birth or stillbirth; ethnic background; maternal smoking status at various time points during pregnancy; housing tenure; and the mother's perception of financial difficulties, including difficulty affording food. The mother's highest educational attainment was used to derive a 5-point scale with the following categories: no academic qualifications; vocational training (hairdressing, catering, etc.); O-level academic examination, usually taken at age 16 years, or equivalent; A-level academic examination, usually taken at age 18 years, or equivalent; university degree (it is important to note that, although 16 and 18 years are the usual ages for these qualifications, no restrictions based on the age at which the qualification was obtained were applied). In some analyses and in Table 2 these have been contracted to the following 3 categories: low, less than O-level; medium, O-level qualification or equivalent; high, A-level or higher.
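Returning to the energy misreporting criterion described under "Misreporting of dietary intake" above, the following is a rough sketch. The Schofield coefficients shown are the commonly tabulated values for women aged 18-29 and 30-59 years (BMR in MJ/day from weight in kg); treat them, and the example values, as assumptions rather than the exact figures used in the study.

```python
def schofield_bmr_mj(weight_kg: float, age_y: float) -> float:
    """Estimated BMR (MJ/day) for women, using commonly tabulated
    Schofield coefficients (assumed here, not quoted from the study)."""
    if 18 <= age_y < 30:
        return 0.062 * weight_kg + 2.036
    if 30 <= age_y < 60:
        return 0.034 * weight_kg + 3.538
    raise ValueError("age outside the range handled by this sketch")

def is_underreporter(reported_energy_mj: float, weight_kg: float,
                     age_y: float) -> bool:
    """Flag reported energy intakes below 120% of estimated BMR."""
    return reported_energy_mj < 1.2 * schofield_bmr_mj(weight_kg, age_y)

# Example: a 25-year-old woman weighing 62 kg reporting 6.0 MJ/day.
# BMR ~ 0.062*62 + 2.036 = 5.88 MJ/day; cutoff ~ 7.06 MJ/day -> flagged.
print(is_underreporter(6.0, 62.0, 25.0))  # True
```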
The details of all questionnaires used are available on the ALSPAC website. 65

Maternal depressive and anxiety symptoms

The Edinburgh Postnatal Depression Scale was designed to exclude symptoms ascribable to somatic effects of pregnancy and childbirth (e.g., weight gain, sleeplessness, and tiredness). It was self-completed by the mother in the questionnaires sent at 18 and 32 weeks gestation and post delivery at 8 weeks and 8 months to assess maternal symptoms of depression. 66 At similar time points, symptoms of maternal anxiety were assessed using the 8 anxiety items from the Crown-Crisp Experiential Index, a validated self-rating inventory related to free-floating anxiety. 13 There is no established cut-off for this measure, but women who scored in the top 15% were identified as having a high frequency of anxiety symptoms. 13

Analysis of biological samples for fatty acids, vitamins, lead, mercury, and other trace minerals

Biological samples were collected from mothers using 4 methods: 1) routinely at the pregnancy enrollment clinic and, thereafter, whenever the woman had blood taken, a sample was requested for ALSPAC 47; 2) at their first visit to the antenatal service, women had blood taken in acid-washed vacutainers specifically for trace metal analysis 10; 3) urine samples were obtained at various stages of pregnancy 34; 4) cord blood was collected at delivery, 30 and a piece of the umbilical cord was cut off and frozen. 37 The percentage of each fatty acid as a proportion of the total fatty acid content of the red blood cell membrane phospholipids was measured by gas chromatography in the laboratory of Scotia Pharmaceuticals. 28,29 Serum 25-hydroxyvitamin D concentrations for mothers were measured with high-performance liquid chromatography tandem mass spectrometry with an internal standard and adjusted for season of blood collection. 24 Whole blood samples were sent to the Centers for Disease Control and Prevention for analysis of maternal whole-blood mercury, lead, selenium, and cadmium. 10 Urinary iodine concentration (and creatinine, to correct for urine volume) was assessed in a subset of stored urine samples from the first trimester of pregnancy. 34 A subset of the umbilical cord aliquots was analyzed for a number of trace minerals. 37

Child's stereoacuity

Stereoacuity was assessed at age 3.5 years in the Children in Focus substudy (n = 435 with complete data). 26 Stereoacuity matures through 3 stages, and the test was designed to show which stage a child had reached. The stereotests were carried out by one orthoptist who was blind to the other data available for each child.

Child's anthropometric measurements, bone mass, and blood pressure

Height and weight were measured at the research clinics between the ages of 7 years and 15 years. Fat mass was assessed using bioelectrical impedance with a Tanita leg-to-leg body fat analyzer (Model TBF 305; Tanita, Tokyo, Japan). A Lunar Prodigy DXA (dual-energy X-ray absorptiometry) scanner (GE Medical Systems, Madison, WI, USA) was used to measure body composition and provided estimates of total fat mass, lean body mass, and bone mass. Systolic and diastolic blood pressures were measured using a Dinamap 9301 Vital Signs Monitor (Morton Medical, London, UK). Two right-arm measurements were recorded using a cuff size appropriate for the child's upper arm circumference, and the average of the 2 measurements was taken.
Child's cognitive development (including intelligence quotient) and behavior

ALSPAC developed a questionnaire-based scale to assess the child's abilities throughout infancy and early childhood. The scale included items from the Denver Developmental Screening Test 67 and was used to calculate a continuous score with 4 domains: gross motor skills, fine motor skills, social skills, and communication skills. IQ was measured in a research clinic at age 8 years using a validated, age-adjusted, shortened version of the Wechsler Intelligence Scale for Children. 68 Scores were calculated for performance, verbal, and total IQ. The Strengths and Difficulties Questionnaire 69 was completed by caregivers for their children at various ages. The scale consists of 25 questions with 5 subscales (prosocial, hyperactivity, emotional symptoms, conduct problems, and peer problems) and a total difficulties score. When the children were aged 7.9 years, information about behavior and development was also collected from parents and teachers using a questionnaire version of the Development and Well-Being Assessment. This is a validated measure consisting of structured and semistructured questions. 70 An experienced clinician combined all information about symptoms and their impact using a computerized heuristic to make standardized diagnoses of childhood psychiatric disorders. 71

Child's eczema, asthma, and atopy

At age 2.5 years the children were classified as having eczema if their mother responded positively to the question, "Has your child had an itchy dry skin rash in joints and creases of his/her body (e.g., behind the knees, under the arms) since he/she was 18 months old?". 37,38 Information on wheezing in the child was obtained by asking the mother the following question when the child was aged 6 months and 3.5 years: "In the last 12 months has your child had any periods when there was wheezing with whistling on his/her chest when he/she breathed?". The information from these 2 periods identified children with 4 patterns of wheezing: nonwheezers, transient infant wheezers, late-onset wheezers, and persistent wheezers. 37,38 When the children were aged 7 years, the mothers were asked, "Has your child had any of the following in the past 12 months: wheezing; asthma; eczema; hay fever?". Children were defined as having current doctor-diagnosed asthma if mothers responded positively to the question, "Has a doctor ever actually said that your study child has asthma?" and positively to one or both of the questions on wheezing and asthma in the past 12 months. 38,39 Atopy at age 7 years was defined as a positive reaction to a skin prick test for any of 3 known allergens (cat, grass, and house dust mite). This definition has been shown to detect >95% of children with allergies. 39

Maternal nutrient intakes and dietary patterns

Nutrient intakes calculated for 11 923 pregnant women from the FFQ at 32 weeks gestation 8 were mostly adequate when compared with recommendations 59 and were similar to intakes assessed in a representative national sample of nonpregnant women of comparable age who had completed weighed food records over 7 days. 72 The nutrients most likely to be judged inadequate in the diet of ALSPAC pregnant women were iron (median intake, 10.2 mg; RNI, 14.8 mg), magnesium (median intake, 247 mg; RNI, 270 mg), potassium (median intake, 2553 mg; RNI, 3500 mg), and folate (median intake, 245 µg; RNI, 300 µg).
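As a quick worked check of these margins, the snippet below expresses each median intake as a percentage of its RNI, using only the figures quoted above.

```python
# Median intakes vs UK reference nutrient intakes (RNIs) quoted above.
nutrients = {
    "iron (mg)": (10.2, 14.8),
    "magnesium (mg)": (247, 270),
    "potassium (mg)": (2553, 3500),
    "folate (ug)": (245, 300),
}
for name, (median, rni) in nutrients.items():
    print(f"{name}: {100 * median / rni:.0f}% of RNI")
# iron ~69%, magnesium ~91%, potassium ~73%, folate ~82%
```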
Iron supplements were taken by 22% of the women at 18 weeks of pregnancy and 43% at 32 weeks 8; fewer women took folate supplements (9% and 18%, respectively), but this was prior to a national publicity campaign to encourage women to take folate supplements in early pregnancy. The FFQ data were used to establish dietary patterns underlying the foods consumed by the pregnant women, and these were then related to socioeconomic variables. 9 There were 5 patterns that together explained 31.3% of the variation of foods in the diet. This is a similar amount of variation to that in a large Swedish validation study assessing dietary patterns obtained from FFQs completed by women vs those obtained from diet records. 73 The "health-conscious" pattern in ALSPAC loaded positively on brown/wholemeal bread, rice, pasta, fresh fruit, salad, fruit juice, fish, cheese, pulses, and whole grain breakfast cereal and negatively on white bread. 9 Mothers with higher educational attainment, those who were older, those who did not smoke, and those who were not overweight before pregnancy were more likely to score highly on this pattern. The "traditional" pattern had high factor loadings on green vegetables, peas, carrots, root vegetables, and potatoes (not French fries). It was not associated with maternal educational attainment but was more likely if there were several children in the household or if the mother was overweight. 9 The "processed" pattern had high factor loadings for white bread, meat pies, sausages, pizza, eggs, chips and roast potatoes, baked beans, and fried foods. The strongest associations with this pattern were with the mother being aged <20 years, smoking during pregnancy, and having lower educational attainment. Having more children in the household, living in council accommodation (public housing), and reporting financial difficulties were all independently associated with higher scores on this pattern. 9 The "confectionery" pattern loaded on chocolates, sweets, crisps, biscuits, puddings, and cakes and was associated positively with young maternal age and negatively with mothers being overweight or dieting during pregnancy. 9 The "vegetarian" pattern had positive loadings on pulses, nuts, meat substitutes, and herbal tea and negative loadings on red meat and poultry. The women with higher scores on this pattern were more likely to be older and to have financial difficulties and less likely to have medium educational attainment or to have other children. 9 These data suggest that the diet consumed during pregnancy was adequate for most nutrients, with the exception of some key micronutrients, namely iron, magnesium, potassium, and folate. There was strong evidence of social bias in the dietary patterns that describe the diets of ALSPAC women during pregnancy.

Toxins in the maternal diet

Foods can supply toxins as well as nutrients, and the developing fetus may be particularly susceptible. For example, high blood concentrations of mercury and lead during pregnancy have been shown to be associated with adverse offspring outcomes. Maternal blood mercury 10 and lead concentrations 11 in the ALSPAC were investigated in relation to environmental and dietary exposures. Linear regression was used to determine the contribution to total blood mercury of 103 food/drink types based on R² values; maternal diet accounted for 19.8% of the total variation, with 8.75% coming from fish/seafood. 10 Other components of the diet that contributed positively included wine and herbal teas.
Some foods had a negative association with total blood concentrations of mercury; these included white bread, meat pies, and French fries. The study concluded that limiting intake of seafood during pregnancy may have only a small effect on total mercury concentrations but may be detrimental to other outcomes. Lead concentrations in maternal blood (median, 3.41; range, 0.41-19.14 µg/dL) were slightly higher in the ALSPAC than reported in other developed countries, and the strongest predictor of amounts ≥5 µg/dL was high maternal educational attainment. 11 Other factors independently associated with increased concentrations of lead were cigarette smoking, alcohol and coffee drinking, and heating the home with a coal fire. There was some evidence that higher dietary iron and calcium intakes were associated with lower concentrations. These data suggest that following the recommendations for a healthy diet and lifestyle during pregnancy may have the added benefit of keeping blood lead concentrations low.

Dietary associations with maternal psychiatric symptoms

Some aspects of maternal psychiatric symptoms were investigated in relation to the mothers' diet during pregnancy. The presence of depressive symptoms was assessed using the Edinburgh Postnatal Depression Scale several times during and after pregnancy. 12 Compared with women consuming seafood frequently (>3 portions per week, providing >1.5 g/week n-3 LC-PUFA), those consuming none were more likely to have frequent depressive symptoms at 32 weeks of pregnancy, the same point at which the diet was measured (adjusted odds ratio [OR], 1.54; 95% confidence interval [CI], 1.25-1.89). These associations were weaker for depressive symptoms at other time points, possibly because they were more remote from the dietary measure. Symptoms of anxiety at 32 weeks gestation were investigated in relation to dietary patterns and intakes of n-3 LC-PUFA from fish. 13 Women who had high scores on the vegetarian dietary pattern were more likely to have frequent anxiety symptoms (OR, 1.25; 95% CI, 1.08-1.44), whereas those with high scores on the health-conscious and traditional patterns were less likely to have symptoms of anxiety. There was also an independent negative association between fish consumption and anxiety symptoms; women who had no n-3 LC-PUFAs from fish had more anxiety symptoms than those who consumed >1.5 g/week of n-3 LC-PUFAs (OR, 1.53; 95% CI, 1.25-1.87). 13 Women were assessed by questionnaire for a history of eating disorders (anorexia nervosa or bulimia nervosa) during and before pregnancy; 414 women reported such a history, and their diets during pregnancy were compared with those of the 9723 women in the cohort without a history of eating disorders or other major psychiatric problems. 14 Women with a history of eating disorders were 2.8 times (95% CI, 2.1-3.8) more likely to state that they were currently vegetarian than those without. This was reflected in their lower consumption of meat and higher consumption of soya products and pulses, although there was no evidence of differences in nutrient intakes. The women with a history of eating disorders reported a much higher intake of coffee than women without. The highest of these intakes were above the upper limit of caffeine recommended for pregnant women in the United Kingdom. 73 Taken together, these studies suggest that maternal psychiatric symptoms are related to small differences in maternal diet, which could be either a cause or an effect of the psychiatric problems.
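Adjusted odds ratios of the kind quoted throughout this section come from logistic regression models with confounders as covariates. The sketch below shows the general pattern on fabricated data using statsmodels; the variable names, confounders, and effect sizes are illustrative assumptions, not the study's actual model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
# Fabricated cohort: exposure = eats no fish, outcome = frequent symptoms.
df = pd.DataFrame({
    "no_fish": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
    "age": rng.normal(28, 5, n),
})
logit = -2.0 + 0.4 * df["no_fish"] + 0.3 * df["smoker"]
df["symptoms"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Logistic regression adjusted for smoking and age.
model = smf.logit("symptoms ~ no_fish + smoker + age", data=df).fit(disp=0)

# Exponentiate the coefficient and its CI to get an adjusted OR with 95% CI.
or_est = np.exp(model.params["no_fish"])
ci_low, ci_high = np.exp(model.conf_int().loc["no_fish"])
print(f"adjusted OR {or_est:.2f} (95% CI, {ci_low:.2f}-{ci_high:.2f})")
```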
Birth outcomes

Some aspects of diet during pregnancy may be associated with adverse birth outcomes. Dietary differences relating to the smoking habits and self-assessed financial difficulties of the women, and their associations with birth weight, were investigated. 15 Difficulty affording food was commonly reported by the least educated women and by smokers; 14.9% and 15.6% had difficulty, respectively, compared with only 2.7% of the most educated and 6.5% of nonsmokers. As shown earlier, both smoking status and degree of difficulty affording food were related to dietary patterns. In regard to the quality of the diet in terms of foods and nutrients, smokers had higher intakes of energy, saturated fat, and free sugars but lower intakes of protein and most micronutrients, particularly vitamin C, than nonsmokers. This was a reflection of the types of foods eaten: smokers consumed sausages, pies, chips, and crisps more often and red meat, poultry, fish, green vegetables, salad, and fruit less often. Smokers were also less likely to take supplements of iron or folate. As is common, the infants of smoking mothers had a lower average birth weight than those of nonsmoking mothers. Women reporting the most difficulty affording food had lower intakes of energy and of most nutrients, particularly vitamin C, zinc, and iron, compared with those with no difficulty. Women with difficulty were more likely to eat meat products than to eat carcass meat, poultry, or fish; they ate French fries more often and green vegetables, salad, fruit, and fruit juice less often than those with no difficulty affording food. There were no differences in supplement use according to degree of financial difficulty. 15 There was no independent association between birth weight and degree of financial difficulty after accounting for smoking status and adjusting for sex, gestational age, maternal height, parity, and ethnicity. The fact that many of the food habits were similar in smokers and women with financial difficulty, but the infants of those with financial difficulty did not show the birth weight deficit of the smokers' infants, suggests that maternal dietary differences are not the main determinant of birth weight differences.

The presence of diabetes in the mother (prepregnancy [n = 40], gestational [n = 53]) was associated with greater mean birth weight and with greater odds of macrosomia (birth weight >4000 g) in the infant 16; the adjusted ORs for macrosomia were 3.56 (95% CI, 1.53-8.28) for existing diabetes and 5.50 (95% CI, 1.18-10.30) for gestational diabetes. There was a smaller increased risk of macrosomia if the nondiabetic mother had at least 2 episodes of glycosuria detected during the pregnancy (n = 372; adjusted OR, 1.58; 95% CI, 1.18-2.12) compared with mothers with none of these problems (n = 10 123). These data are in agreement with previous studies and suggest that even modest increases in concentrations of glucose in maternal blood during pregnancy are associated with a fetal growth rate that is above the normal range.

An investigation of birth outcomes in relation to fish intake during pregnancy was carried out in the wake of several non-ALSPAC studies looking at associations of maternal fish consumption with birth weight, length of gestation, and size at birth that had shown conflicting results. 17 It was possible that the fatty acid content of fish was driving any associations found; therefore, the intakes of n-3 LC-PUFAs from fish and the weight of fish eaten were estimated.
Preliminary analysis showed that n-3 LC-PUFA intakes were strongly positively associated with maternal educational attainment and negatively with smoking status (both P < 0.001). Unadjusted positive associations between fish and n-3 LC-PUFA intakes and both birth weight and length of gestation were not robust to adjustment for child's sex, maternal smoking, age, parity, and education. However, there was a persistent relationship, after adjustment, between low fish intake and intrauterine growth retardation; there was an OR of 1.37 (95% CI, 1.02-1.84) for being below the 10th percentile of birth weight for sex and gestational age if the mother ate no fish during pregnancy compared with mothers who ate the highest quantity of fish. 17 The relationship was not changed by removing smokers from the analysis and was stronger for fish than for n-3 LC-PUFA intake, so it may be related to some other constituent of fish. These results lend some support to the hypothesis that increasing fish intake during pregnancy may increase the growth rate of the fetus.

The association of maternal diet with the presence of a congenital defect of the penis in boys, namely hypospadias, was explored. 18 It had been suggested that high intakes of phytoestrogens may be implicated, and vegetarian diets are likely to contain higher amounts than omnivorous diets. Mothers (of boys) who were vegetarian during pregnancy had a higher risk (adjusted OR, 4.99; 95% CI, 2.10-11.88) of giving birth to a hypospadias-affected boy compared with omnivores who did not take iron supplements in the first half of pregnancy, whereas omnivores who took iron supplements had a marginally higher risk (adjusted OR, 2.07; 95% CI, 1.00-4.32). These results support the hypothesis that phytoestrogens may disrupt the development of the male reproductive system.

Physical growth, blood pressure, and bone development of the offspring

The investigations made into the relationship between maternal diet during pregnancy and the physical growth, blood pressure, and bone development of the offspring are presented in Table 3. A weak association was found between maternal dietary iron and magnesium intake and height, particularly sitting height; it was greatly attenuated on full adjustment. 19 For offspring blood pressure measured at ages 7 years and 15 years, there were no convincing associations with maternal intake of any nutrients during pregnancy (Table 3). 20,21 Research in animals had suggested that maternal iron deficiency during pregnancy may be related to offspring blood pressure; therefore, relationships with maternal anemia, iron supplementation, and dietary intake were investigated. 22 There was no association of dietary iron intake during pregnancy with offspring blood pressure at age 7 years (Table 3). There was some evidence that iron supplementation was associated with slightly lower blood pressure, but after accounting for multivitamin use, no association remained. In women not taking supplements, there was a marginal association between anemia during pregnancy and lower offspring systolic blood pressure (fully adjusted β, −1.48; 95% CI, −3.21 to 0.25 mm Hg; P = 0.09). 22 Bone development was measured at age 9 years. There was evidence that maternal dietary intakes of magnesium, potassium, and folate, but not calcium, were positively associated with measures of bone mass (Table 3). 23
The proportion of explained variability in child bone mass was very small (i.e., for maternal magnesium intake, approximately 1% higher total body bone mass between the upper and the lower tertile). Such very small differences do not suggest that diet during pregnancy plays a major role in bone development. There was also no evidence of an association between maternal vitamin D status during pregnancy and offspring bone mineral content (Table 3). 24 The suggestion that maternal body fat stores are important for fetal bone mineral deposition was investigated in a further study. 25 There was no evidence of an independent association between maternal prepregnancy BMI and offspring bone mass at age 9 years once adjustments were made for birth weight and the current height and weight of the child. 25 These data suggest it is unlikely that pregnancy diet has an important effect on any of the outcomes investigated in this section, although the associations that were found were with nutrients that are most likely to be inadequate in the diet of this population. 8

Neurocognitive and behavioral development of the offspring

Brain cell membranes have a high content of n-3 LC-PUFAs, and the predominant sources of these nutrients in the diets of UK residents are fish and seafood. For infants, breast milk is also a good source. In light of this, fish consumption during pregnancy and breastfeeding were investigated in relation to neurocognitive development. 26 The earliest study of this type in ALSPAC measured stereovision at age 3.5 years in the Children in Focus substudy (435 full-term children with complete data). Better stereovision in children was related independently both to any breastfeeding and to mothers' eating oily fish at least once every 2 weeks during pregnancy (OR, 1.57; 95% CI, 1.00-2.45). 26 In a further analysis, eating oily fish during pregnancy was associated with higher maternal blood concentrations of the n-3 LC-PUFA DHA in a dose-response manner (analysis of variance; F = 25.1, df = 2, P < 0.001). It is possible, therefore, that DHA is the active factor accounting for the association of prenatal fish consumption with visual development. Visual development was not independently associated with the educational attainment of the mother.

During the time of the ALSPAC, there had been concern about the possible detrimental effects of the mercury content of fish on cognitive development, with pregnant women, especially in the United States, being advised to limit their intake of fish due to the potential for high mercury content in some species. 75 One study using ALSPAC data 27 looked at the possible effects of the mercury content of fish on the early development of language and communication skills at ages 15 and 18 months (n = 7421) and found that maternal fish consumption was positively associated with the children's developmental scores (P for trend, 0.03). Mercury concentrations in umbilical cord samples were available for a subset of these infants (n = 1054). The amounts were positively associated with the FFQ assessment of fish consumption during pregnancy, but they were not associated with the developmental outcomes measured. 27 The authors concluded that further studies would be needed to determine what aspects of fish consumption may explain the beneficial association shown. A further investigation of developmental outcomes in relation to fish consumption assessed IQ measured at age 8 years. 28 In total, 12% of women had eaten no fish during pregnancy, 65% ate 1-340 g per week, and 23% ate more than 340 g (at least 3 portions per week).
Only 2% of the pregnant women had taken fish oil supplements, so this was unlikely to contaminate the results. Table 4 shows the results of the fully adjusted logistic regression for the risk of the child being in the lowest quartile of the outcome measure. Maternal fish consumption was positively associated with total and verbal but not performance IQ (Table 4). The OR for a suboptimal score for verbal IQ was further adjusted for the father's fish intake, and the relationship was only slightly attenuated (OR, 1.39; 95% CI, 1.04-1.86), suggesting that the result is unlikely to be explained by social status. The risk of the child having suboptimal scores for prosocial behavior at age 7 years, or for fine motor skills or social development at age 42 months, was greater when no fish was consumed by the mother during pregnancy (see Table 4). 28 These 3 studies add weight to the evidence for the beneficial effect of fish consumption during pregnancy on brain development but do not determine the mechanism of the association. In light of these results, various government departments were lobbied to modify their recommendations to pregnant women regarding limiting seafood consumption, with the result that recommended intakes of fish during pregnancy have been increased in various countries, including the United States and Norway. 76

These publications led to ALSPAC's involvement in a European Union-funded project, Nutrimenthe, 77 which investigated the possible mechanisms and genetic underpinning of the relationships between fish consumption and neurocognitive development, in particular exploring the genes involved in the metabolic pathways that elongate and desaturate n-3 and n-6 PUFAs (fatty acid desaturase [FADS] genes). This work considered both maternal and child FADS genotype in relation to LC-PUFAs in maternal and cord blood. First, the program showed, as demonstrated in previous smaller studies, that maternal FADS genotype modulates the concentrations of LC-PUFAs in maternal blood. 29 The minor alleles in the FADS genes were consistently positively associated with medium-chain n-6 and n-3 PUFAs and negatively associated with n-6 and n-3 LC-PUFAs, including DHA. This suggests that they were less efficient at elongating the fatty acid chain, possibly resulting in a lower supply of DHA to the fetus. The second group of analyses showed that, in relation to cord blood concentrations of PUFAs, both maternal and child genotypes were equally important. 30 As in the maternal blood, the minor alleles of the maternal FADS genes were associated with higher concentrations of medium-chain PUFAs in cord blood. Associations with n-6 LC-PUFAs were seen for the child's but not the mother's FADS genes (negative for the minor alleles) and were much stronger than the associations with n-3 LC-PUFAs. Both maternal and child minor alleles were associated negatively with DHA concentrations in cord blood. It had previously been thought that the fetus was totally reliant on the maternal supply of DHA and that fetal metabolism was not associated with DHA synthesis; these results show this not to be the case. However, maternal FADS polymorphisms were not associated with n-6 or n-3 PUFA concentrations in the child's blood at age 7 years; by this age, there were strong associations of child FADS genes with concentrations of both. 31
An analysis using maternal LC-PUFA concentrations as a direct predictor of IQ at age 8 years 32 showed only a very weak relationship with DHA concentrations in maternal blood (full-scale IQ points, −1.52; 95% CI, −2.91 to −0.14; P = 0.031 for the lowest quartile of the fatty acid compared with the rest), and some n-6 LC-PUFAs (such as osbond acid [22:5n-6]: full-scale IQ points, −1.95; 95% CI, −3.30 to −0.61; P = 0.004; and arachidonic acid [20:4n-6]: full-scale IQ points, −1.54; 95% CI, −2.91 to −0.18; P = 0.026) were found to have similar relationships. 32 From postmortem data, it seems that these n-6 LC-PUFAs may be used in cell membranes in place of DHA if there is a shortage of DHA. The percentage of the variation in IQ explained by these fatty acids was very small (e.g., 0.29% for osbond acid) compared with the total for all other confounders (17%). Thus, these results suggest a weak influence of maternal LC-PUFAs, both n-6 and n-3, on cognitive development in the offspring. This series of studies has added to the understanding of the mechanisms underlying fatty acid metabolism.

Externalizing behavior was assessed using the Development and Well-Being Assessment at age 7 years (n = 8242); children with a history suggestive of attention deficit hyperactivity disorder (ADHD) and/or conduct disorder were identified. 33 Although numbers were very small in each diagnostic category, these outcomes were examined in relation to intake of n-3 LC-PUFAs from fish during pregnancy and when the child was aged 3 years (low/high intake), as well as to breastfeeding (none/any), using multivariate stepwise models. 33 There were no associations of any of these dietary measures with a likely diagnosis of ADHD either before or after adjustment. There was evidence of an unadjusted association between conduct disorder and n-3 LC-PUFA intake during pregnancy and with breastfeeding, but this was completely removed once SEB factors were taken into account.

The possibility that other nutrients supplied by fish in the diet could be active in neurocognitive development was investigated in a study focusing on iodine. Currently in the United Kingdom, but not the rest of Europe, there is evidence of mild-to-moderate iodine deficiency in the population. 34 Iodine is a major component of thyroid hormones, which are important for fetal brain and neurological development. Urinary iodine concentrations, adjusted for creatinine to account for urine volume, were measured in spot urine samples collected in early pregnancy from 1040 ALSPAC mothers. 34 Children of mothers with iodine concentrations <150 µg/g (classified as mild-to-moderately iodine deficient) were compared with those of mothers with concentrations ≥150 µg/g for IQ at age 8 years and reading skills at age 9 years in fully adjusted analysis (Figure 1). Low and mild-to-moderately low maternal iodine status were both independently associated with an increased risk of suboptimal scores for verbal IQ and reading ability (Figure 1). 34 These analyses had been adjusted for n-3 LC-PUFA intake from fish consumption during pregnancy, suggesting that the iodine content of fish may partly account for its association with neurocognitive development. The association between maternal intake of vitamin B12 during pregnancy and child IQ was also investigated, and a small association was found with minimal adjustment. 35
The association between maternal intake of vitamin B12 during pregnancy and child IQ was also investigated, and a small association was found with minimal adjustment. 35 This relationship was greatly attenuated when adjusted for confounders such as maternal education and was abolished on further adjustment for birth weight and breastfeeding duration. It is possible that this was an over-adjustment, because both reduced birth weight and reluctance to breastfeed may lie on the pathway between vitamin B12 deficiency and IQ. However, because genetic variants that increase B12 status were only weakly associated with IQ, the authors concluded that maternal B12 status is unlikely to have an important causal relationship with offspring IQ.

These analyses underline the importance of fully accounting for factors, such as SEB, that may be associated with both dietary intakes and the behavioral or cognitive outcomes. Lack of appropriate adjustment may account for the inconsistent results from studies in this area of investigation.

Eczema, asthma, and atopy

It has been suggested that maternal diet during pregnancy may be related to the development of atopic disorders such as eczema and asthma in childhood. The possibility that concentrations of n-3 and n-6 LC-PUFAs measured in maternal and umbilical cord red blood cell membranes may be associated with wheezing and eczema, assessed by parent-completed questionnaire in early childhood (at ages 18 months and 3 years), was investigated. 36 There was some very weak evidence of associations of some ratios of fatty acids in cord blood, but not maternal blood, with eczema (the ratio of arachidonic acid to eicosapentaenoic acid was positively associated [adjusted OR per doubling, 1.14; 95% CI, 1.00-1.31; P = 0.04]) and later-onset wheeze (the ratio of linoleic acid to α-linolenic acid was positively associated [adjusted OR per doubling, 1.30; 95% CI, 1.04-1.61; P = 0.02]). 36 So far, there have been no further studies to determine whether similar associations are found with longer-term history of asthma or markers such as bronchial hyperreactivity.
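The "OR per doubling" figures above conventionally come from entering the exposure on a log2 scale, so that exponentiating the fitted coefficient gives the change in odds for each doubling of the fatty-acid ratio. Below is a hedged sketch of that convention, with simulated data tuned to an OR near the 1.14 reported above purely for illustration.

```python
# Sketch of an "OR per doubling" analysis: the exposure enters the model
# as log2(x), so exp(beta) is the odds ratio per doubling. Simulated data;
# the true OR is set near the 1.14 reported above purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
ratio = rng.lognormal(1.0, 0.7, n)   # cord-blood AA:EPA ratio (simulated)
log2_ratio = np.log2(ratio)

# Generate the outcome so the true OR per doubling is exp(log(1.14)) = 1.14
p = 1 / (1 + np.exp(-(-1.5 + np.log(1.14) * log2_ratio)))
eczema = rng.binomial(1, p)

df = pd.DataFrame({"eczema": eczema, "log2_ratio": log2_ratio})
fit = smf.logit("eczema ~ log2_ratio", data=df).fit(disp=False)
print("OR per doubling:", np.exp(fit.params["log2_ratio"]))
```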
Trace element contents of cord samples were also explored in relation to these outcomes. 37 Amounts of selenium, zinc, copper, manganese, magnesium, iron, lead, and mercury were measured. The analysis confirmed a relationship between maternal fish consumption and mercury concentrations, which were strongly associated with both oily and white fish consumption assessed in the FFQ. There was some evidence to suggest that selenium concentrations in the umbilical cord were negatively associated with persistent wheeze (adjusted OR, 0.67; 95% CI, 0.45-0.99) and that iron concentrations were negatively associated with late-onset wheeze (adjusted OR, 0.86; 95% CI, 0.75-0.99) and eczema (adjusted OR, 0.90; 95% CI, 0.83-0.98) in early childhood. 37 None of the other minerals were related to these outcomes. It is likely that not all pregnant women are achieving the recommended intakes of selenium or iron, 8 but further research is needed to confirm these findings before recommendations can be made.

Another set of analyses investigated the development of eczema, asthma, and atopy in relation to the dietary patterns found during pregnancy. 38 Strong univariate associations were found with the health-conscious dietary pattern for both eczema (positive) and wheeze (negative) and with the processed dietary pattern for wheeze (positive), but these associations were attenuated to the null on adjustment. For example, for persistent wheeze the unadjusted OR was 0.78 (95% CI, 0.70-0.87) and the adjusted OR 1.00 (95% CI, 0.86-1.16) with the health-conscious pattern score, and the unadjusted OR was 1.27 (95% CI, 1.15-1.40) and the adjusted OR 1.00 (95% CI, 0.88-1.13) with the processed pattern score. 38 This attenuation is likely to be due to the strong association of the patterns with socioeconomic variables, which are independent determinants of eczema (affluence: positive) and wheeze (low maternal age, smoking during pregnancy, and living in rented housing: positive). Associations with atopy (response to skin prick tests) at age 7 years were also not robust to adjustment. 38

An additional exploration examined the possibility that folate intake and supplementation during pregnancy and/or the methylenetetrahydrofolate reductase (MTHFR) C677T genotype of the mother or child may be related to childhood allergy or atopy, as had been found in a Danish study of adults. 39 In the children at age 7 years (n = 5364), the prevalence of atopy (positive skin prick test) was 20% and that of asthma (doctor diagnosed or wheeze present) was 10%; there was no association between these outcomes and either the child's MTHFR genotype or maternal dietary folate intake during pregnancy. In the mothers (n = 7356), the prevalence of self-reported allergy was 42% and of asthma 11%; again, there was no evidence of an association with the mother's own MTHFR genotype. 39 Taken together, there was no suggestion that impaired folate metabolism is associated with allergy in this population. Advice on healthy eating during pregnancy may increase nutrient intakes, but the evidence to date suggests that this is likely to have only a very limited (if any) effect on preventing the development of eczema, asthma, or atopy in the offspring.

Obesity development in offspring

The association between maternal prepregnancy BMI and offspring BMI could be due to an intrauterine programming effect, to genetic or environmental effects, or to a combination of all three. To assess these relationships, mother-offspring and father-offspring pairs were formed and their BMI associations compared when the child was aged 7 years. 40 A stronger association for maternal-offspring than for paternal-offspring pairs would imply an intrauterine effect, which might be expected to accumulate over generations. In ALSPAC, there was no difference in strength between the maternal and paternal associations with offspring obesity (n = 4654 complete trios). These results suggest that the associations between parental and offspring obesity are likely to be due to shared genetic and environmental characteristics rather than intrauterine effects, with shared diet being a plausible contributor. 40 These possibilities are explored further in the review of childhood diet in ALSPAC, also published in this supplement. 78
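The parent-offspring comparison described above rests on a simple idea: fit the same model for mother-offspring and father-offspring pairs and compare the coefficients, since a markedly stronger maternal coefficient would point to an intrauterine effect. The following rough sketch illustrates the comparison under a shared-familial-factors scenario; the simulated data and names are assumptions, not the ALSPAC analysis.

```python
# Sketch of the maternal- vs paternal-offspring BMI comparison described
# above. Here child BMI is driven only by factors shared with both parents,
# so the two coefficients come out similar, as ALSPAC found. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 4654                                   # matches the complete trios above
shared = rng.normal(0, 1, n)               # shared genes/household environment
df = pd.DataFrame({
    "child_bmi": 17 + 0.8 * shared + rng.normal(0, 1.5, n),
    "maternal_bmi": 24 + shared + rng.normal(0, 2, n),
    "paternal_bmi": 25 + shared + rng.normal(0, 2, n),
})

m = smf.ols("child_bmi ~ maternal_bmi", data=df).fit()
p = smf.ols("child_bmi ~ paternal_bmi", data=df).fit()
print("maternal slope:", round(m.params["maternal_bmi"], 3))
print("paternal slope:", round(p.params["paternal_bmi"], 3))  # similar size
```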
Maternal prepregnancy weight and weight gain during pregnancy were investigated in relation to offspring body composition and cardiovascular risk factors at age 9 years (n = 5154); 41 however, pregnancy diet was not included in this analysis. Greater prepregnancy weight was associated with greater offspring adiposity (higher BMI, waist circumference, and fat mass) and more adverse cardiovascular risk factors (higher systolic blood pressure and lower high-density lipoprotein cholesterol concentrations). Greater weight gain in early pregnancy (up to 14 weeks) was associated with increased adiposity, particularly if the mother gained more than 500 g/week. For weight gain between 14 and 36 weeks, only the higher weight gain (>500 g/week) was associated with offspring adiposity. 41 Adverse cardiovascular risk factors in the offspring were associated with greater pregnancy weight gain, and this was mediated through child adiposity.

The possible contribution of gestational diabetes/glycosuria to offspring overweight and obesity at age 9-11 years was investigated in 6842 mother-offspring pairs. 16 The unadjusted associations found with gestational diabetes were attenuated by adjustment for maternal prepregnancy BMI; however, there were independent associations with glycosuria (≥2 episodes during the pregnancy): the adjusted ORs were 1.35 (95% CI, 1.00-1.82) for general overweight/obesity and 1.31 (95% CI, 1.00-1.72) for central obesity (top 10% of waist circumference) in the children of mothers with glycosuria compared with those of mothers with no sign of diabetes/glycosuria. 16 Future investigation of dietary associations with these variables could be informative.

Environmental factors that modify DNA methylation at critical time points can affect gene expression and cellular function, and this type of epigenetic change has been speculatively linked to adult obesity. Some nutrients, such as folate, can supply methyl groups to the methylation process; therefore, maternal supplementation with folate and dietary intake of folate during pregnancy may be important and, in theory, could be related to offspring obesity. Childhood body composition at age 9 years was investigated in relation to folate supplementation and dietary intake during pregnancy (n = 5783), 42 but no evidence was found that intrauterine exposure to folate influences body composition, including fat mass, at this age. A variant of the maternal MTHFR C677T genotype was used in the method of Mendelian randomization 79 to further assess the possible relationship with folate. The MTHFR gene influences the availability of methyl donors during pregnancy and thus affects DNA methylation, so it might be expected to relate to offspring obesity in the same way as folate. The use of a genetic marker in this way can overcome the biases often found in observational studies. Again, no relationship was found between the genotype and childhood body composition, mirroring the null finding with folate intake. 42
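For readers unfamiliar with Mendelian randomization, as used above with the maternal MTHFR C677T variant: in its simplest form the causal effect is estimated as a Wald ratio, the gene-outcome association divided by the gene-exposure association, valid under the usual instrumental-variable assumptions. A minimal sketch follows; the data are simulated to be consistent with the null result reported above, and the published analysis may have used a different estimator.

```python
# Minimal Wald-ratio sketch of Mendelian randomization with an MTHFR-like
# variant as the instrument. Simulated under the null (no causal effect of
# folate on fat mass), consistent with the finding reported above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5783                                        # matches the sample size above
genotype = rng.integers(0, 3, n).astype(float)  # count of C677T minor alleles
folate = 10.0 - 0.8 * genotype + rng.normal(0, 2, n)  # exposure (simulated)
fat_mass = rng.normal(20, 3, n)                 # outcome, no causal effect

beta_gx = sm.OLS(folate, sm.add_constant(genotype)).fit().params[1]
beta_gy = sm.OLS(fat_mass, sm.add_constant(genotype)).fit().params[1]
print("Wald ratio (effect of folate on fat mass):", beta_gy / beta_gx)
```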
Maternal diet in relation to offspring diet in childhood

Associations between maternal prenatal diet and offspring diet, in relation to fruit and vegetable intake 43 and the consumption of a varied diet, 44 have been investigated in collaboration with European partners in the HabEat consortium. 80 Data on maternal intake during pregnancy in France, obtained via FFQ, were also available (the EDEN study 80). Maternal and child fruit and vegetable intake and the healthy plate variety score (a measure of the variety of healthy foods consumed) 44 were categorized in the same way in the 2 studies. The child's diet was assessed by FFQ at ages 2 and 3 years in EDEN and at ages 2, 3, and 4 years in ALSPAC. At all ages, a child's intake of fruit and vegetables was strongly related to maternal intake. 43 In ALSPAC, the children whose mothers consumed greater amounts of fruit and vegetables were more likely to eat fruit and vegetables at least 3 times a day: adjusted ORs were 3.10 (95% CI, 2.70-3.70), 3.10 (95% CI, 2.70-3.70), and 5.50 (95% CI, 4.70-6.30) at ages 2, 3, and 4 years, respectively. The results were similar in France. The child's diet variety score was higher if the mother had a higher variety score, in both the United Kingdom and France (P < 0.001 at each age in both cohorts). 44 Nutrient intakes were not assessed in these analyses; however, the results provide evidence that, in terms of foods consumed, a mother's diet is positively associated with her child's diet.

The associations between overall maternal energy and macronutrient intakes during pregnancy and later childhood diet were investigated using dietary data collected from the children at age 10 years (by food record). Maternal and paternal diets, assessed by FFQ when the child was aged 4 years, were incorporated into the analysis to try to distinguish between intrauterine and family dietary relationships. 45 More than 5000 mother-child pairs and 3000 father-child pairs were available. There was strong evidence of under-reporting of dietary intake by the offspring, so the main results were adjusted for this. 45 Greater maternal pregnancy macronutrient intakes (protein, fat, and carbohydrate) were associated with greater child intakes of the same nutrients. Associations between maternal and child intakes were stronger than those between paternal and child intakes, and maternal pregnancy-child associations were stronger than those with the later maternal diet. 45 A child's energy intake at age 10 years was positively associated with the child's fat mass, as were mutually adjusted protein, fat, and carbohydrate intakes, with fat intake being the strongest predictor; this was only evident once under-reporting was taken into account. Maternal pregnancy diet was not strongly associated with the offspring's fat mass. 45 This pattern of associations suggests there may be intrauterine effects of maternal diet during pregnancy that program a child's appetite. 45

DISCUSSION

This review has brought together the results of the investigations carried out using ALSPAC data in relation to diet during pregnancy. The dietary data collected in ALSPAC, using the FFQ completed by the women at 32 weeks of pregnancy, have been used by experts in various disciplines, including psychiatrists and psychologists, endocrinologists and nutritionists, epidemiologists, and pediatricians. ALSPAC was the first of the longitudinal cohort studies in Europe to start during pregnancy and to include a measure of diet during pregnancy. More than 20 European population-based birth cohorts with dietary information collected during pregnancy have followed the ALSPAC lead, including the Southampton Women's Study in the United Kingdom. 81 The focus of the ALSPAC publications to date has been on the mental health of the pregnant woman and aspects of the health and development of her offspring; combining them in one review has emphasized the value of a prebirth cohort study with comprehensive longitudinal data collection. The key findings are listed in Box 1.

Dietary intakes during pregnancy were mostly adequate when measured against dietary recommendations, with the exception of some key nutrients, in particular iron, magnesium, potassium, and folate. 8 Relationships of pregnancy nutrient intakes and dietary patterns with several child growth and health outcomes have been explored within ALSPAC; the majority have shown little evidence of important relationships with height, 19 blood pressure, 20,21 eczema, 36-38 asthma, 36-39 or atopy. 38,39
There was evidence of very small positive effects on bone development of the key marginal nutrients magnesium, potassium, and folate; 23 this finding is supported by the Southampton Women's Study, in which consumption of a nutrient-dense prudent diet in the third trimester of pregnancy was positively associated with bone development. 82

Exploration of direct associations of maternal pregnancy diet with birth outcomes found that mothers who ate no fish during pregnancy showed a small but persistent association with a higher frequency of intrauterine growth retardation in their offspring compared with mothers who ate fish frequently. 17 Data from 19 of the European birth cohorts, including the Southampton Women's Study but not ALSPAC, were combined in a meta-analysis assessing birth weight and length of gestation in relation to maternal fish intake during pregnancy. 83 A small but significant increase in birth weight and a slightly lower risk of preterm birth were found with moderate compared with no fish intake; this study did not investigate intrauterine growth retardation, so it did not directly confirm the ALSPAC findings. ALSPAC mothers (of boys) who were vegetarian during pregnancy had a greater likelihood of giving birth to a son with hypospadias than omnivorous mothers. 18 These results imply that foods eaten during pregnancy can affect the physical development of the fetus.

Several publications have confirmed the importance of fish and seafood consumption during pregnancy, with benefits for both the mother and the child emerging. Anxiety and depression during pregnancy may have adverse consequences, with possible effects on delivery and birth outcomes and on the later development and behavior of the child. As such, it is of public interest that 2 ALSPAC papers reported that women who ate little or no fish and seafood during pregnancy had an increased risk of developing depressive 12 and anxiety symptoms. 13 Furthermore, robust evidence was found that the offspring of mothers who did not eat fish or seafood during pregnancy showed poorer neurocognitive development than the offspring of those who frequently ate fish and seafood, particularly in relation to visual development, 26 communication, 27 and verbal IQ. 28 In support of these findings, the Danish National Birth Cohort and a prebirth cohort study from the United States (Project VIVA) reported similar associations with maternal fish intake during pregnancy when they studied attainment of developmental milestones at 18 months in Denmark 84 and child cognition at 2 time points (ages 6 months 85 and 3 years 86) in the United States.

A plausible theoretical mechanism for these associations relates to the preferential incorporation of n-3 LC-PUFAs (supplied by fish in the diet) into brain cell membranes during fetal development. ALSPAC data were used to explore the genetic underpinning of n-6 and n-3 LC-PUFA concentrations in maternal and offspring blood, showing that both maternal and child genotypes influence fatty acid status during pregnancy. 29,30 However, only very weak associations between maternal LC-PUFA concentrations and offspring IQ 32 were found, suggesting that the strong relationship between fish consumption during pregnancy and offspring cognitive function may have other contributing factors. Fish is a rich source of many nutrients, including iodine.
Iodine is a vital component of the thyroid hormones crucial for brain and neurological development, so it is possible that the iodine status of the mother could contribute to the association of fish intake with cognitive development. In ALSPAC offspring, low maternal urinary iodine concentration was associated with lower scores for verbal IQ and reading ability, and this was independent of n-3 LC-PUFA intake from fish (Figure 1). 34 Considered together, these studies imply that pregnant women should be advised to include fish as part of their diet; this is an important public health message.

However, there are concerns about fish consumption because certain types of fish are contaminated by mercury, which is toxic to humans at high concentrations. Blood mercury concentrations, reflecting the usual UK diet, were quite low. 10,27 ALSPAC data showed that fish consumption contributes to the maternal burden of mercury 10 and that amounts of mercury were higher in umbilical cord samples if mothers ate fish during pregnancy, 27 but no evidence was found that these relatively low concentrations of mercury were detrimental to offspring cognitive development. 27,28 Project VIVA also found a direct relationship between mothers' fish consumption and their blood mercury concentrations. 86 Higher mercury concentrations were associated with poorer cognitive test performance in the offspring at age 3 years, but higher fish intake was associated with better test scores; associations were strengthened when both fish and mercury were included in the analysis. These findings are in line with the ALSPAC findings.

It is possible that the associations found between fish consumption during pregnancy and neurocognitive development arose as a result of residual confounding, because both fish consumption and some measures of neurocognitive development are related to maternal educational attainment. However, visual development at age 3.5 years was associated with maternal fish consumption but not with maternal education. 26 Furthermore, when the relationships between maternal fish consumption and childhood IQ were assessed, paternal diet showed no independent association and did not substantially attenuate the association with maternal diet. 28 Taken together, these findings indicate that an intrauterine effect of maternal diet is more likely to be involved in the association with offspring IQ, and it is unlikely that maternal education or other social factors are important confounders of this relationship.

Several analyses have confirmed that maternal educational, financial, and smoking statuses are associated with differences in food and nutrient intakes in the maternal diet. 9,15,28 In particular, mothers with the lowest educational attainment were less likely to eat fish and foods associated with a health-conscious dietary pattern than those with the highest educational attainment. 9 Mothers who smoked and those with financial difficulties ate more processed foods than those who did not smoke or those with no financial difficulties. 9,15 The UK Southampton Women's Study cohort found similar inequalities in women's diets, mainly associated with educational attainment. 87
These diet inequalities could play a role in later health inequalities in the offspring, both in an intrauterine context and later in childhood, particularly as there is evidence that childhood diets are related to mothers' pregnancy diets.

Box 1 Key findings in brief
• Eating fish/seafood during pregnancy was associated with beneficial effects on the development of the brain and eyesight of the child
• Women who ate fish/seafood during pregnancy showed fewer symptoms of depression or anxiety than those who ate no fish
• Higher maternal educational attainment was related to better quality of diet consumed during pregnancy
• Maternal smoking during pregnancy and greater financial difficulty were related adversely to the quality of the diet consumed during pregnancy
• Some pregnant women had lower-than-recommended dietary intakes of key nutrients such as iron, potassium, magnesium, and folate
• Maternal diet during pregnancy was predictive of offspring diet in childhood
• High maternal prepregnancy weight and greater weight gain during pregnancy were associated with increased fatness and adverse cardiovascular risk factors in offspring in mid childhood

There was evidence that a mother's weight status before and during pregnancy is related to later offspring adiposity and, thus, to cardiovascular risk factors in the offspring, with high maternal weight or weight gain being detrimental. 41 Further investigation is needed to determine whether maternal diet during pregnancy plays any role in these relationships. There was some evidence that maternal diet during pregnancy is related to the child's macronutrient intake at age 10 years; 45 maternal pregnancy-child associations were stronger than those with the later maternal diet or paternal diet, suggesting the possibility of an intrauterine effect, perhaps through the programming of appetite. However, the associations found between maternal fruit and vegetable intake 43 and diet variety 44 and similar measures in their young children are more likely to be due to the copying of parental habits and the availability of particular foods in the household. These suggestions are supported by findings from the Southampton Women's Study assessing the influence of maternal diet and other characteristics on childhood diet at age 3 years. 88

It is important to note that almost all publications related to child outcomes in ALSPAC have concerned prepubertal children. It is imperative that analyses be undertaken to assess whether the results, particularly adverse ones, are similar for adolescent and adult outcomes.

Strengths and limitations

A relatively large cohort of women and their offspring was followed intensively over time with standardized measurements of many important outcomes. Prospective assessment of current diet and other outcomes was used rather than asking participants to look back over several years; thus, recall bias was avoided. However, diet was assessed only once during pregnancy (in the third trimester) because of financial constraints and the priority of keeping participant burden to a minimum; therefore, possible dietary changes at different stages of pregnancy were not captured. A self-completed, unquantified FFQ designed for this population was used to assess diet; this is a very cost-effective and well-accepted method of dietary assessment but can lead to biases and inaccuracies.
Notwithstanding this, the overall nutrient intakes and types of foods eaten were very similar to those found in a UK national survey of (nonpregnant) women carried out around the same time. 8,72 The women in ALSPAC were all from a particular geographical area of England; at recruitment, they were reasonably representative of the population in the area. There was a very high participation rate from the cohort during pregnancy, but this dropped off over time and when clinic follow-ups were undertaken; by age 10 years, less than half of the original cohort was involved. Furthermore, there was differential dropout of the least educated members of the cohort. It is unlikely that this would have altered the longitudinal findings, although statistical power would be diminished.

CONCLUSION

In interpreting results from associations between prenatal diet and outcomes, it is important to note the strong associations between maternal SEB and diet, especially in the types of foods eaten and dietary patterns, with more processed foods being consumed when socioeconomic status was low. However, with notable exceptions, once these factors were taken into account, there was little evidence of causal relationships between intakes of particular nutrients during pregnancy and birth or childhood health outcomes.

The most robust relationship found was between maternal fish/seafood intake during pregnancy and neurocognitive development in the offspring, with fish consumption being beneficial to childhood outcomes. There were other less robust associations with fish consumption, such as a lower frequency of maternal depressive and anxiety symptoms during pregnancy and less intrauterine growth retardation. Fish constitute the major dietary source of n-3 LC-PUFAs, and investigation of the genetic background relating to the metabolism of these fatty acids has provided some evidence that they may be involved. These findings do not yet confirm any particular constituent of fish as the active ingredient, but the results with iodine deficiency are suggestive of the involvement of this and possibly other nutrients. Whatever the mechanism, the associations suggest that a recommendation to eat fish regularly during pregnancy is the best advice.

was involved in data interpretation. L.R.J. and J.G. contributed to the drafting of the paper and the interpretation of the data. All authors approved the final version.

Funding. The UK Medical Research Council (Grant ref: 74882), the Wellcome Trust (Grant ref: 092731), and the University of Bristol provide core support for ALSPAC. This review was specifically funded by Wyeth Nutrition but was carried out independently.

Declaration of interest. P.M.E. and L.R.J. have from time to time received research funding, and P.M.E. has received consultancy funding, from Pfizer Nutrition Ltd. and Danone Baby Nutrition (Nutricia Ltd). P.M.E. currently receives research funding from Nestlé Nutrition. J.G. has no relevant interests to declare.