Minimal length of two intersecting simple closed geodesics

On a hyperbolic Riemann surface, given two simple closed geodesics that intersect n times, we address the question of a sharp lower bound L_n on the length attained by the longest of the two geodesics. We show the existence of a surface S_n on which there exist two simple closed geodesics of length L_n intersecting n times, and we explicitly find L_n for n ≤ 3.

Introduction

Extremal hyperbolic Riemann surfaces for a variety of geometric quantities are objects of active research. Well known cases include the study of surfaces with maximum size systoles ([2], [4], [16]), surfaces with largest embedded disk ([3], [10]) or, more classically, surfaces with maximum number of automorphisms (the study of Hurwitz surfaces and related topics). These subjects are related to the study of the simple length spectrum of a surface S, denoted ∆_0(S), which is the ordered set of lengths of (non-oriented, primitive) simple closed geodesics (with multiplicity). The question of interpreting the geometry of the surface through the values found in the simple length spectrum seems to be a very difficult subject. For instance, for surfaces of genus 2 and 3, it is not known whether the simple length spectrum determines the surface up to isometry (see [7]).

One of the major tools used to approach these problems is the collar theorem, and in particular a corollary which states that two short simple closed geodesics (of length less than 2 arcsinh(1)) cannot intersect (see [7]). The bound is sharp because it is realized on a particular torus with a cusp. The bound is never reached on a closed surface but, for any genus, it is realized in the compactification of the corresponding Moduli space. The goal of this article is to generalize this result by studying the relationship between the number of intersection points of two simple closed geodesics and the lengths of the geodesics. The surfaces we consider lie in the Moduli space of surfaces with boundary M_{g,k}, where g is the genus and k is the number of simple closed boundary geodesics, which we allow to be cusps (geodesics of length 0). The foundation of our study is found in the following theorem (section 2).

Theorem 1.1. On a hyperbolic Riemann surface S, let α and β be simple closed geodesics that intersect n times. Then there exists a universal constant L_n such that max{ℓ(α), ℓ(β)} ≥ L_n, and L_n → ∞ as n → ∞. Furthermore, a surface S_n realizing the bound exists.

By realizing the bound, we mean that on S_n there are two simple closed geodesics of length L_n that intersect n times. We further investigate the asymptotic behavior of L_n in the following proposition.

Proposition 1.2. For n ≥ 1, let l_n be the unique positive solution of l = 2n arcsinh(1/sinh(l/2)). Then l_n ≤ L_n < 2l_n.

We are able to describe the surfaces explicitly for n ∈ {2, 3}, which gives us the following result.

Theorem 1.3. L_2 = 2 arccosh(2), and L_3 = 2 arccosh(b*) ≈ 2 arccosh(2.648), where b* is the explicit algebraic value obtained in section 4.

As mentioned earlier, L_1 = 2 arcsinh(1), and note that the value for L_2 was previously proved in [9], but the proof presented here is new. The surfaces S_1, S_2 and S_3 are specific once-punctured tori. We show that they all have non-trivial isometry groups. (A once-punctured torus is necessarily hyperelliptic, so by a non-trivial isometry group we mean one not isomorphic to Z_2.) It seems reasonable to conjecture that for all n, S_n is also a once-punctured torus with a non-trivial isometry group. This article is organized as follows. Section 2 is devoted to preliminaries and the proofs of theorem 1.1 and proposition 1.2.
The next two sections concern the exact values of L_2 and L_3 and are similar in nature. The final section discusses possible future directions for related questions.

Preliminaries and groundwork

We will be considering hyperbolic Riemann surfaces of finite area, with or without boundary. We allow boundary to be either cusps or simple closed geodesics. A surface will always designate a surface of this type. The signature of a surface will be denoted (g, k), where g is the genus and k the number of boundary geodesics (or cusps). A surface of signature (0, 3) is called a pair of pants, a surface of signature (1, 1) a one-holed torus, and a surface of signature (0, 4) a four-holed sphere. We reserve the term punctures for cusps; holes can be cusps as well as boundary geodesics. The Moduli space of surfaces with boundary will be denoted M_{g,k}. The set of interior simple closed geodesics of a surface S will be denoted G(S). The length of a simple closed geodesic α will be denoted ℓ(α), although a geodesic and its length might not always be distinguished. The term geodesic will sometimes be used instead of simple closed geodesic, but only if it is obvious in the context. Closed geodesics will be considered to be non-oriented (unless specified) and primitive, meaning that a closed geodesic cannot be written as the k-fold iterate of another closed geodesic. Seen this way, geodesics are point sets independent of parameterization. We denote by int(α, β) the number of transversal intersection points between two simple closed geodesics α and β.

We define the simple length spectrum ∆_0(S) as the ordered set of lengths of all interior simple closed geodesics. Notice that our definition takes into account multiplicity, namely that if there are n distinct simple closed geodesics of S with equal length ℓ, then the value ℓ will appear n times in ∆_0(S). Consider two surfaces S and S̃ with simple length spectra ∆_0(S) = {ℓ_1 ≤ ℓ_2 ≤ …} and ∆_0(S̃) = {ℓ̃_1 ≤ ℓ̃_2 ≤ …}. The notation ∆_0(S) < ∆_0(S̃) is an abbreviation for ℓ_i < ℓ̃_i for all i ∈ N*.

In order to describe the pasting of a simple closed geodesic, one generally uses twist parameters. The only use we will have of twist parameters is to describe what we call pasting without twist (or with zero-twist) and pasting with a half-twist. Recall that a pair of pants has three disjoint unique simple geodesic paths between distinct boundary geodesics, called perpendiculars, which decompose the pair of pants into two isometric hyperbolic right-angled hexagons. If two pairs of pants are pasted along a geodesic α such that the endpoints of the perpendiculars coincide, then we refer to a pasting with zero-twist, or without twist. The terminology is slightly different for one-holed tori. Consider a pair of pants with two boundary geodesics α_1 and α_2 of equal length, and paste α_1 and α_2 together in order to obtain a one-holed torus. If the endpoints of the common perpendicular a between α_1 and α_2 coincide, then we refer to a pasting with zero-twist, or without twist. If the endpoints of a are diametrically opposite on the geodesic formerly known as α_1 or α_2, then we refer to a half-twist. Finally, if an interior simple closed geodesic α is said to be pasted with a half-twist, then we mean that it has been obtained from the construction described above.

A function f_α : M_{g,k} → R_+ that associates to a closed geodesic α its length, depending on the choice of metric, is generally referred to as a length function.
Length functions are well known to be analytic (one way of seeing this is via Fricke trace calculus; see for instance [1]). What is interesting to us is that the length function of an interior closed geodesic remains continuous even if a boundary length goes to 0. There is an immediate corollary to this result which is very useful to our study.

Corollary 2.4. If α and β are two simple closed geodesics that intersect n times on a surface with non-empty boundary and with at least one boundary geodesic not a cusp, then there exists a surface of the same signature, with only cusps as boundary, containing two simple closed geodesics α̃ and β̃ which intersect n times and such that ℓ(α̃) < ℓ(α) and ℓ(β̃) < ℓ(β).

This corollary implies that we can limit ourselves to studying surfaces with cusp boundary. This will be useful in the following theorem, which is the starting point of our work.

Theorem 2.5. There exists a universal constant L_n such that max{ℓ(α), ℓ(β)} ≥ L_n for any two simple closed geodesics α and β that intersect n times on a hyperbolic compact Riemann surface. Furthermore, a surface S_n realizing the bound exists. Finally, L_n → ∞ as n → ∞.

Proof. The idea of the proof is to show that, for every n, we are evaluating a continuous function on a finite set of compact sets. The function is the one that associates to a surface S the value

f(S) = min{ max{ℓ(α), ℓ(β)} : α, β ∈ G(S), int(α, β) = n }.

For a given signature (g, k), f : M_{g,k} → R_+ is obviously continuous and bounded. (Mind that for certain signatures f may not be defined; for instance, on surfaces of signature (0, 4) there are no pairs of simple closed geodesics that intersect an odd number of times.) Suppose α and β are two simple closed geodesics on a surface S that intersect n times. Consider the following subsurface S_{α,β} of S: S_{α,β} is the embedded subsurface (possibly with boundary) spanned by a tubular neighborhood of the point set α ∪ β, so that all interior simple closed geodesics of S_{α,β} intersect either α or β. In other words, α and β fill S_{α,β}. For example, if n = 1 this is necessarily a surface of signature (1, 1). It is easy to see that the signature (g, k) of S_{α,β} is universally bounded by a function of n (g ≤ f_1(n), k ≤ f_2(n)). There are thus a finite number of possible signatures for S_{α,β}, which we shall denote (g_1, k_1), …, (g_m, k_m). As any interior simple closed geodesic of S_{α,β} intersects either α or β, and as we are trying to minimize the lengths, the collar theorem ensures that the length of the systole of S_{α,β} is bounded from below (otherwise the maximum length of α and β would be unbounded). Denote by ε_n this lower bound. By corollary 2.4, as we are searching for a minimal value among all surfaces, we can limit ourselves to searching among surfaces with all boundary geodesics being cusps. Denote by M^0_{g,k} the restricted set of surfaces of signature (g, k) with cusp boundary. Further denote by M^0_{(g,k),ε_n} the subset of M^0_{g,k} with systole bounded below by ε_n. We are now searching among a finite set of such sets, namely for each (g_j, k_j), j ∈ {1, …, m}, we need to study the set M^0_{(g_j,k_j),ε_n}. These sets are well known to be compact (for surfaces without boundary see [13], and with boundary see [12]). As f is a continuous function on a finite union of compact sets, it admits a minimum, and we denote this minimum value L_n. A point in Moduli space which reaches the minimum is denoted S_n. We now need to show that L_n → ∞ as n → ∞.
Suppose this is not the case, meaning there exists some L such that L_n < L for all n. This would mean that for any n, there exist two simple closed geodesics α_n and β_n on some surface S that intersect n times such that ℓ(α_n) ≤ ℓ(β_n) ≤ L. By the collar theorem, ℓ(β_n) ≥ 2n arcsinh(1/sinh(L/2)). But this is a contradiction, because for any L, n can be chosen so that this is not the case. The theorem is now proven.

The notation S_{α,β}, as defined in the previous proof, will be regularly used throughout the article. To study the asymptotic behavior of L_n, we shall use the quantity l_n defined in the following proposition.

Proposition 2.6. For n ≥ 1, let l_n be the unique positive solution of l = 2n arcsinh(1/sinh(l/2)). Then l_n is strictly increasing in n, and l_n ≤ L_n < 2l_n.

Proof. Let us begin by showing l_n ≤ L_n. If a simple closed geodesic α of length l_n intersects a simple closed geodesic β n times, then β is at least as long as 2n times the width of the collar of α; thus ℓ(β) ≥ l_n. The width of the collar of α increases when α gets shorter, thus l_n ≤ L_n. It remains to show that L_n < 2l_n. For n ∈ N, let Y be a pair of pants whose boundary consists of a cusp and two boundary geodesics, α_1 and α_2, both of length l_n. Let us paste these two geodesics together (denote the resulting geodesic α) without twist. The common perpendicular between α_1 and α_2 is now a simple closed geodesic, which we shall denote δ. Notice that ℓ(δ) = l_n/n. For a given primitive parameterization of α and δ, consider the simple closed curve β̃ = δ^n α and its unique geodesic representative β. By construction, ℓ(β) < ℓ(β̃) = l_n + n(l_n/n) = 2l_n. We have thus constructed a once-punctured torus with two interior geodesics α and β that satisfy int(α, β) = n and max{ℓ(α), ℓ(β)} < 2l_n. It follows that L_n < 2l_n.

Finally, as an illustration of our investigation, let us give the value of L_1 and describe the surface S_1. Corollary 2.2 implies that L_1 ≥ 2 arcsinh(1). In fact, L_1 = 2 arcsinh(1), and this can be shown by constructing the surface S_1 which realizes the bound L_1. Consider, in the hyperbolic plane, a quadrilateral with three right angles and one zero angle (a point at infinity). This quadrilateral can be chosen such that the two finite length adjacent sides are of length arcsinh(1). By taking four copies of this quadrilateral and pasting them together as in the following figure, one obtains a once-punctured torus with two simple closed geodesics of length 2 arcsinh(1) that intersect once. This once-punctured torus is the only surface on which two intersecting geodesics can have length L_1. It is worth mentioning that this torus has other remarkable properties: it is the only once-punctured torus with an automorphism of order 4.
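As a quick numerical illustration of proposition 2.6 (illustrative code, not part of the original argument), the defining equation l = 2n arcsinh(1/sinh(l/2)) can be solved by bisection; for n = 1 the root coincides with L_1 = 2 arcsinh(1), and the bracketing bounds l_n ≤ L_n < 2l_n can be tabulated for small n.

```python
# Solve l = 2n * arcsinh(1 / sinh(l / 2)) for l_n by bisection.
import math

def l_n(n: int, lo: float = 1e-9, hi: float = 50.0, tol: float = 1e-12) -> float:
    f = lambda l: 2 * n * math.asinh(1.0 / math.sinh(l / 2.0)) - l  # decreasing in l
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:          # root lies above mid
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

L1 = 2 * math.asinh(1.0)        # known value, ~1.7627
assert abs(l_n(1) - L1) < 1e-9  # l_1 = L_1 = 2 arcsinh(1)
for n in range(1, 6):
    ln = l_n(n)
    print(f"n = {n}: l_n = {ln:.4f}, so l_n <= L_n < {2 * ln:.4f}")
```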
3. Finding S_2 and calculating L_2

Let us consider two simple closed geodesics α and β on a surface S that intersect twice in points A and B, and the subsurface S_{α,β} as defined in the proof of theorem 2.5. In order to distinguish the possible signatures of the surface S_{α,β}, let us give α and β orientations. Let C_β be a collar around β. The ordered pair of simple closed oriented geodesics (α, β) induces an orientation on C_β in both A and B. These orientations are either opposite (case 1) or the same (case 2). This is illustrated in figure 2. In case 1, S_{α,β} is a surface of signature (0, 4) obtained by cutting along the simple closed geodesics ε_1, ε_2, ε_3 and ε_4 homotopic to the simple closed curves ε̃_1, ε̃_2, ε̃_3 and ε̃_4 shown in figure 3.

Figure 3. The simple closed curves ε̃_1, ε̃_2, ε̃_3 and ε̃_4 in case 1

In case 2, there are two possible topological situations for the minimal surface S_{α,β}. Indeed, consider the simple closed curves ε̃_1 and ε̃_2 shown in figure 4. One of these curves may be null-homotopic, but not both, because otherwise the surface would be a torus without holes, which of course cannot admit a hyperbolic metric.

Figure 4. The simple closed curves ε̃_1 and ε̃_2 in case 2

If only one curve is not null-homotopic, say ε̃_1, we cut the surface along the geodesic that is homotopic to ε̃_1 to obtain a surface of signature (1, 1). If neither curve is null-homotopic, we cut the surface along the two geodesics homotopic to ε̃_1 and ε̃_2 to obtain a surface of signature (1, 2). Therefore, in view of corollary 2.4, S_2 is a sphere with four cusps, a torus with one cusp or a torus with two cusps. First let us investigate geodesics intersecting twice on a four-holed sphere.

Proposition 3.1. Let α and β be two simple closed geodesics that intersect twice on a four-holed sphere X. Then max{ℓ(α), ℓ(β)} ≥ 4 arcsinh(1) = 2 arccosh(3). Furthermore, equality holds for a sphere with four cusps obtained by gluing two pairs of pants with two cusps and third boundary geodesic of length 2 arccosh(3) without twist.

Proof. By corollary 2.4 it suffices to show the result for X a sphere with four cusps. Suppose ℓ(α) ≥ ℓ(β). Now suppose by contradiction that ℓ(α) < 4 arcsinh(1). On a four-holed sphere, distinct interior simple closed geodesics cross at least twice. By the collar theorem, the length of any other interior simple closed geodesic must be strictly greater than four times the width w(α) of the half-collar around α, which by 2.1 satisfies w(α) > arcsinh(1/sinh(arcsinh(1))) = arcsinh(1). Thus ℓ(β) > 4 arcsinh(1), a contradiction. Thus equality can only be attained if both α and β are of length 4 arcsinh(1). It follows that a surface on which equality is reached has a simple closed geodesic α of length 4 arcsinh(1). If there is any twist around this geodesic, then all simple closed geodesics crossing α are of length strictly greater than 4 arcsinh(1), which concludes the argument.

Let us now consider the case of two geodesics that intersect twice on a one-holed torus. We recall that one-holed tori are hyperelliptic, and we shall refer to the three interior fixed points of the hyperelliptic involution as the Weierstrass points.

Definition 3.2. Let T be a one-holed torus and let α be an interior simple closed geodesic of T. We denote by h_α the unique simple geodesic path which goes from boundary to boundary, meets the boundary at two right angles, and does not cross α. We will refer to the geodesic path h_α as the height associated to α (see figure 5).

Figure 5. The height h_α associated to α

By using hyperbolic trigonometry, one can prove the following result (for a proof, see for instance [16]).

Lemma 3.3. Let T be a one-holed torus. Let γ be an interior simple closed geodesic of T and denote by h_γ its associated height. Then γ passes through exactly two of the three Weierstrass points, and the remaining Weierstrass point is the midpoint of h_γ. Furthermore, the length of γ is directly proportional to the length of h_γ.

The following proposition, slightly more general than what we require, has an interest in its own right.

Proposition 3.4. Let T be a one-holed torus (where the boundary geodesic ε is allowed to be a cusp). Let α be an interior simple closed geodesic and let β be any other interior simple closed geodesic that intersects α twice. Then ℓ(β) is bounded below by an explicit function of ℓ(α) and ℓ(ε). Furthermore, equality holds only when T is obtained by pasting α with a half-twist and β is the shortest simple closed geodesic that intersects α twice.

Proof.
For a given α and ε, let β be the shortest simple closed geodesic that crosses α twice. Now consider the family of tori obtained by twisting along α. The key to the proof is showing that β is shortest when α is pasted with a half-twist. First, let us suppose that ε is not a cusp. Consider the height h_β associated to β. By lemma 3.3, the length of h_β is proportional to the length of β, so minimizing the length of β is equivalent to minimizing the length of h_β. Cutting T along α and the path a, one obtains a one-holed hyperbolic rectangle as in figure 6. (This particular way of viewing the one-holed torus is a central part of [5].) Notice that ℓ(e_1) = ℓ(e_2), which can be seen either by using hyperbolic trigonometry or by using the hyperelliptic involution. By cutting along the paths h_α, e_1, e_2 and a, one would obtain four isometric right-angled pentagons. The path h_β intersects α twice, and thus the two subpaths of h_β between α and ε are of length at least ℓ(e_1) (= ℓ(e_2)), and the subpath from α and back again is at least of length ℓ(a). Thus ℓ(h_β) ≥ ℓ(e_1) + ℓ(e_2) + ℓ(a). Equality only holds when the path h_β is exactly the path e_1 ∪ a ∪ e_2. This occurs exactly when α is pasted with a half-twist. Now, when ε is a cusp, we cannot immediately assume that the optimal situation is the half-twist, but this is true because of the continuity of the lengths of interior closed curves when ℓ(ε) goes to 0.

We now need to calculate the length of β when α is pasted with a half-twist. For this we shall use the well known formulas for the different types of hyperbolic polygons (see for instance [7, p. 454]). This can be done by considering two hyperbolic polygons inscribed in T. The first one, denoted Q, is one of the hyperbolic quadrilaterals with three right angles delimited by arcs of the paths a, α, β and h_α as in figure 7. The second polygon P is one of the four isometric right-angled pentagons (or quadrilaterals with a point at infinity when ℓ(ε) = 0) obtained by cutting T along α, a, e_1, e_2 and h_α (see figure 7).

Figure 7. The polygons P and Q

Applying the formulas for a quadrilateral with three right angles to Q, and the formula for a right-angled pentagon to P, one obtains two relations; putting these two formulas together and manipulating a little yields the claimed bound, which proves the result.

There is an immediate corollary which gives a universal lower bound on the greater of the lengths of two geodesics intersecting twice on a one-holed torus.

Corollary 3.5. Let T be a one-holed torus (where the boundary geodesic ε is allowed to be a cusp). Let α and β be two interior simple closed geodesics that intersect twice. Then max{ℓ(α), ℓ(β)} ≥ 2 arccosh(2). Furthermore, equality holds for a torus T with a cusp which contains a simple closed geodesic α of length 2 arccosh(2) pasted with a half-twist, taking β to be the shortest simple closed geodesic which intersects α twice.

Note that the torus described in corollary 3.5 is the same torus as S_1. To see this, we shall find two simple closed geodesics that intersect once, both of length 2 arccosh(√2). Consider the quadrilateral Q in figure 7, and in particular the diagonal of Q from top left to bottom right. Now consider the diagonals of each one of the four isometric copies of Q. Together these four geodesic paths form two simple closed geodesics, say γ_1 and γ_2, of equal length that intersect once.
When ℓ(ε) = 0 and ℓ(α) = 2 arccosh(2), a quick calculation shows that ℓ(γ_1) = ℓ(γ_2) = 2 arccosh(√2). As S_1 is unique up to isometry, the two tori are the same.

Theorem 3.6. L_2 = 2 arccosh(2), and S_2 is the once-punctured torus described in corollary 3.5.

Proof. In view of proposition 3.1 and corollary 3.5, we now know that S_2 is a torus with one or two punctures. Suppose S_2 is a torus with two punctures, i.e., the curves labeled ε̃_1 and ε̃_2 on figure 4 are homotopic to cusps.

Figure 8. Two intersections on a twice-punctured torus

Now suppose we have two simple closed geodesics, α and β, with ℓ(α) ≥ ℓ(β) and ℓ(α) ≤ 2 arccosh(2). (If this is not possible, then necessarily S_2 is the once-punctured torus of corollary 3.5.) The geodesic α is cut into two arcs by β, say a_1 and a_2. Suppose ℓ(a_1) ≥ ℓ(a_2). Consider the geodesic curves γ and δ as in figure 8: γ is the separating curve that intersects a_1 twice but doesn't intersect a_2 or β, and δ is the curve that intersects β twice but doesn't intersect γ or α. Consider the lengths x, x′, y, y′ of the different arcs of β as labeled on figure 8. We have ℓ(δ) < x + ℓ(a_2) + y + x′ + ℓ(a_2) + y′ ≤ ℓ(α) + ℓ(β) ≤ 2ℓ(α). Notice that this implies that the width of the collar around δ satisfies w(δ) > arcsinh(1/sinh(ℓ(α))). We can now apply the collar theorem to β, using the fact that β intersects both α and δ twice and that α and δ do not intersect. The collar theorem 2.1 implies that the length of β satisfies ℓ(β) ≥ 4w(α) + 4w(δ) > 4 arcsinh(1/sinh(ℓ(α)/2)) + 4 arcsinh(1/sinh(ℓ(α))) > ℓ(α), contradicting ℓ(α) ≥ ℓ(β). This proves the result.
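The contradiction at the end of this proof can be checked numerically; the following sketch (illustrative only, using the collar-width bounds quoted above) verifies that 4 arcsinh(1/sinh(L/2)) + 4 arcsinh(1/sinh(L)) > L for every candidate length L = ℓ(α) ≤ 2 arccosh(2).

```python
# Numerical check of the inequality used in the proof of theorem 3.6: if
# l(alpha) <= 2*arccosh(2), the collar bounds force
# l(beta) >= 4*arcsinh(1/sinh(L/2)) + 4*arcsinh(1/sinh(L)) > L >= l(beta).
import math

def beta_lower_bound(L: float) -> float:
    return 4 * math.asinh(1 / math.sinh(L / 2)) + 4 * math.asinh(1 / math.sinh(L))

L_max = 2 * math.acosh(2.0)                 # ~2.6339
for k in range(1, 1001):
    L = L_max * k / 1000
    assert beta_lower_bound(L) > L          # holds on the whole range
print(f"{beta_lower_bound(L_max):.4f} > {L_max:.4f}")  # ~2.78 > ~2.63 at the endpoint
```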
4. Finding S_3 and calculating L_3

Let α and β be two simple closed geodesics on a Riemann surface that intersect three times. Name the intersection points A, B and C, and orient α and β such that A, B and C come in that order on α and on β. As in the case of two intersections, we consider a collar around β and the orientations induced on it at the different intersection points by the ordered pair of simple closed oriented geodesics (α, β). We distinguish two situations: (1) (α, β) induces opposite orientations in two of the three intersection points (without loss of generality we can assume that (α, β) induces opposite orientations in A and in B); (2) (α, β) induces the same orientation in A, in B and in C. In the first situation, lemma 4.1 will show that max{ℓ(α), ℓ(β)} ≥ 2 arccosh(3). In the second situation, we will show that the optimal surface is a torus with a cusp containing two simple closed geodesics of lengths approximately 2 arccosh(2.648) intersecting one another three times.

Lemma 4.1. Let α and β be two simple closed oriented geodesics on a Riemann surface that intersect three times in A, B and C, such that A, B, C are consecutive on both α and β, and such that (α, β) induces opposite orientations in A and in B. Then there exist two simple closed geodesics γ and δ that intersect twice and satisfy max{ℓ(α), ℓ(β)} ≥ max{ℓ(γ), ℓ(δ)}.

Proof. Comparing the lengths of the arcs between B and C, there are two possible situations: (1) the length BC_α of the oriented geodesic arc from B to C on the geodesic α is smaller than BC_β, the length of the oriented geodesic arc from B to C on the geodesic β; (2) this is not the case, meaning BC_α ≥ BC_β. We now build the oriented closed curves γ̃ and δ̃:
• In situation 1, we set γ̃ = α; in situation 2, γ̃ is obtained by following α from A to B, then β from B to C, and again α from C to A.
• In situation 1, δ̃ is obtained by following β from A to B, then α from B to C, and again β from C to A; in situation 2, we set δ̃ = β.
These two curves γ̃ and δ̃ are homotopic to two simple closed oriented geodesics γ and δ intersecting one another twice such that max{ℓ(α), ℓ(β)} ≥ max{ℓ(γ), ℓ(δ)}.

Lemma 4.2. Let S be a Riemann surface and let α and β be two oriented simple closed geodesics on S intersecting one another three times such that the ordered pair (α, β) induces the same orientation on S in every intersection. Name the intersections A, B, C such that they are consecutive on α. If A, B, C are also consecutive on β, then there is a torus with one cusp or a torus with two cusps containing two simple closed geodesics γ and δ which satisfy int(γ, δ) = 3 and max{ℓ(α), ℓ(β)} ≥ max{ℓ(γ), ℓ(δ)}.

Proof. If one of the curves ε̃_1, ε̃_2 or ε̃_3 is null-homotopic, corollary 2.4 proves the lemma. Otherwise, the optimal topological situation is a torus with three cusps (again due to corollary 2.4). On this surface, there is a simple closed geodesic η dividing the surface into X_η, a sphere with three cusps and boundary geodesic η, and T_η, a surface of signature (1, 1). Notice that β is entirely contained in T_η, as can be seen in figure 10.

Figure 10. Three intersections on a torus with three cusps

The intersection points between α and η will be denoted U, V, W and Y as in figure 10. First consider the geodesic arc of α from Y to W. There is a dividing geodesic ε on X_η that does not intersect this arc. Cutting X_η along ε, we get a surface of signature (0, 3). We can now diminish the length of ε in order to get another cusp. This surface of signature (0, 3), with two cusps and the boundary geodesic η, contains a geodesic arc from Y to W that is shorter than the original arc from Y to W on X_η (this is part of the statement of the technical lemma used in [14] in order to show theorem 2.3). Obviously, we can do the same for the geodesic arc joining U and V. Thus we can replace X_η by the unique surface of signature (0, 3) with two cusps and a boundary geodesic of length ℓ(η). We get a torus with two cusps that contains a geodesic β and a curve α̃ that intersect three times and such that ℓ(α) ≥ ℓ(α̃). Therefore, the geodesic γ that is homotopic to α̃ intersects the geodesic β (which we rename δ) three times, and max{ℓ(α), ℓ(β)} ≥ max{ℓ(γ), ℓ(δ)}.

Lemma 4.3. Let γ and δ be two simple closed geodesics that intersect three times on a one-holed torus. Then max{ℓ(γ), ℓ(δ)} ≥ 2 arccosh(b*), where b*² = r²(2r − 1)²/(2(r − 1)) for r = (9 + √33)/12, so that 2 arccosh(b*) ≈ 2 arccosh(2.648). This bound is sharp and is reached by a unique once-punctured torus up to isometry.

Proof. We shall use the parameters for the set of isometry classes of one-holed tori found in [5]. Let (r, s, t) be a set of these parameters such that 1 < r ≤ s ≤ t ≤ rs, where r, s and t are the half-traces (hyperbolic cosines of half of the lengths) of the shortest three geodesics ρ, σ and τ = (ρσ)^(−1). (In [5], half-traces are called traces, but we shall continue to use the term half-traces as it is more standard.) Then the geodesics α = ρσ^(−1) and β = τρ^(−1) intersect three times, and α is the fourth shortest simple closed geodesic (see [5] for details). The half-traces of α and β are a = 2rs − t and b = 2rt − s. For a fixed r, max{a, b} = b = 2rt − s is therefore minimal if s = t. In this case 0 = 2rst − r² − s² − t² = 2s²(r − 1) − r², and therefore b² = s²(2r − 1)² = r²(2r − 1)²/(2(r − 1)). But for r > 1, this last quantity is minimal when its derivative with respect to r vanishes, namely at r = (9 + √33)/12 = ¼(3 + √(11/3)), which gives the value b* ≈ 2.648. There is a torus with one cusp on which there are two geodesics of length 2 arccosh(b*) ≈ 2 arccosh(2.648) intersecting three times. Therefore the bound is sharp and is attained by a unique once-punctured torus up to isometry.

It is worth noticing that the torus described in this lemma is not S_1. As mentioned in the proof, its systole length is 2 arccosh(¼(3 + √(11/3))) and not 2 arccosh(√2).
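The minimization in the proof of lemma 4.3 is elementary calculus and can be verified numerically; the sketch below (illustrative code) recomputes the critical point r = (9 + √33)/12 and the resulting bound 2 arccosh(b*).

```python
# Verify the minimization of b(r)^2 = r^2 (2r-1)^2 / (2(r-1)) over r > 1.
# Differentiating the logarithm gives 2/r + 4/(2r-1) - 1/(r-1) = 0,
# i.e. 6r^2 - 9r + 2 = 0, whose root above 1 is r* = (9 + sqrt(33))/12.
import math

def b_squared(r: float) -> float:
    return r**2 * (2*r - 1)**2 / (2*(r - 1))

r_star = (9 + math.sqrt(33)) / 12       # ~1.2287, equals (3 + sqrt(11/3))/4
b_star = math.sqrt(b_squared(r_star))   # ~2.6477
assert b_squared(r_star) < min(b_squared(r_star - 1e-3), b_squared(r_star + 1e-3))
print(f"r* = {r_star:.4f}, b* = {b_star:.4f}, "
      f"2*arccosh(b*) = {2 * math.acosh(b_star):.4f}")
```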
Theorem 4.4. L_3 = 2 arccosh(b*) ≈ 2 arccosh(2.648), and S_3 is the once-punctured torus of lemma 4.3.

Proof. By what precedes, S_3 is a torus with one or two punctures. As in the proof of theorem 3.6, let us suppose that there exists a twice-punctured torus T with two geodesics α and β that intersect three times, both of length less than or equal to 2 arccosh(b*). For the remainder of the proof, denote this constant by k_3.

Figure 11. Three intersections on a twice-punctured torus

Each of α and β is separated into three arcs by the other; denote these paths respectively a_1, a_2 and a_3 for α, and b_1, b_2 and b_3 for β. The pasting condition implies that we are now in the situation illustrated in figure 11. On this figure, two additional simple closed curves have been added, denoted γ_α and γ_β. The curve γ_α is defined as the unique separating simple closed geodesic that does not intersect α and intersects β minimally (twice), and γ_β is defined symmetrically. We will use a rough upper bound on the sum of their lengths. It is easy to see that ℓ(γ_α) + ℓ(γ_β) < 2ℓ(α) + 2ℓ(β). This implies that min{ℓ(γ_α), ℓ(γ_β)} ≤ 2 max{ℓ(α), ℓ(β)} ≤ 2k_3. So far we have made no particular assumptions on α and β, so without loss of generality we can suppose that α is such that ℓ(γ_α) ≤ 2k_3. Denote by α′ the unique simple closed geodesic of T that intersects neither α nor γ_α. Notice that α′ intersects β three times. We shall now find an upper bound on the length of α′. By cutting along α and α′, one obtains two (isometric) pairs of pants. Consider one of them, as in figure 13. We denote by h_α the shortest non-trivial path from α back to itself; the quantity ℓ(h_α)/2 can be bounded using ℓ(γ_α) ≤ 2k_3.

Figure 13. Bounding the length of α′

Consider the lengths l and l′ in figure 13. Once again, we shall make use of the formulas for hyperbolic polygons. Using the hyperbolic trigonometry formulas for a right-angled pentagon, we obtain sinh(ℓ(h_α)/2) sinh(l′) = cosh(ℓ(α′)/2). Using the formulas for a quadrilateral with three right angles and one zero angle, one obtains sinh(l) sinh(ℓ(h_α)/2) = 1. From these equations, and our initial hypothesis on the lengths of α and β, we obtain an explicit upper bound on the length of α′. This implies that the collar width of α′ satisfies w(α′) > 0.25. As β intersects both α and α′ three times, and α and α′ are disjoint, we have ℓ(β) ≥ 6w(α) + 6w(α′). Since ℓ(α) ≤ k_3 gives w(α) ≥ arcsinh(1/sinh(k_3/2)) ≈ 0.397, this yields ℓ(β) > 6(0.397) + 6(0.25) ≈ 3.88 > k_3, contradicting ℓ(β) ≤ k_3. This proves the result.

Concluding remarks

The surfaces S_1 = S_2 and S_3 are specific once-punctured tori. Both admit automorphisms distinct from the hyperelliptic involution. S_1 admits a number of automorphisms, both conformal and anticonformal. Using the main result of [8], S_3 admits an orientation-reversing involution because it can be obtained by pasting a simple closed geodesic with a half-twist, but it does not admit a non-trivial conformal automorphism. This is not so surprising, seeing as there are only two isometry classes of once-punctured tori that admit a non-trivial conformal automorphism, namely S_1 and the torus with largest automorphism group, often called the Modular torus. Finding S_k for k ≥ 4 seems like a difficult problem, but can we say something about the set of S_k? For higher intersection numbers, it is not clear whether or not S_k even has boundary (recall that two simple closed geodesics can fill closed surfaces if they are allowed sufficiently many intersection points). In spite of this remark, it seems reasonable to conjecture that S_k is always a once-punctured torus. Furthermore, due to the existence on S_k of geodesics of equal length, it also seems reasonable to conjecture that the S_k all have non-trivial automorphism groups.
Supposing that the S_k are all once-punctured tori, are they all found in a finite set of isometry classes of once-punctured tori?
Considerations regarding the laboratory testing of electro-hydraulic heave compensators

The unpredictable weather and the movements of floating ships make the marine environment one of the most hostile working environments on the planet. Lifting / lowering movements of vessels / platforms carrying cranes or drilling rigs, caused by the disturbing dynamics of the waves, varying in force, frequency, direction and amplitude, affect the precise positioning of loads, lead to premature wear of the drilling holes, and endanger the integrity or even the lives of the crew members. Due to the resonance in the cable of the load lifting system, caused by the movement of the vessel on a rough sea, the tension in the cable can increase more than 100 times. The safety and protection engineering solution is force compensation, that is, the decoupling of the vertical movement of the ship from that of the floating load. The authors present an experimental model, with a small-scale heave generator and heave compensator, which simulates in the laboratory operating modes of heave compensation systems very close to the real ones existing on ships.

1. Introduction

It has been shown that for large loads (weights) suspended toward the bottom of the ocean, the suspension cable could theoretically break despite being designed with a reasonable static safety coefficient. The reason is the resonance in the cable of the load system, caused by the vertical movement of the vessel produced by waves, which can increase the tension in the cable more than 100 times. Such a cable presents two potential dangers: it may break, causing loss or destruction of the load, and falling close to the crew it can cause serious bodily injury; or it may retract quickly, with oscillations around the normal trajectory of retreat, again causing material damage and bodily harm. The resonance in the cable could be avoided by two simple solutions: carrying out the lifting operations only when the sea is calm, or excessively over-sizing the cable of the lifting system. Both solutions are prohibitively expensive, leading to the extension of lifting operations over additional days or weeks during the rough-sea seasons, and to an increase in the weight of the lifting system and of the ship or platform on which it is mounted. The engineering solution to avoid the resonance in the cable is the decoupling of the vertical movement of the floating ship from the vertical movement of the suspended load. This decoupling of the load from the movement of the ship is commonly known as force compensation. For the safety of the human crew working on rough seas and oceans, for increasing the productivity of floating lifting / drilling installations, and for progressively reducing the dependence of their operation on the agitation state of the waves, compensation systems have been implemented in the structure of the machines in question, and they have steadily evolved: passive compensation systems, active compensation systems and hybrid (active-passive or semi-active) compensation systems. All these types of compensators decouple the loading / unloading movement of the load from the lifting / lowering movement of the ship. The control models for heave compensation systems have evolved into today's predictive semi-active models, which forecast the intensity of the wave agitation.
There are four stages in the development of heave compensators [1]:

Stage I - passive heave compensators (PHCs), which are vibration isolation systems consisting of a shock absorber and a compression spring mounted in parallel, or of a hydro-pneumatic accumulator. They function as open-loop systems, in which the input is the movement of the ship and the output is the reduced amplitude of the movement of the load attached to the floating crane hook. Such compensators do not require outside energy for operation and can have an average efficiency of 10-35%. Passive compensators are ineffective in applications such as transferring a payload from one ship to another, or in compensating heave when passing a load from air to water. In these cases, PHCs are not able to compensate for the relative movement between two vessels with independent movement reference points. For these applications, an active heave compensator must be used.

Stage II - active heave compensators (AHCs), which are automated systems that involve closed-loop control and require outside energy for operation. If the ship is lifted by the heave, the active compensating system is driven by a controller which acts in the opposite direction to lower the load by the same displacement. These systems can have an efficiency of at least 80%.

Stage III - hybrid or semi-active compensators. These arose because of the high costs of the active compensators, which were abandoned although they had a high efficiency. Hybrid compensation systems have two components: a passive one and an active one. The passive component contains two large pneumatic cylinders, charged at a pressure corresponding to keeping the weight in balance at mid-stroke. The active component, much cheaper than in the case of fully active compensators, contains a small hydraulic servo cylinder, which applies the adjusting forces to the load based on an active control strategy.

Stage IV - the most modern control models for hybrid heave compensation systems, which are predictive semi-active models forecasting the intensity of the wave agitation.

Figure 1 shows an example of a small ship which moves a load vertically and uses a passive heave compensator to reduce the movement of the load resulting from the vertical movement of the ship generated by the waves. The compensator isolates the ship from the load, as it is placed between them. Figure 2 shows a ship which rises vertically on an ocean wave. The suspended load under water follows the movement of the vessel, indicating the deactivated state of the active hydraulic compensation system (AHC). With AHC enabled, the load (marked in gray) is kept at a constant depth. In the load transfer arrangement of figure 3, the vessel taking over the load is equipped with a heave amplitude sensor (it measures the tension in the cable and its direction of movement). The cargo transfer vessel is equipped with a winch crane which contains the active heave compensator. It consists of a double-acting differential hydraulic cylinder, a hydraulic positioner, a spring attachment fixed to the crane, which tensions the transducer cable fixed to the first vessel's deck, and a proportional hydraulic valve, which regulates the pressure in the hydraulic cylinder in order to maintain a constant height of the load, suspended by the crane cable, above the deck of the vessel taking over the load. Figure 4 shows a possible hybrid compensation system.
The system contains two passive hydraulic cylinders, each supporting half of the total load weight F_L, and a third, smaller hydraulic cylinder, which is part of an active control loop and can generate an additional tuning force, called F_A. The active cylinder must be capable of moving at maximum load speed in any situation. Since the forces applied to the active cylinder will generally be much lower than those carried by the passive cylinders, it may be physically smaller, requiring a lower feed rate and lower pressure, and therefore less hydraulic power, compared to the hydraulic cylinder of a strictly active compensation system.

2. Experimental tests for an active heave compensator

2.1 Technological facilities

The experimental stand in figure 5 [2] is intended for the laboratory evaluation of the functional performance of active heave compensators, real but reduced in scale, intended for floating cranes and marine drilling rigs. The experimental stand contains a hydraulic servomechanism with an external loop and two internal position adjustment loops. The first internal position adjustment loop acts at the level of a Parker hydraulic servo-cylinder, consisting of cylinder + valve + stroke transducer (figure 6), which simulates the wave agitation and is located at the bottom of an assembly of two identical hydraulic cylinders coupled together. The second internal position adjustment loop acts at the level of a Moog hydraulic servo-cylinder, consisting of cylinder + valve + stroke transducer (figure 7), located at the top of the same assembly, which simulates the dynamic behavior of the active component of a hybrid heave compensation system. The external position adjustment loop ensures that, irrespective of the excitation signal applied to the first servo-cylinder, the second servo-cylinder follows the movement of the first, in the opposite direction and at the same speed, so that the end of its rod remains permanently positioned at the same elevation as a fixed reference plane (the floor of the laboratory, for example). The two servo-cylinders were connected to a pumping group existing in the INOE 2000-IHP Servo Technique laboratory, with the following characteristics: flow: adjustable, 0...120 l/min; pressure: adjustable, 0...310 bar; oil tank volume: 400 l; electric motor power: 55 kW; motor speed: 1500 rev/min. Also used for the tests: a programmable logic controller from Schneider Electric, code TM221CE16U, which generates current commands for the valves and takes information from the transducers; a signal generator; a PC; and a control software application dedicated to the test and data acquisition stand, compatible with the programmable logic controller used.

2.2 Experimental tests

In the control window of the software application installed on the PC managing the tests and the acquisition of experimental results (figure 8), different sinusoidal excitation signals were prescribed for the disruptive servo-cylinder (PID controller, axis 0) and the dynamics of the tracking servo-cylinder was followed (PID controller, axis 1). The signals had frequencies of 0.1 Hz and 0.2 Hz and amplitudes of 17 mm, 31 mm and 34 mm. Six types of sinusoidal signals were used: 0.1 Hz / 17 mm; 0.2 Hz / 17 mm; 0.1 Hz / 31 mm; 0.2 Hz / 31 mm; 0.1 Hz / 34 mm; 0.2 Hz / 34 mm. The smallest tracking error (1%) was recorded for the combination 0.1 Hz / 17 mm.
In figure 8 one can notice that, for the mentioned combination, the tracking servo-cylinder reproduced the excitation signal of the disruptive servo-cylinder sufficiently accurately. The prescribed displacements of the two servo-cylinders are given by relations (1) and (2): a sinusoid of the prescribed amplitude and frequency for the disruptive servo-cylinder, and its opposite for the tracking servo-cylinder. The acquired experimental data were exported to an Excel file and processed graphically (figure 9).

Figure 9. Data export to Excel file; graphic processing.

Following the processing of the data acquired over an interval of 12.5 s, the graphs in figure 10 and figure 11 were obtained. The experimental stand, figure 12, is characterized by: modification of the structure of the mobile assembly and of the fixed assembly, so that they also allow mounting the passive component of the compensator (a variant with a special hydraulic cylinder with pneumatic accumulator, according to the patent [3], a variant with two pneumatic cylinders, or a variant with a hydraulic damper and a spring connected in parallel); the introduction on the experimental stand of a system for lifting / lowering a 20 kg load by cable, consisting of a hydraulic motor controlled by a hydraulic directional valve with electric control, a planetary gearbox, a hoist with two fixed pulleys and a mobile one, another fixed pulley, and a cable winding drum fitted with bearings; the introduction on the experimental stand of a system for keeping the tension in the cable constant when raising and lowering the load, regardless of the disturbances introduced by the heave simulator; replacement of the electrical and control cabinet; replacement of the control software application (for two servo valves and an electrically controlled hydraulic directional valve) with electric control and data acquisition (from the two stroke transducers, from the two load-limiting cable displacement limiters and from the cable tension transducer); hydraulic connection of the experimental stand to its own mobile pumping group; and transformation of the product into a mobile assembly, which can be carried to fairs, exhibitions and workshops for the following demonstrations: vertical, controlled movement of a mass suspended by a cable, precise positioning of the mass, and keeping the cable tension relatively constant (tolerance ± 2%) under simulated vertical wave movements.
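The behavior of the anti-phase tracking loop can be illustrated with a minimal discrete-time simulation. The sketch below is not the stand's actual controller: the PI gains, the 1 kHz sampling and the first-order actuator lag are assumptions made for illustration, so the printed errors will not match the measured 1% figure exactly.

```python
# Minimal sketch of the external tracking loop: axis 0 plays the prescribed
# sinusoid, axis 1 is commanded to -x0 so that the rod end stays level.
# Gains, sample time and actuator time constant are assumed, not measured.
import math

DT  = 1e-3          # sample time [s] (assumed)
TAU = 0.05          # actuator velocity time constant [s] (assumed)
KP, KI = 10.0, 2.0  # assumed PI gains

def peak_tracking_error(freq: float, amp: float, t_end: float = 30.0) -> float:
    x1 = v1 = integ = 0.0
    worst = 0.0
    for k in range(int(t_end / DT)):
        t = k * DT
        x0 = amp * math.sin(2 * math.pi * freq * t)  # disruptive cylinder [mm]
        err = -x0 - x1                               # anti-phase setpoint
        integ += err * DT
        u = KP * err + KI * integ                    # commanded velocity [mm/s]
        v1 += (u - v1) * DT / TAU                    # first-order actuator lag
        x1 += v1 * DT
        if t > 10.0:                                 # skip start-up transient
            worst = max(worst, abs(err))
    return worst

for f, a in [(0.1, 17.0), (0.2, 17.0), (0.1, 31.0), (0.2, 34.0)]:
    e = peak_tracking_error(f, a)
    print(f"{f} Hz / {a:.0f} mm -> peak error {e:.2f} mm ({100 * e / a:.1f}%)")
```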
3. Numerical simulations

Numerical simulations have been carried out in Simcenter Amesim [4], and they show that the efficiency of an active or hybrid compensation system can be identified by means of a test stand composed of two mechanically connected systems: a heave generator and the compensation system (figure 10). The heave generator is an electro-hydraulic servo system, composed mainly of a hydraulic cylinder, an electro-hydraulic servo valve, a linear displacement transducer, a PID controller and a constant-pressure supply system. It can generate different types of "heaves", which are applied to a heave compensator. The compensation system includes a hydraulic motor controlled by an electro-hydraulic servo valve, a planetary gearbox, a pulley that supports the load through a cable, and an incremental encoder which measures the rotation angle of the pulley. Another dedicated controller generates the correction input. Figure 13 shows the Simcenter Amesim simulation network, which includes all the components and super-components used to generate the "heave" and compensate for the cable movement.

The efficiency of this type of compensation system depends on the load mass and the amplitude of the heave. For the nominal mass of the load (250 kg), the amplitude of the movement of the load is reduced by 92%. The easiest way to avoid control stability problems is to use a sharp-edged calibration orifice (0.7 mm), placed between the hydraulic motor ports, which consumes a large part of the valve supply flow. The load (load weight) has the most important influence on the compensation error. For a mass of 250 kg, the maximum error is approximately 85 mm (figure 14). For a double load, the same error reaches 190 mm (figure 15). Over one period of the sinusoidal heave produced by the heave generator, the range of variation of the piston speed of the cylinder is three times greater than the range of variation of the speed of the load (figure 16).

4. Conclusions

The degree of novelty and relevance of these preliminary results, relative to the national and international state of the art, is given by solving the following problems of the field:
- testing under laboratory conditions the parts of the active component of hybrid heave compensation systems intended for floating cranes and drilling installations (active hydraulic servo cylinder, proportional hydraulic directional valve or servo valve, position transducer, controller);
- checking under laboratory conditions the adjustment and control methods dedicated to the active component of hybrid heave compensation systems;
- simulating in the laboratory the heave lifting / lowering motion, with variable amplitudes and frequencies, that is, the actual operating conditions of heave compensators;
- simulating in the laboratory a forecasting program for the wave agitation state, against which the dynamic performance of the active component of hybrid heave compensation systems (response time, stability, positioning error) can be assessed.

A future challenge for the authors is the experimental validation of a hybrid heave compensator on a small-scale demonstrator model, on which an interested public can watch the dynamic performance of the system live.
Cost-utility and budget impact analysis of neoadjuvant dual HER2 targeted therapy for HER2-positive breast cancer in Sri Lanka

This study aimed to assess the cost-utility and budget impact of dual versus single HER2-targeted neoadjuvant therapy for HER2-positive breast cancer in Sri Lanka. A five-health-state Markov model with a lifetime horizon was used to assess the cost-utility of neoadjuvant trastuzumab (T) plus pertuzumab (P) or lapatinib (L), compared to single therapy of T, with chemotherapy (C), from the public healthcare system and societal perspectives. Input parameters were estimated using local data, network meta-analysis, published reports and literature. Costs were adjusted to year 2021 (1 USD = LKR 194.78). The five-year budget impact for the public healthcare system was assessed. Incremental cost-effectiveness ratios from the societal perspective for neoadjuvant LTC plus adjuvant T (strategy 3), neoadjuvant PTC plus adjuvant T (strategy 2), neoadjuvant LTC plus adjuvant LT (strategy 5), and neoadjuvant PTC plus adjuvant PT (strategy 4), compared to neoadjuvant TC plus adjuvant T (strategy 1), were USD 2716, USD 5600, USD 6878, and USD 12,127 per QALY gained, respectively. One GDP per capita (USD 3815) was considered as the cost-effectiveness threshold for the analysis. Even though only the ICER for strategy 3 was cost-effective at this threshold, uncertainty in the efficacy parameter was revealed. For strategy 2, neoadjuvant PTC plus adjuvant T, a 25% reduction of the neoadjuvant regimen cost was required for it to be cost-effective for use in early HER2-positive breast cancer.

Interventions and comparator

The analysis assessed neoadjuvant therapies of dual HER2-targeted agent combinations, primarily dual therapy with pertuzumab (P) plus trastuzumab (T), with chemotherapy (C), compared to the single HER2-targeted agent trastuzumab (T) with chemotherapy (C) as the comparator. An alternative dual therapy of lapatinib (L) plus trastuzumab (T) with C was also considered in the analysis. These dual HER2-targeted treatment regimens in the neoadjuvant phase were followed by two sequences of adjuvant therapy (post-surgery) for 1 year: either continuation of the same dual HER2-targeted agents or single HER2-targeted therapy with trastuzumab. As such, the model considered five neoadjuvant-adjuvant treatment strategies; strategy 1: neoadjuvant TC followed by adjuvant T (the comparator); strategy 2: neoadjuvant PTC followed by adjuvant T; strategy 3: neoadjuvant LTC followed by adjuvant T; strategy 4: neoadjuvant PTC followed by adjuvant PT; and strategy 5: neoadjuvant LTC followed by adjuvant LT. Strategies 2 and 4 were primary intervention regimens with P-containing dual therapy, and strategies 3 and 5 were alternative intervention regimens with L-containing dual therapy. Strategies 4 and 5 had the same dual HER2-targeted therapy regimen in the neoadjuvant phase continued in the adjuvant phase.

Target population

The patients included were women with locally advanced, inflammatory or early HER2-positive breast cancer, eligible for neoadjuvant treatment [i.e., tumor diameter ≥ 2 cm or with positive axillary lymph nodes ≥ N1] in accordance with treatment guideline recommendations 23,24,32. The age of the patients entering the model was 50 years (the mean age of breast cancer patients according to recent surveys conducted in Sri Lanka) 6,7,30,33.

Model overview

Our cost-utility analysis was conducted using a five-state Markov model including event free, locoregional recurrence, metastasis, remission, and death (Fig. 1).
The model was developed considering the six-state model commonly used in previous economic evaluation studies 25,34-36, and was verified with clinical expert opinion based on current practice in Sri Lanka. Although the treatment cycle length of the neoadjuvant therapy strategies is 3 weeks, the cycle length used in our model was 1 month, to be able to capture the effect of treatment. This is similar to the cycle length used in previous economic evaluations of neoadjuvant HER2-targeted therapy 25,26,36. The model was run up to the completion of 100 years of the patient's age, to represent a lifetime horizon. All costs and health outcomes were discounted at 3% annually 37.

It was assumed that the chemotherapy regimens used in all intervention arms were similar and did not have an effect on the final outcomes. All patients who developed locoregional recurrence were assumed to develop it only once, and a failure of treatment or a second locoregional recurrence was assumed to be treated similarly to distant metastasis. All patients with locoregional recurrence who responded to treatment moved to the remission state, and those who remained in the remission state were assumed to be disease free. The death state included both breast cancer related and non-breast cancer related deaths, and death rates were the same as those of the general population if disease free.

Transition probabilities

The transition probabilities for progression to the different health states of the model for the treatment cohort of HER2-positive breast cancer patients were estimated from published studies (Table 1) 8,11,36,38-42. The transition probabilities from event free (i.e., no recurrences) to events (i.e., locoregional or metastatic recurrence) were estimated using survival data extracted from a published randomized controlled trial based on the comparator regimen 8. For strategies with dual targeted agents but different adjuvant therapies, survival data from trials with the respective adjuvant therapy 11,40 were used for the estimation. The age-matched all-cause mortality was estimated using the Sri Lankan life tables 43.

Efficacy

The efficacy parameters used in the model were derived from a systematic review and network meta-analysis (NMA) (Table 1) 21. The maximum duration of the effect of the different neoadjuvant therapies was assumed to be up to 12 years, based on the maximum duration of available survival data for the regimens derived from the NMA studies 11,21.

Utilities

The outcome measure for this analysis was quality-adjusted life years (QALYs), which are LYs weighted by utility values. A systematic review was conducted to identify the most appropriate utility values for the respective health states. The values were sourced from health-related quality of life (HRQL) studies from Asian countries that used the EuroQol five-dimension five-level (EQ-5D-5L) questionnaire and reported utility values among breast cancer patients on HER2-targeted treatment (Table 1) 44,45.

Costs

The cost parameters consisted of direct medical and direct non-medical costs associated with the neoadjuvant treatment phase and each health state for the societal perspective, and included only direct medical costs for the public healthcare system perspective (Table 1). Indirect costs of patients were excluded from this analysis to prevent double counting. All costs were converted to year 2021 values using the Consumer Price Index 51,52 and the 2021 exchange rate (LKR 194.78 = 1 USD) 53.
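The mechanics of such a five-state Markov cohort model can be sketched in a few lines of code. The transition matrix, utilities and state costs below are placeholders for illustration only (the study's actual inputs are in Table 1); the monthly cycle and 3% annual discounting follow the description above.

```python
# Five-state monthly Markov cohort sketch; all numeric inputs are placeholders.
import numpy as np

STATES = ["event free", "locoregional", "metastasis", "remission", "death"]
P = np.array([                        # monthly transition matrix (rows sum to 1)
    [0.990, 0.004, 0.004, 0.000, 0.002],
    [0.000, 0.900, 0.050, 0.045, 0.005],
    [0.000, 0.000, 0.960, 0.000, 0.040],
    [0.000, 0.000, 0.006, 0.990, 0.004],
    [0.000, 0.000, 0.000, 0.000, 1.000],  # death is absorbing
])
utility = np.array([0.85, 0.70, 0.50, 0.80, 0.00])      # per-state utilities
cost    = np.array([200.0, 900.0, 1500.0, 150.0, 0.0])  # monthly costs (USD)

def run_cohort(months: int = 600, annual_disc: float = 0.03):
    d = (1 + annual_disc) ** (1 / 12) - 1        # monthly discount rate
    dist = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # cohort starts event free
    qalys = costs = 0.0
    for m in range(months):
        w = 1.0 / (1 + d) ** m
        qalys += w * (dist @ utility) / 12.0     # QALYs accrue per month
        costs += w * (dist @ cost)
        dist = dist @ P
    return qalys, costs

q, c = run_cohort()
print(f"discounted QALYs = {q:.2f}, discounted cost = {c:,.0f}")
```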
Direct medical costs comprised initial diagnosis, treatment, hospitalization, clinic visit, and follow-up costs. The unit costs of hospitalization, clinic visits, diagnostics and procedures (i.e., surgery, radiotherapy) were estimated from the published literature and from data available from state sector hospitals 46-48. The unit costs of medicines were taken from the database of the Sri Lanka Medical Supplies Division (MSD), Ministry of Health. All regimen dosages were based on therapeutic guidelines and the dosages used in clinical trials 8,13,17,23,32,38. The weight and body surface area used for the dose calculations of the treatment regimens were based on the average weight and height of Sri Lankan females extracted from the published literature 54. The quantities of units consumed for the cost items were assigned based on the standard care provided for patients in Sri Lanka 8,13,17,23,30,32,38,55-57 and on clinical expert opinion. Direct non-medical costs included transportation costs, the cost of an accompanying caregiver, food and other expenses during hospital visits, and the costs of a paid caregiver, which were extracted from previous costing studies in Sri Lanka 48,49.

Base-case analysis

The incremental cost-effectiveness ratios (ICERs) were estimated for each of the four strategies of dual HER2-targeted therapy regimens compared to the single HER2-targeted therapy regimen. One GDP per capita was used as the willingness-to-pay (WTP) threshold for this study, in accordance with the WHO guidance 59,60.

Scenario analysis

The study considered two separate scenarios based on the variation in the unit cost of trastuzumab among the products currently used in state sector hospitals. Cost parameters estimated for the health states using the highest and the lowest trastuzumab unit cost (see Supplementary Table S1) were used as input parameters to assess the cost-effectiveness of each of the regimens.

Uncertainty analysis

Both one-way and probabilistic sensitivity analyses (PSA) were performed. One-way deterministic sensitivity analysis was performed by changing a single parameter between its upper and lower limits while the others remained constant; the results are presented in a tornado diagram. PSA was performed using Monte Carlo simulation replicated for 1,000 iterations. Cost-effectiveness planes and cost-effectiveness acceptability curves (CEACs) were constructed to show the probability of a treatment being the most cost-effective at a given cost-effectiveness threshold. The standard error of each parameter was estimated as 10% of its value, except for the discount rates for costs and outcomes and the hazard ratios of efficacy.

Budget impact analysis

The budget impact over five fiscal years was estimated from the public healthcare system perspective, based on the results of the Markov model. Year one included the estimated budget for the implementation of treatment for new and prevalent cases. The prevalence and incidence of HER2-positive breast cancer were taken from the published literature and cancer registry data from Sri Lanka 4,30,50. The budget impact was estimated for coverage rates of 60% and 20%, based on the use of HER2-targeted therapy and of neoadjuvant treatment in breast cancer 30.
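The budget impact arithmetic reduces to multiplying treated patient counts by coverage and by the per-patient incremental cost; a sketch follows, in which the case counts and unit cost are placeholders, while the 60% and 20% coverage rates are the figures stated above.

```python
# Five-year budget impact sketch; case counts and unit cost are placeholders.
INCIDENT_CASES  = 1000            # new HER2-positive cases per year (placeholder)
PREVALENT_CASES = 400             # prevalent cases added in year 1 (placeholder)
COVERAGE = 0.60 * 0.20            # HER2-targeted therapy use x neoadjuvant use
COST_PER_PATIENT_LKR = 4_000_000  # incremental cost per treated patient (placeholder)

for year in range(1, 6):
    cases = INCIDENT_CASES + (PREVALENT_CASES if year == 1 else 0)
    treated = cases * COVERAGE
    print(f"year {year}: {treated:.0f} treated patients, "
          f"budget LKR {treated * COST_PER_PATIENT_LKR:,.0f}")
```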
Ethical statement
This study assessed the cost-utility and budget impact of neoadjuvant treatment with HER2-targeted medicines in the treatment of HER2-positive breast cancer in Sri Lanka using retrospective data collected from databases and relevant documents. Therefore, ethical approval for this study was granted as an exemption review and informed consent was waived by the Institutional Review Board of the Faculty of Dentistry and Faculty of Pharmacy, Mahidol University, Thailand (COE. No. MU-DT/PY-IRB2022/015.0203), and the Ethics Review Committee of the Faculty of Medicine, Kotelawala Defence University, Sri Lanka (RP/2022/09). Furthermore, permissions were granted from the relevant organizations for access to data. All methods were performed in accordance with the relevant guidelines and regulations.

Cost-effectiveness analysis
According to our findings, as presented in Table 2, all four intervention strategies (strategies 2-5) of dual HER2-targeted neoadjuvant therapy showed higher outcomes (i.e., LYs, QALYs) and higher total costs than the comparator (strategy 1) with single HER2-targeted therapy (T). The incremental QALYs gained for strategies 2 and 4, which included PT dual therapy, compared to the strategy 1 comparator (neoadjuvant TC followed by adjuvant T) were 1.86 and 3.35 respectively, while the incremental costs of treatment were also comparatively higher for strategies 2 and 4. On average, strategy 4 contributed the highest outcomes of 12.75 LYs and 10.62 QALYs, as well as the highest lifetime costs of LKR 10,735,582 (or USD 55,116) per patient from the public healthcare system perspective. The lowest ICER was for strategy 3 (neoadjuvant LTC followed by adjuvant T), with LKR 512,240 or USD 2,630 per QALY gained from the public healthcare system perspective and LKR 529,117 or USD 2,716 per QALY gained from the societal perspective. The second lowest ICER was for strategy 2 (neoadjuvant PTC followed by adjuvant T), with LKR 1,074,254 or USD 5,515 per QALY gained from the public healthcare system perspective and LKR 1,090,863 or USD 5,600 per QALY gained from the societal perspective.

Overall, the strategies that included dual HER2-targeted therapy in the neoadjuvant phase only (i.e., strategies 2 and 3) had lower ICERs than the regimens with dual HER2-targeted therapy in both the neoadjuvant and adjuvant phases (i.e., strategies 4 and 5). When every strategy was arranged in ascending order of total lifetime costs, the only regimen with an ICER below the 1-GDP per capita willingness-to-pay threshold for Sri Lanka (LKR 758,680 or USD 3,815) was strategy 3 compared to the comparator in the base-case analysis. However, the incremental analysis showed that strategy 2 (neoadjuvant PTC followed by adjuvant T) had a 0.92 incremental QALY gain compared to strategy 5, despite strategy 5 including dual LT therapy in both the neoadjuvant and adjuvant phases. Strategy 2 compared to strategy 5 also had comparatively lower incremental costs, at LKR 837,756 (USD 4,301) per QALY gained (Table 2), which was only 11-13% higher than the 1-GDP per capita threshold. The comparison of strategy 4 to strategy 2 also showed higher outcomes (1.48 incremental QALYs); however, this comparison was not cost-effective, with an ICER of LKR 3,958,902 (USD 20,325) per QALY gained from the societal perspective (Table 2).
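The incremental (frontier) analysis described above — ordering strategies by total lifetime cost, removing dominated options, and computing each ICER against the next non-dominated strategy — can be sketched in a few lines of Python. The strategy costs and QALYs used in the example call are placeholders, not the values from Table 2.

```python
def incremental_analysis(strategies):
    """strategies: list of (name, total_cost, total_qalys).
    Returns (name, icer_vs_previous_non_dominated) pairs after removing
    strictly and extendedly dominated options."""
    ordered = sorted(strategies, key=lambda s: s[1])       # ascending total cost
    frontier = [ordered[0]]
    for s in ordered[1:]:
        # strictly dominated: costs more but yields no extra QALYs
        if s[2] <= frontier[-1][2]:
            continue
        frontier.append(s)
        # extended dominance: drop middle options with a higher ICER than the next one
        while len(frontier) >= 3:
            a, b, c = frontier[-3], frontier[-2], frontier[-1]
            icer_ab = (b[1] - a[1]) / (b[2] - a[2])
            icer_bc = (c[1] - b[1]) / (c[2] - b[2])
            if icer_ab > icer_bc:
                frontier.pop(-2)
            else:
                break
    return [(cur[0], (cur[1] - prev[1]) / (cur[2] - prev[2]))
            for prev, cur in zip(frontier, frontier[1:])]

# Hypothetical inputs (LKR, QALYs) purely for illustration.
print(incremental_analysis([
    ("strategy 1", 3.0e6, 7.3), ("strategy 3", 3.9e6, 9.0),
    ("strategy 2", 5.0e6, 9.2), ("strategy 5", 4.6e6, 8.3),
    ("strategy 4", 8.5e6, 10.6),
]))
```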
In the scenario analysis, where cost estimation was based on the lowest and highest unit costs of trastuzumab instead of the average unit cost used in the base-case analysis, scenario 1 (highest unit cost) increased the total lifetime costs by 33%, 19%, 30%, 8% and 23%, and scenario 2 (lowest unit cost) decreased the total lifetime costs by only 10%, 6%, 9%, 3% and 7% from the public healthcare system perspective, for strategies 1 to 5 respectively. However, the incremental costs and ICERs for the intervention strategies in both scenarios showed minimal changes in comparison to the base case (see Supplementary Fig. S1).

Threshold sensitivity analysis
Threshold analysis was conducted for strategies 2 and 4, among the three strategies (2, 4 and 5) which were not cost-effective at the 1 GDP per capita threshold, to assess whether these interventions would achieve cost-effectiveness with a reduction in treatment costs. Accordingly, a cost reduction of at least 25% of the neoadjuvant treatment for strategy 2 was required for it to be cost-effective compared to strategy 1, from the societal perspective at the 1 GDP per capita threshold (see Supplementary Figure S2). However, strategy 4 would not be cost-effective even with a 100% reduction of the neoadjuvant treatment cost alone, while a 25% cost reduction of the neoadjuvant treatment coupled with a 70% reduction of the adjuvant treatment costs would render strategy 4 cost-effective (see Supplementary Figure S2).

Deterministic sensitivity analysis (DSA)
According to the one-way DSA, the 10 parameters that resulted in the greatest changes in the ICERs for strategies 2 to 5 are provided in the supplementary material (see Supplementary Figure S3). The parameters to which the model was sensitive in all four strategies included the hazard ratios for efficacy, the direct medical costs of the neoadjuvant phase, the direct medical cost of the event-free state, the discount rates for costs and outcomes, and the transition probabilities from event-free to event. It is noteworthy that, according to the findings of the one-way DSA, the results were favourable for strategy 1 with single trastuzumab therapy in the comparisons with lapatinib dual therapy (i.e., strategies 3 and 5), as the interventions with lapatinib compared to the comparator (strategy 1) had a wide range of hazard ratios with non-significant 95% CIs. At the upper limit of efficacy, this resulted in ICERs with a negative percentage change for the interventions with lapatinib.

Probabilistic sensitivity analysis (PSA)
The cost-effectiveness results of the PSA were similar to those of the base-case analysis and are further illustrated in a scatter plot (Fig. 2). While the majority of the results for the dual HER2-targeted strategies were more effective and more costly and fell in the northeast quadrant, the incremental QALY gain was higher for dual HER2-targeted therapy with PT (i.e., strategies 2 and 4) compared to strategy 1. However, the incremental cost was comparatively higher for strategy 4 compared to all other strategies. As for strategies 3 and 5 with LT, strategy 3 was not dominant compared to strategy 1 with single trastuzumab therapy, and both strategies were dominated by strategy 2 with neoadjuvant PT followed by adjuvant T therapy. Furthermore, according to the cost-effectiveness acceptability curves, the probability of cost-effectiveness was 55% and 10% respectively for strategy 3 and strategy 2 at the 1 GDP per capita threshold. However, the probability of cost-effectiveness increased to 52% for strategy 2 when the threshold increased to 150,000 LKR (Fig. 3).
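A cost-effectiveness acceptability curve of the kind reported in Fig. 3 is simply the share of PSA iterations in which a strategy has the highest net monetary benefit at each candidate threshold. A minimal sketch, using placeholder PSA draws (any arrays of simulated costs and QALYs per strategy would do), might look like this.

```python
import numpy as np

def ceac(costs, qalys, thresholds):
    """costs, qalys: dicts mapping strategy name -> array of PSA draws (same length).
    Returns {strategy: list of probabilities of having the highest net monetary
    benefit}, one probability per threshold."""
    names = list(costs)
    out = {name: [] for name in names}
    for wtp in thresholds:
        # net monetary benefit per iteration: NMB = WTP * QALYs - cost
        nmb = np.column_stack([wtp * qalys[n] - costs[n] for n in names])
        best = nmb.argmax(axis=1)
        for i, name in enumerate(names):
            out[name].append(np.mean(best == i))
    return out

# Hypothetical draws for two strategies, for illustration only.
rng = np.random.default_rng(1)
costs = {"strategy 1": rng.normal(3.0e6, 3e5, 1000),
         "strategy 3": rng.normal(3.9e6, 4e5, 1000)}
qalys = {"strategy 1": rng.normal(7.3, 0.7, 1000),
         "strategy 3": rng.normal(9.0, 0.9, 1000)}
thresholds = np.linspace(0, 2_000_000, 21)   # LKR per QALY
curves = ceac(costs, qalys, thresholds)
```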
Budget impact analysis
In the first year of implementation, the budget impact for the interventions in comparison to the single HER2-targeted therapy regimen (strategy 1) was 2.78, 1.25, 7.73 and 2.09-fold higher for strategies 2, 3, 4 and 5 respectively. From year 2 to year 5, the budget was around twofold higher for strategy 2 compared to the strategy 1 regimen (Table 3). The first-year incremental budget was LKR 161 million (USD 0.82 million) and LKR 482 million (USD 2.47 million) for strategy 3, and LKR 1,145 million (USD 5.88 million) and LKR 3,435 million (USD 17.64 million) for strategy 2, at 20% coverage and 60% coverage respectively (Table 3).

Discussion
Our study explored the cost-effectiveness and budget impact of adding dual HER2-targeted agents to single HER2-targeted therapy with trastuzumab (T) in the neoadjuvant treatment of early HER2-positive breast cancer in Sri Lanka. Among the dual HER2-targeted therapy strategies, the neoadjuvant PTC followed by adjuvant T regimen (strategy 2) had a higher incremental outcome gain (QALYs, LYs) compared to the other strategies without pertuzumab. However, strategy 2 would be cost-effective compared to single trastuzumab therapy only if the cost-effectiveness threshold (CE threshold) were two times the GDP per capita of Sri Lanka. The findings from the threshold analysis indicated that a 25% cost reduction of the neoadjuvant treatment could make neoadjuvant PTC followed by adjuvant T (strategy 2) cost-effective at a 1 GDP per capita CE threshold for Sri Lanka from the societal perspective. Even though strategy 4 (neoadjuvant PTC followed by adjuvant PT) had the highest outcomes, the incremental analysis revealed that the comparison of strategy 2 to strategy 4 had the highest cost per QALY gained when comparing the dual HER2-targeted regimen strategies to each other. Furthermore, dual HER2-targeted regimens provided in both the neoadjuvant and adjuvant phases tended not to be cost-effective. Based on new long-term survival and meta-analysis results for LT therapy 11,21,61, the cost-effectiveness of alternative lapatinib-containing regimens was assessed in this study. The findings show that neoadjuvant LTC followed by adjuvant T (strategy 3) had the lowest ICER, which was below the 1 GDP per capita CE threshold. However, the sensitivity analysis revealed that, due to the uncertainty in the efficacy of lapatinib-containing regimens, the results would favour strategy 1 (neoadjuvant TC followed by adjuvant T). Our study also found that the budget impact on the public healthcare system of Sri Lanka of including dual HER2-targeted therapy in the neoadjuvant phase, including strategy 2 and strategy 3, would be relatively high compared to the single HER2-targeted therapy strategy of neoadjuvant TC followed by adjuvant T.
To our knowledge, this is the first economic evaluation study in Sri Lanka to compare the costs and utilities of neoadjuvant therapy with dual HER2-targeted regimens for HER2-positive breast cancer patients. At the time of this analysis, very few economic evaluations had explored dual HER2-targeted neoadjuvant regimens in breast cancer in a lower-MIC context. Our economic evaluation was also conducted using pooled efficacy with long-term survival data for neoadjuvant regimens, including for neoadjuvant treatment with HER2-targeted therapy, exploring the regimens that contained lapatinib and the continuation of the same dual HER2-targeted regimens in the neoadjuvant and adjuvant phases. Previous economic evaluation studies conducted in high income countries (e.g., Canada and USA) and in upper-MICs (e.g., China) reported that the regimen of neoadjuvant PTC followed by adjuvant T (strategy 2 of our study) tended to be cost-effective [26][27][28][29]35,62 in their country contexts, where the WTP or CE thresholds were much higher than the threshold applied for Sri Lanka. However, those studies also reported higher ICER values and higher costs compared to our findings 25,26,28,29,35. It was noted that previous studies, in contrast to ours, also used efficacy parameters for the strategies based directly on the clinical trial 13 results. The differences in the reported total health outcome gain may have been due to limitations in survival and follow-up data and the effects of crossover. Furthermore, some previous economic evaluations included novel treatments such as trastuzumab emtansine (T-DM1) and T-DM1 plus pertuzumab in the adjuvant therapy treatment sequences 27,62, which were not available in Sri Lanka at the time of the analysis. Additionally, a few studies explored the cost-effectiveness for hormone receptor negative populations 27,62.

Results from the uncertainty analysis were sensitive to the direct medical costs of the neoadjuvant and adjuvant treatment phases. In our study, we calculated cost parameters using multiple sources representing standard costs and utilization of services, which may have caused an underestimation of costs. Hence, more comprehensive costing studies of the public healthcare system in Sri Lanka would be beneficial to provide more up-to-date information for economic evaluations. However, the model showed relatively minimal sensitivity to many parameters, supporting the robustness of the results.

Our study findings demonstrate that, even though PT dual therapy as neoadjuvant treatment is a better choice compared to single HER2-targeted therapy, the ICER is considerably higher for the regimen, which results in a higher budget impact. Hence, it could be challenging for lower-MICs such as Sri Lanka to increase funding for the therapy in the public healthcare system. Although price reductions seem to be an option for the possible use of PT dual therapy in the neoadjuvant phase alone, a variety of negotiation approaches could be considered for high-cost interventions (i.e., cancer therapy), such as managed entry agreements, risk-sharing agreements, or a special access scheme for cancer treatments 63. Moreover, compulsory and voluntary licensing are other options that are also being implemented for certain cancer medicines, especially to improve access to cancer treatment in lower-MICs 64,65.
Table 3. Budget impact of dual HER2-targeted therapy regimens. C Chemotherapy, L Lapatinib, P Pertuzumab, T Trastuzumab. Strategy 1: Neoadjuvant TC followed by adjuvant T; Strategy 2: Neoadjuvant PTC followed by adjuvant T; Strategy 3: Neoadjuvant LTC followed by adjuvant T; Strategy 4: Neoadjuvant PTC followed by adjuvant PT; Strategy 5: Neoadjuvant LTC followed by adjuvant LT.

In consideration of the downturn of the Sri Lankan economy in 2022, the year-on-year headline inflation increased by over 60% 66 by the end of 2022 relative to 2021, and the national consumer price index increased by 56% 67, accompanied by the depreciation of the Sri Lankan Rupee. This likely contributed to a rise in direct medical and non-medical costs. However, the cost-effectiveness of the interventions in this study was assessed based on costs estimated for the year 2021. Hence, considering the final results based on USD values will be more appropriate for understanding present costs in the current year for the Sri Lankan setting. Some of the limitations of our study were, firstly, that novel therapies such as T-DM1 were not included, as the analysis focused on dual therapies registered in Sri Lanka at the time of the study. Secondly, there were limitations in the availability of evidence on effectiveness and survival, along with a shorter duration of follow-up for some dual therapy regimens. However, a systematic review and network meta-analysis with mixed-effects parametric analysis 21 was performed to synthesize up-to-date evidence of treatment efficacy, and the best available evidence was used for clinical parameter estimation. Thirdly, there were limitations in the data sources for the cost parameter estimations. However, the cost estimations of this study were based on the standard costs for treatment in the country, in line with current clinical practice in Sri Lanka. Additionally, the uncertainty of parameters was addressed by conducting one-way and probabilistic sensitivity analyses, and the results were not considerably different in terms of the final conclusions. Nevertheless, our findings should be used and generalized with caution due to the uncertainty of some parameters used in the model. Lastly, Sri Lanka currently does not have a CE threshold set for economic evaluations, and thus 1 GDP per capita of Sri Lanka (2021) was used as the threshold for this analysis. However, this may not adequately reflect the country's WTP for a QALY gained.

Conclusions
The dual HER2-targeted regimens that included pertuzumab showed higher health outcomes compared to the single trastuzumab regimen; nevertheless, for the regimen of neoadjuvant PTC followed by adjuvant T to be cost-effective for the Sri Lankan public healthcare system, a cost reduction of the neoadjuvant therapy would need to be arranged.

Figure 1. Schematic diagram of the Markov model.
Table 1. Parameters of the model and the sampling distributions for the probabilistic sensitivity analysis.
C Chemotherapy, DMC Direct medical cost, DNMC Direct non-medical cost, EF Event free, L Lapatinib, P Pertuzumab, T Trastuzumab, TP Transition probability. Strategy 1: Neoadjuvant TC followed by adjuvant T; Strategy 2: Neoadjuvant PTC followed by adjuvant T; Strategy 3: Neoadjuvant LTC followed by adjuvant T; Strategy 4: Neoadjuvant PTC followed by adjuvant PT; Strategy 5: Neoadjuvant LTC followed by adjuvant LT.
Analysing the Spatio-Temporal Variations of Urban Street Summer Solar Radiation through Historical Street View Images: A Case Study of Shanghai, China : Understanding solar radiation in urban street spaces is crucial for comprehending residents’ environmental experiences and enhancing their quality of life. However, existing studies rarely focus on the patterns of urban street solar radiation over time and across different urban and suburban areas. In this study, street view images from the summers of 2013 and 2019 in Shanghai were used to calculate solar radiation in urban street spaces. The results show a general decrease in street solar radiation in 2019 compared to 2013, with an average drop of 12.34%. The decrease was most significant in October (13.47%) and least in May (11.71%). In terms of solar radiation data gathered from street view sampling points, 76.57% showed a decrease, while 23.43% showed an increase. Spatially, solar radiation decreased by 79.66% for every additional 1.5 km from the city centre. In summary, solar radiation generally shows a decreasing trend, with significant variations between different areas. These findings are vitally important for guiding urban planning, optimising green infrastructure, and enhancing the urban ecological environment, further promoting sustainable urban development and improving residents’ quality of life. Introduction The process of urbanisation facilitates rapid changes in cities within a short period, and the study of changes in the built environment of streets and related elements has increasingly attracted attention [1].The streets of a city not only connect the physical components of the built environment but also permeate the daily activities and life scenes of urban residents [2].Urban streets serve as carriers of natural spaces in the city, providing residents with venues for various activities, socialising, and leisure [3].Solar radiation in streets, as a key factor, significantly impacts the urban climate, energy consumption, and residents' quality of life.The level of solar radiation in the urban street environment directly affects individual outdoor activity levels and the utilisation of urban spaces [4][5][6][7].Moreover, due to its many influencing factors, the urban street environment often creates unique microclimates, which directly affect both the thermal comfort of the street environment and the solar radiation in urban streets [8].Therefore, assessing solar radiation changes in urban streets based on the built environment plays a significant role in enhancing the urban environment and the quality of life for residents. 
In summer, poor urban street thermal environments can have a significant negative impact on outdoor activities and the efficiency of outdoor work for city residents.Excessively cold outdoor urban environments can inconvenience city dwellers, while strong solar radiation in summer can greatly limit their outdoor activities [7,9].Research indicates that approximately one-fifth of natural disasters in the United States each year are caused by extreme high temperatures [10].In the context of global warming, many cities are expected to experience more severe extreme high-temperature environments, which could lead to more serious natural disasters and consequences.One important environmental parameter affecting the thermal comfort of urban streets is the amount of solar radiation entering them [9,11].However, the effect of solar radiation on pedestrian thermal perception differs between summer and winter.In winter, pedestrians prefer to be exposed to sunlight [5,9], but in summer, solar radiation is often considered uncomfortable, directly affecting pedestrians' experiences on urban streets in summer.This is particularly important in countries and cities located in cold and temperate zones [6].As urbanisation and urban planning progress, humanising urban spaces has become a focus, making the reduction of summer solar radiation in urban street design increasingly significant. Solar radiation refers to the energy emitted from the sun in the form of electromagnetic waves that radiate onto the Earth [12].This radiant energy is fundamental for all weather systems and biological survival on Earth, playing a crucial role in maintaining the Earth's surface temperature and lighting conditions.Solar radiation primarily consists of direct radiation and scattered radiation.Direct radiation is the radiation that reaches the Earth's surface directly from the sun, without being significantly scattered or absorbed by the atmosphere.Scattered radiation, on the other hand, occurs as sunlight passes through the Earth's atmosphere and is scattered by atmospheric particles such as gas molecules, water droplets, and dust, thereby changing the direction of light propagation.These two forms of radiation together determine the total amount of solar radiation received at the Earth's surface, thereby affecting the temperature of the surface and the adjacent air and forming specific microclimatic conditions. 
The impact of solar radiation on urban microclimates is multifaceted and complex.An urban microclimate refers to the climate variation within a city, as compared to the surrounding suburbs, over a small area.This variation is primarily caused by factors such as the city's architectural layout, materials, the degree of vegetation cover, and human activities [13].As one of the key natural factors affecting urban microclimates, the influence of solar radiation is mainly manifested in the following aspects: Urban Heat Island Effect: In cities, a large number of buildings and artificial surfaces (such as concrete and asphalt) absorb and store solar radiation energy, resulting in higher temperatures in urban areas than in surrounding rural areas.This temperature difference not only affects the comfort of urban residents but also increases the energy consumption and air conditioning load of the city [14].Surface and Air Temperature: On sunny days, the ground absorbs solar radiation and heats up, which, through heat conduction and convection, heats the air, leading to an increase in temperature.The uneven distribution of solar radiation also leads to temperature differences between different areas within the city [15].Human Comfort and Outdoor Activities: The intensity of solar radiation directly affects the thermal comfort of urban residents and their choices of outdoor activities.In summer, strong solar radiation may cause the outdoor environment to overheat, reducing the comfort and willingness of people to engage in outdoor activities.In winter, suitable solar radiation can increase the frequency of outdoor space usage, improving people's comfort [16].Regulatory Role of Green Infrastructure: Urban green infrastructure, such as parks, green belts, and rooftop gardens, can regulate the effects of solar radiation by providing shade and through the transpiration of plants, thereby reducing surface temperatures, alleviating the urban heat island effect, and improving the urban microclimate.Furthermore, suitable greenery arrangements can optimise the utilisation of solar radiation, providing a more comfortable outdoor environment for the city [17]. 
However, due to the complex mechanisms between solar radiation and urban development [18], there remains a significant research gap.Firstly, previous studies on urban solar radiation have focused on measurements in special or key areas [19], with little attention given to the imbalances in development within and outside cities.Secondly, discussions and observations of the characteristic distribution of solar radiation in urban streets are often limited to the same temporal cross-section [20].However, street view images possess temporal attributes, satisfying the basic conditions for discussing multidimensional temporal cross-sections [21].Relying solely on remote sensing data to simulate solar radiation in urban street canyons is challenging due to its specificity in modelling direct solar radiation to the ground.Moreover, street view images are captured from the perspective of the street, simulating the first-person view of a pedestrian.Thirdly, many urban models overly simplify the spatial geometric morphology within urban street canyons [8], often excluding the canopy of street trees, leading to an inability to incorporate the impact of urban street canyon spatial geometry on direct solar radiation and the thermal environment into calculations.To address these research gaps, we propose three research questions based on measurements of solar radiation changes in urban streets and the distribution inside and outside cities. 1. How can the distribution of urban street solar radiation over different temporal cross-sections be calculated using urban street view images from different years? 2. What is the overall trend of changes in urban street solar radiation over time? 3. Does the variation in urban street solar radiation in the inner and outer parts of a city exhibit consistency? To address these research questions and explore the spatio-temporal distribution of urban street solar radiation, this study employed an analytical framework based on multi-year street view data.Firstly, the street view data underwent preprocessing to select images that met the specified criteria.Then, using deep learning for the semantic segmentation of fisheye street view images, solar radiation was calculated by overlaying the solar trajectory.Finally, the temporal and spatial characteristics of solar radiation were analysed and discussed.The changes in Shanghai were analysed as a case study.Another major contribution of this research is the proposal of a new method for efficiently exploring the spatio-temporal distribution of solar radiation using multi-year street view data. Time Series Street View Research Using street view images for urban space assessment is a popular method currently.Street view images offer a unique perspective of pedestrian activities, characterised by their wide coverage and detailed spatial acquisition.They have been employed in studies of urban environments and phenomena at various scales [22].Street view images are typically processed using datasets from autonomous vehicle driving, as they share similar application scenarios in identifying built environment objects on roads.Examples include the ADE20K dataset [23] or the Cityscapes dataset [24]. 
Street view imagery is utilised for interpreting urban phenomena.It allows for a comprehensive assessment of green metrics in cities, such as the structure and quantity of urban greenery, through the detailed distribution of trees, shrubs, and herbs identified within the images.This imagery provides constructive suggestions for urban greening projects [25].When combined with computerised object detection technologies, street view imagery also finds multifaceted applications in measuring the sky, including assessing sky openness [26], calculating the solar reflectivity of building façades [27], and measuring solar radiation.Additionally, street view imagery is extensively used in identifying economic indices [28,29] and traffic conditions [30,31], and in conjunction with other data types. Street view data collected over multiple years can record the physical environment of city streets at different times, which is instrumental in providing strong reference points for urban planning and policy-making [32].Secondly, by comparing street view data from different years, one can directly observe changes in city streets in terms of functional layout [33], green coverage [34], gentrification processes [35], and seasonal variations [36].This reveals the evolutionary characteristics of streets during urban development and factors influencing urban renewal.Comparing street environments before and after policy implementation aids in understanding the effectiveness and sustainability of these policies, and offers valuable suggestions for policy optimisation [37].Multi-year street view data can also reveal spatial differences and evolutionary processes in solar radiation across different areas, helping to identify priority areas for urban street planning and providing a basis for targeted environmental interventions by urban planning and management departments. Solar Radiation in Urban Areas Solar radiation in urban areas is primarily discussed in terms of spatial distribution, influencing factors, and trends.Some studies focus on the spatial distribution characteristics of urban solar radiation, such as the impact of high-rise buildings, green coverage, and urban morphology.Research indicates that direct solar radiation in urban streets is often influenced by the tree cover ratio, geometric features of the streets, and the urban street network [5,6,8,38].Streets with a higher height-to-width ratio typically have more shaded areas and a better thermal environment in the summer.The solar radiation received by urban streets is also affected by the spatial arrangement of surrounding buildings and the orientation of urban street canyons.Streets oriented east-west tend to receive more solar radiation, as their direction aligns with the direct angle of the sun.The orientation of the streets also influences the shading effect of trees on either side [39].However, east-westaligned street trees can provide better cooling effects in the microclimate of urban street canyons [40].These findings highlight the regulatory role of urban street greening on the microclimate of urban street canyons.Despite the impact of the vertical structure of green vegetation in urban street canyons on solar radiation intake, this aspect cannot be reflected in remote sensing data [41].Therefore, choosing appropriate data sources and technical methods for the efficient simulation and calculation of solar radiation is crucial. 
Solar Radiation Simulation and Calculation With technological advancements, researchers have proposed and developed various methods and tools to calculate solar radiation levels within street canyon networks.Numerical simulation models can reflect changes in spatial heterogeneity [42].Models based on Computational Fluid Dynamics (CFD) software like FLUENT are widely used to study the urban climate and solar radiation levels [43].Repeated simulations of urban areas, including complex built environments (comprising buildings, vegetation, and public infrastructure), require a higher computational power and longer analysis time.Due to high demands on computer performance and computational costs, CFD models are difficult for non-experts to use or apply in large-scale urban models [44].Calculating based on remote sensing imagery is a classical method, computing canopy coverage, vegetation indices, etc., to explain the microclimate regulation role of urban street canyons.However, this only reflects the remote sensing imagery at a certain time and cannot simulate the solar radiation values influenced by complex street tree canopy structures and precise solar trajectories.With the prevalence of high-resolution digital model data, accurately simulating solar radiation in street canyons has become possible.Yet, these digital urban models often do not include the tree canopy layer. Matzarakis et al. employed ground-based hemispherical images, in conjunction with onsite measurements, to precisely gauge solar radiation and thermal environments within urban street canyons [45].This demonstrated that ground-based hemispherical images are a valuable supplemental data source when simulating and measuring solar radiation in urban street canyons.The use of these images allows for a more accurate consideration of the angle of solar incidence.Building on this, Li utilised urban street view images to measure solar radiation [46].Another study measured the potential for photovoltaic power generation in city roads through street view images [47].Traditional urban space solar radiation calculations primarily rely on manual surveys of small-scale spaces.Although these surveys achieve a high degree of accuracy, they are inefficient [16].Utilising street view image data can effectively address this issue.This is because street view data are widely distributed in the majority of cities around the world, making it possible for researchers to analyse urban solar radiation on a large scale using street view images [47].In summary, despite previous studies employing various methods to measure solar radiation, including street view data, there remains a gap in understanding the long-term variations in solar radiation in urban streets over multiple summers.Typically, research on solar radiation has focused on changes at the city-wide scale, overlooking the spatial differences between urban centres and suburbs.Therefore, this study integrates deep learning methods with the unique availability of large-scale street view data from the same locations over different time periods, to conduct a spatio-temporal analysis of a long-term solar radiation simulation and calculation in urban streets. 
Study Area and Data This study selects the central urban area of Shanghai as the research area (Figure 1).Shanghai is an important Chinese city for finance, culture, and international openness.According to the data from China's seventh national census, the total population of Shanghai is approximately 24.87 million.The central urban area is a highly developed region of Shanghai, accounting for more than half of the total population.The study area is located within the Shanghai Outer Ring Road, covering a land area of 664 square kilometres.This region is characterised by a diverse range of street canyon types, from the skyscrapers of the financial district to preserved traditional residential buildings.Shanghai has a subtropical monsoon climate, with an average annual temperature of 17.6 °C, 1885.9 h of sunshine, and 1173.4 mm of precipitation.Summers are typically very hot, with average temperatures exceeding 17.5 °C from May to October.The spatial variation of street canyon types and the relatively high summer temperatures make Shanghai an excellent case study area for examining changes in solar radiation.The datasets used in this study include road network data, street view data, and solar position data.The road network data are downloaded from OpenStreetMap (OSM), extracting the road network of the central urban area of Shanghai.Subsequently, we generated 71,546 sampling points along the streets, with a distance of 50 metres between two neighbouring points (Figure 1).Then, using the coordinates of these sampling points, we downloaded metadata from the Baidu website [46].We screened all the sampling points for the historical collection time and season.The filtering criteria were as follows: 1.Data from both 2013 and 2019 are available.2. The data collection time is between May and October.A total of 33,626 sampling points met these two conditions.For more information about Baidu Street View (BSV), please refer to the next section. Multi-Year Street View Data Collection and Seasonal Filtering In this study, in order to collect historical street view data for specific years, the process is divided into several steps.The first step is to download Shanghai's road network through OSM and generate sampling points at 50 m intervals (Figure 1c). The second step is to access the metadata of the point on the Baidu server through the latitude and longitude coordinates of the sampling points.The metadata contain more than ten types of information for the coordinates.The metadata used in this study mainly include the following types: ID (unique index number of the street view image), TimeLine (all historical data information existing for the coordinates), and MoveDir (the angle between the camera's forward direction and the northern direction when shooting).This is a BSV metadata example located at (longitude: 121.4953441, latitude: 31.2398195). 
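As a rough illustration of this sampling step, the snippet below generates points every 50 m along road geometries with GeoPandas/Shapely and queries a panorama metadata endpoint for each point. The endpoint URL, parameter names, and coordinate handling are simplified placeholders (the actual BSV service uses its own coordinate system and access details), so this is a sketch of the workflow rather than a drop-in script.

```python
import geopandas as gpd
import requests

def sample_points(roads_gdf, spacing=50):
    """Generate sampling points every `spacing` metres along road centrelines.
    Assumes the GeoDataFrame has been reprojected to a metric CRS."""
    points = []
    for line in roads_gdf.geometry:
        d = 0.0
        while d <= line.length:
            points.append(line.interpolate(d))
            d += spacing
    return gpd.GeoDataFrame(geometry=points, crs=roads_gdf.crs)

# roads.geojson is assumed to hold the OSM road network of the study area.
roads = gpd.read_file("roads.geojson").to_crs(epsg=32651)    # UTM zone 51N for Shanghai
samples = sample_points(roads).to_crs(epsg=4326)             # back to lon/lat

def fetch_metadata(lon, lat):
    """Query panorama metadata near a coordinate (hypothetical endpoint/params)."""
    url = "https://example-panorama-service/metadata"         # placeholder URL
    resp = requests.get(url, params={"lon": lon, "lat": lat}, timeout=10)
    return resp.json() if resp.ok else None
```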
Metadata of BSV panorama
{"ID": "09000300121905211356019738P", "MoveDir": "65.649", "TimeLine": [
{"ID": "09000300121905211356019738P", "IsCurrent": 1, "Time": "day", "TimeDir": "", "TimeLine": "201905", "Year": "2019"},
{"ID": "09000300121709121417093205B", "IsCurrent": 0, "Time": "day", "TimeDir": "obsolete", "TimeLine": "201709", "Year": "2017"},
{"ID": "09000300001504110442569601A", "IsCurrent": 0, "Time": "day", "TimeDir": "obsolete", "TimeLine": "201504", "Year": "2015"},
{"ID": "01000300001310131324599685J", "IsCurrent": 0, "Time": "day", "TimeDir": "obsolete", "TimeLine": "201310", "Year": "2013"}]}

In the third step, we collected all historical information and metadata for the 71,546 street view points. This comprised a total of 214,377 data entries, which we analysed using the Pandas library to compile statistics on the historical information of these street views (Table 1). The statistical analysis revealed that large-scale data updates were conducted by the map providers in 2013, 2015, 2017, and 2019. Due to the defoliation of trees in winter and the reduced impact of solar radiation intensity on pedestrians' subjective perception, we confined the time variable to data collected from May to October during the summer, excluding the winter period. Therefore, we excluded the year 2015. As we aimed to examine the trend of solar radiation over an extended period, we also excluded the year 2017. Finally, we selected street view points that had data from both 2013 and 2019, resulting in a total of 33,626 eligible sampling points.

In step four, the unique ID obtained in the previous step is used to fill in the URL for server access. The left and right halves of the panoramic image are obtained, both with a resolution of 512 × 512 pixels (Figure 2a). In the script we developed, stitching the left and right halves together yields a complete street view panoramic image.

Fisheye Image Generation and Azimuthal Rotation
After filtering the street view sampling points by season and year, panoramic street view images are collected through the metadata (Figure 2a). In this study, we convert these BSV panoramic images from an equirectangular cylindrical projection to an equidistant azimuthal projection to create fisheye images [46]. The mathematical model for the conversion is detailed in Figure 2b. W_c and H_c represent the width and height of the panoramic image, so the radius of the fisheye image, r_0, is W_c/2π, and the width and height of the fisheye image are W_c/π. Thus, the centre of the fisheye image (C_x, C_y) is calculated using Equation (1). For any pixel position (x_f, y_f) in the fisheye image, the corresponding pixel position (x_c, y_c) in the panoramic image can be obtained using Equation (2). For any point in the fisheye image, the angle θ between the coordinates and the starting position, and the radius r from the centre, can be calculated using Equations (3) and (4).
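To make the projection step concrete, here is a small NumPy/OpenCV sketch of an equirectangular-to-fisheye remapping consistent with the description above (fisheye radius r_0 = W_c/2π, square output of side W_c/π). Because the paper's Equations (1)-(4) are not reproduced here, the angular conventions below (zenith angle proportional to radial distance, azimuth measured from the image top) are our assumptions about the standard equidistant azimuthal projection rather than the authors' exact formulas.

```python
import numpy as np
import cv2

def panorama_to_fisheye(pano):
    """Remap an equirectangular panorama (H_c x W_c) to an equidistant
    azimuthal (fisheye) image of the upper hemisphere."""
    h_c, w_c = pano.shape[:2]
    r0 = w_c / (2 * np.pi)          # fisheye radius
    size = int(round(w_c / np.pi))  # fisheye width = height
    cx = cy = size / 2.0            # fisheye centre

    yf, xf = np.mgrid[0:size, 0:size].astype(np.float32)
    dx, dy = xf - cx, yf - cy
    r = np.sqrt(dx**2 + dy**2)                  # radial distance from centre
    theta = np.arctan2(dx, -dy) % (2 * np.pi)   # azimuth, 0 at the image top

    # Assumed mapping: azimuth -> panorama column, zenith (r / r0 * 90 deg) -> row.
    xc = (theta / (2 * np.pi) * w_c).astype(np.float32)
    yc = (np.clip(r / r0, 0, 1) * (h_c / 2 - 1)).astype(np.float32)

    fisheye = cv2.remap(pano, xc, yc, interpolation=cv2.INTER_LINEAR)
    fisheye[r > r0] = 0             # mask pixels outside the fisheye circle
    return fisheye
```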
The aforementioned mathematical model is used to transform street view panoramic images into fisheye images. However, panoramic images are always captured in the format of vehicle rear on the left and vehicle front on the right, so the generated fisheye images do not have their top side facing north in the actual geographical space. Therefore, it is necessary to use the angle between the camera and north at the time of capture, as collected in the metadata. The rotation angle is calculated using Equation (5). The OpenCV tool is used for image processing and rotation, with a consistent counterclockwise rotation applied. The fisheye image is rotated so that its top side faces north in geographical space (Figure 2c).

In Equation (5), θ_n represents the MoveDir derived from the BSV panorama metadata, while θ_m is the angle by which the image is rotated counterclockwise using OpenCV, as shown in Figure 2c. This alignment allows the rotated fisheye image to share the same coordinate system as the sun path projected onto a two-dimensional plane. Consequently, it becomes feasible to overlay the sun path onto the fisheye image, facilitating the calculation of solar radiation in this study.

Calculating Solar Radiation over Many Years through Street Views
In the sweltering summer streets of urban canyons, street trees and buildings are the primary means of providing shade for pedestrians. As cities develop, buildings, being artificial structures, remain largely unchanged over an extended period, barring instances of demolition or new construction. Therefore, in summer, the shade provided by trees becomes the most significant factor influencing the study of the radiant environment. Given specific geographic coordinates (longitude and latitude), it is possible to calculate the precise movement path of the sun at any given moment. Overlaying these data with the fisheye image then allows the solar radiation ratio and its variations to be calculated.

Calculating solar radiation requires considering the proportion of the sky in a fisheye image. Previous studies have employed threshold segmentation methods or machine learning approaches to identify differences in pixel colour between plants and the sky. With the development of deep learning, mature image semantic segmentation technology can more accurately evaluate the proportion of the sky in fisheye images. We employed a pre-trained deep learning model based on the ResNet neural network architecture [23] (Figure 3a). Specifically, this architecture introduced the concept of residual structures, which significantly mitigates the issue of training difficulties in deep neural networks. This architecture is a classic network structure for image semantic segmentation tasks and fully meets the accuracy requirements of this study. It should be noted that deep learning methods are only used for image semantic segmentation and cannot be directly applied to evaluate solar radiation.
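Returning to the rotation step described above, the rotation itself is a one-line affine warp in OpenCV. Because Equation (5) is not reproduced here, the relation between MoveDir (θ_n) and the counterclockwise rotation angle (θ_m) used below is an assumption for illustration (simply θ_m = θ_n); the key mechanics are that cv2.getRotationMatrix2D with a positive angle rotates counterclockwise about the fisheye centre.

```python
import cv2

def rotate_fisheye_to_north(fisheye, move_dir_deg):
    """Rotate a fisheye image counterclockwise so that its top faces north.
    The angle formula is an assumed stand-in for the paper's Equation (5)."""
    h, w = fisheye.shape[:2]
    theta_m = move_dir_deg % 360          # assumed theta_m; replace with Equation (5)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), theta_m, 1.0)  # +angle = CCW
    return cv2.warpAffine(fisheye, M, (w, h))
```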
Using Pysolar, the position of the sun can be precisely calculated based on geographical coordinates and time. As the sun changes position over the course of a day, its movement trajectory can be overlaid with a fisheye image to measure the duration of solar radiation. Figure 3b displays the sun's movement trajectory for 2013 and 2019. The time frame is from May to October, on the 15th of each month, with the trajectory recorded every 10 min. If the sun is not in the sky area of the fisheye image, sunlight is obstructed by plants and buildings. Our study is based on ideal conditions of clear weather without cloud cover, although this introduces a discrepancy with actual weather conditions. Nevertheless, this method of measuring solar radiation can provide scientific guidance for urban planning.

Solar radiation consists of direct radiation and diffuse radiation [48,49]. Based on fisheye images generated from BSV panoramas, reasonable predictions of solar radiation in urban streets can be made [50]. Therefore, in this study, we used fisheye image data from sampling points of two different years to calculate street radiation. For direct radiation, the calculation is based on the proportion of the intersection between the solar path and the sky pixels in the fisheye images. The calculation process can be expressed by Equation (6), where h_1 is the sunrise time, h_2 is the sunset time, θ_h represents the solar zenith angle at time h, and B_h indicates whether the sun is obscured at time h, represented by the Boolean values 0 or 1.

Diffuse radiation is a form of solar radiation scattered in the atmosphere. The amount of diffuse radiation can be estimated through the distribution of shading obstacles and the diffuse sky [50]. Assuming that diffuse radiation is uniformly distributed in the sky, the sky is divided into 8 × 16 sky sectors to create a sky map. The proportion of diffuse radiation reaching the ground can be predicted using Equation (7), where G_{a,z} is the proportion of visible sky obtained from image semantic segmentation; θ_{a,z,2} and θ_{a,z,1} are the boundary zenith angles of the sky sector; and θ_z is the solar zenith angle at the centroid of the sky sector.
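The following sketch illustrates how a sun trajectory computed with Pysolar can be intersected with the sky mask of a north-aligned fisheye image to obtain the obstruction indicator B_h used in the direct-radiation calculation. The pixel mapping repeats the equidistant-azimuthal assumption from the earlier snippet, and the cos(θ_h) weighting is our reading of Equation (6) rather than a reproduction of it.

```python
import datetime as dt
import numpy as np
from pysolar.solar import get_altitude, get_azimuth

def sun_visible_fraction(sky_mask, lat, lon, day, step_min=10):
    """sky_mask: boolean array (True = sky pixel) of a north-up fisheye image.
    Returns the cos(zenith)-weighted fraction of daytime during which the sun
    falls on a sky pixel (an approximation of the direct-radiation ratio)."""
    size = sky_mask.shape[0]
    r0, cx, cy = size / 2.0, size / 2.0, size / 2.0
    weighted_visible, weighted_total = 0.0, 0.0
    t = dt.datetime(day.year, day.month, day.day, 0, 0, tzinfo=dt.timezone.utc)
    for _ in range(0, 24 * 60, step_min):
        alt = get_altitude(lat, lon, t)              # solar altitude in degrees
        if alt > 0:                                   # daytime only
            az = get_azimuth(lat, lon, t)             # degrees clockwise from north
            zen = 90.0 - alt
            r = zen / 90.0 * r0                       # equidistant azimuthal radius
            x = cx + r * np.sin(np.radians(az))
            y = cy - r * np.cos(np.radians(az))
            w = np.cos(np.radians(zen))
            weighted_total += w
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < size and 0 <= yi < size and sky_mask[yi, xi]:
                weighted_visible += w
        t += dt.timedelta(minutes=step_min)
    return weighted_visible / weighted_total if weighted_total else 0.0
```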
The total radiation for the streets can be calculated by adding the total direct solar radiation and the total scattered solar radiation [51]. The total direct solar radiation Rad_di and the total diffuse solar radiation Rad_dif in Equation (8) are taken from ground station data. These data come from the National Solar Radiation Database (http://www.nrel.gov/rredc/). The average daily direct radiation and diffuse radiation in Shanghai from 1 May to 31 October 2013 were 3602.597826 Wh/m² and 2591.847826 Wh/m², respectively. From 1 May to 31 October 2019, the average daily direct radiation and diffuse radiation were 3180.038043 Wh/m² and 2516.869565 Wh/m², respectively.

In the final analysis of solar radiation, we calculate the average total solar radiation from May to October, represented respectively as R_may and R_oct. For thermal comfort, the five-month average radiation index is denoted as Rc (Equation (9)).

Distribution and Trend of Solar Radiation over Time
Table 2 describes the distribution characteristics of solar radiation data in Shanghai for the years 2013 and 2019. Overall, the level of solar radiation in 2019 decreased by 12.34% compared to 2013. The greatest decrease occurred in October, with a reduction of 13.47%, while the smallest decrease was in May, at 11.71%. Kurtosis (kurt) in the table is a statistical measure used to describe the shape of the distribution of the solar radiation data. Kurtosis measures the peakedness of the data distribution, assisting in understanding the distribution characteristics of solar radiation in a given month. The study results show that the kurtosis values for both 2013 and 2019 are negative, indicating that the distribution of solar radiation data for these years is flatter than a normal distribution, suggesting a lower degree of peakedness. This may imply significant variability in solar radiation, with considerable differences in radiation levels at different locations and times. A statistical analysis of the street view sampling points reveals that the number of locations with reduced solar radiation is 25,749, accounting for 76.57% of the total, while the number of locations with increased radiation is 7,877, representing 23.43% of the total. These data distribution characteristics indicate an overall decreasing trend in solar radiation. This information is also useful for understanding the variations in solar radiation distribution across different areas and times within the city. Figure 4 displays the changing trends of solar radiation in the different years. In Figure 4a, we have plotted curves fitting the changing trends of solar radiation for the two years. The R² value for 2013 is 0.8593, and the R² value for 2019 is 0.8642. These values indicate that our curves fit the solar radiation data with a high degree of accuracy. We have used the calculated data to draw box plots of radiation levels for each month in the two years in Figure 4b. It can be observed that the overall solar radiation in 2019 is lower than in 2013. From May to October, the average solar radiation in 2013 gradually increases, reaches a peak, and then starts to decrease. Similarly, the average solar radiation in 2019 exhibits a similar trend, but at a lower overall level.
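The per-point statistics reported above (monthly means, kurtosis, and the share of sampling points whose radiation decreased) can be reproduced from a table of point-level results with a few lines of pandas. The file and column names below are illustrative assumptions about how such a table might be organised.

```python
import pandas as pd

# Hypothetical point-level results: one row per sampling point with the
# mean May-October radiation (Wh/m^2) for each year.
df = pd.read_csv("radiation_by_point.csv")   # assumed columns: point_id, rad_2013, rad_2019

df["diff"] = df["rad_2019"] - df["rad_2013"]
share_decreased = (df["diff"] < 0).mean()
share_increased = (df["diff"] > 0).mean()
overall_change = df["rad_2019"].mean() / df["rad_2013"].mean() - 1

print(f"points with reduced radiation:   {share_decreased:.2%}")
print(f"points with increased radiation: {share_increased:.2%}")
print(f"overall relative change:         {overall_change:.2%}")
print("kurtosis 2013:", df["rad_2013"].kurt())   # excess kurtosis; negative = flatter than normal
print("kurtosis 2019:", df["rad_2019"].kurt())
```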
Distribution of Solar Radiation in Geographic Space
In Figure 5, we have plotted a comparison of solar radiation between May and October in the years 2013 and 2019. To ensure an intuitive comparison in the geographical visualisation, we have used the same data-partitioning intervals. All value ranges are from 0 Wh/m² to 5000 Wh/m². As a result, we can observe a varying degree of reduction in solar radiation during different months in both summer years. The average monthly reduction in summer is 325.713 Wh/m², with September experiencing the highest reduction of 334.431 Wh/m² and May the lowest reduction of 311.722 Wh/m².

In terms of geographical spatial distribution, within the same temporal cross-section, the amount of solar radiation in urban central areas is typically less than that in regions outside the study area. This is due to the high-rise buildings in city centres blocking sunlight, further reducing the duration of direct radiation received by the surrounding streets. Across the two temporal cross-sections, the reduction in solar radiation is more significant and apparent in Pudong, located in the southeast, compared to Baoshan, Putuo, and Jiading in the northwest. This is attributed to Pudong being a key area for urban development, receiving more financial investment and undergoing more rapid infrastructural development. Consequently, planting more trees has enhanced the shading effect on the streets. In contrast, the solar radiation in central urban areas like Yangpu, Hongkou, Jing'an, Huangpu, and Xuhui has not significantly decreased. This is because these central districts, being prioritised for development, have not seen substantial changes in their established urban architecture and vegetation growth.
Distribution and Trend of Solar Radiation Variations in Geography
We calculated the solar radiation in the urban space for both years and then calculated the difference in solar radiation at each street view collection point. In Figure 6a, we analyse the change data of solar radiation through geographical visualisation. The results show that the closer one is to the urban outskirts, the more likely it is that solar radiation has decreased. However, in the city centre areas, there may even be cases where solar radiation has increased.

In Figure 6b, we have quantified and visualised the differences in solar radiation. The number of street view points with reduced solar radiation is represented in green, while the number of street view points with increased solar radiation is represented in red. Since there are more green street view points than red ones, areas with decreased solar radiation outnumber those with increased solar radiation across the overall collection of street view points. We found that the overall distribution of the differences in solar radiation is approximately normal, in line with the fundamentals of statistics. To intuitively compare the results, we reversed the direction of the y-axis for the reduction in solar radiation and displayed it with a cool colour scheme. The results show that the decrease in solar radiation is mainly concentrated between 0 Wh/m² and −2000 Wh/m², while the increase in solar radiation is concentrated between 0 Wh/m² and 1000 Wh/m². Although there are areas in the city where solar radiation has increased, the proportion of reduced solar radiation is greater. The overall solar radiation in the city is decreasing, and the vegetation shading and infrastructure construction in this city have changed relatively rapidly and significantly.
What of the spatial distribution of solar radiation in Shanghai? Inspired by the results of the solar radiation difference, we became interested in the pattern of change in radiation intensity from the inner to the outer city. Is the reduction in solar radiation greater in the inner city or the outer city? Taking our selected research area as an example, we used the geometric centre of all the street view collection points as the centre of the circles and drew 13 concentric circles at a 1.5 km interval. The innermost circle has a radius of 1.5 km, and the radius of each subsequent circle increases by 1.5 km. We aggregated and averaged the solar radiation difference scores within each concentric ring to reveal the spatial changes in solar radiation in the inner and outer city. In Figure 7, we draw a demonstration diagram, which covers most of the city's street view collection points.

In Figure 8a, we can compare the mean solar radiation of each ring. Overall, except for the innermost 1.5 km, the mean solar radiation in the remaining rings in 2019 is lower than that in 2013. This indicates that the decrease in urban solar radiation is broadly consistent across the inner and outer urban spaces. It reflects the city's construction strategy during these six years, with the greening and shading effect being stronger in the rings closer to the outer city, and it also reflects the city's historical development as a highly modernised inner city currently undergoing an expansion phase from the central area outwards.
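A minimal sketch of the concentric-ring aggregation just described: each street view point is assigned to one of the 13 rings of 1.5 km width around the geometric centre of all points, and the radiation difference is averaged per ring. Projected metre coordinates and the column names are assumptions; this is not the authors' code.

```python
# Assign points to 1.5 km rings around the geometric centre and average the
# radiation difference per ring. Points beyond the 13th ring are clipped into it.
import numpy as np
import pandas as pd

RING_WIDTH_M = 1500.0
N_RINGS = 13

def ring_means(points: pd.DataFrame) -> pd.Series:
    """points: columns 'x', 'y' (metres, projected CRS) and 'diff' (Wh/m^2)."""
    cx, cy = points["x"].mean(), points["y"].mean()          # geometric centre
    dist = np.hypot(points["x"] - cx, points["y"] - cy)      # distance to centre
    ring = np.minimum(dist // RING_WIDTH_M, N_RINGS - 1).astype(int)
    return points.groupby(ring)["diff"].mean()

points = pd.read_csv("radiation_diff_points.csv")            # hypothetical input
print(ring_means(points))   # mean radiation difference for rings 0..12
```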
We have depicted in Figure 8b the differences in solar radiation across the different urban zones, illustrating the trend of solar radiation from the inner to the outer city. The X-axis represents the distance from the inner to the outer city, while the Y-axis indicates the difference in solar radiation between the two years. A regression line was fitted to these observations, relating the independent and dependent variables. The slope of this line is −41.51, with an intercept of −52.11. Thus, compared with the solar radiation variation within a 1.5 km radius of the city centre, for every additional 1.5 km spread outward, the solar radiation decreases by an additional 79.66%. The results indicate that, as the distance from the city centre increases, the reduction in street-level solar radiation also intensifies. However, the confidence band reveals a deceleration of this reduction trend in the last five urban zones.
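To make the regression step concrete, here is a small sketch that fits a line to ring position versus mean radiation difference and reports the slope, the intercept, and the slope-to-intercept ratio quoted above. The ring means below are synthetic placeholders constructed to reproduce the reported coefficients, and fitting against the ring index (one ring = 1.5 km) rather than distance in kilometres is an assumption.

```python
# A minimal sketch of the Figure 8b regression; the input values are synthetic
# and simply reproduce the reported slope (-41.51) and intercept (-52.11).
import numpy as np
from scipy import stats

ring_index = np.arange(1, 14)                        # rings 1..13, 1.5 km apart
mean_diff = -52.11 + (-41.51) * ring_index           # placeholder, perfectly linear

res = stats.linregress(ring_index, mean_diff)
print(f"slope = {res.slope:.2f}, intercept = {res.intercept:.2f}")
# The 79.66% figure quoted in the text equals |slope| / |intercept|:
print(f"|slope| / |intercept| = {abs(res.slope) / abs(res.intercept):.2%}")
```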
Interannual Differences in Street View Data and Data Quality

In our research, we employed street view images from different years, which requires an understanding of the potential limitations and impacts of the data. When collecting street view images from the various years, we used the same resolution of 1024 × 512 pixels. The street views from both years were panoramic images, which facilitates their conversion into fisheye images for analysis without the need to consider the angle of capture and thus ensures uniform data quality. We were unable to obtain street view data from specific months at will, as the timing of data collection by map service providers is beyond our control. We therefore relaxed the time constraints, limiting the collection period to May through October; this guaranteed that the tree growth cycle was not in its leaf-falling stage. The method we employed for analysis involved converting panoramic images into fisheye images, for which the most significant natural influencing factors are changes in plant growth and the consequent sky obstruction; these also became the main factors affecting shadow and solar radiation. Owing to the consistency of the street view data, our analytical method was able to accommodate these interannual differences.

Interannual variations in street view data could also potentially have a certain impact. For example, despite the use of images with the same pixel resolution and of panoramic images that do not differentiate capture angles, different models of panoramic cameras used in different years might result in inconsistencies in image white balance and colour temperature processing. This could lead to varying degrees of prediction precision by the deep learning networks across different datasets. Another example is the difference in leaf growth seasons: leaves may grow differently in different months, which inevitably has a slight impact on measurements of solar radiation. These issues could be addressed in the future as map service providers offer more consistent data sources, as well as through improvements in data preprocessing and other solutions.

Summary of the Phenomenon and Implications for Development Policy

This study aims to explore the spatial distribution and trends of solar radiation in urban areas. Our results reveal that, in terms of temporal distribution, the overall average solar radiation on streets in 2019 decreased by 12.34% compared to 2013. The greatest decrease occurred in October, at 13.47%, while the smallest decrease was in May, at 11.71%. A statistical analysis of the street view sampling points indicates that the proportion of locations experiencing reduced solar radiation was 76.57%, while those with increased radiation accounted for 23.43%. Spatially, for every additional 1.5 km from the city centre, solar radiation decreased by an additional 79.66%. Hence, over time, solar radiation generally shows a decreasing trend, and the variations in solar radiation in urban streets differ significantly between regions. Furthermore, the Pudong area in the southeast, a key urban development zone, exhibited a more pronounced decrease in solar radiation, which is associated with its economic investment and the pace of its infrastructure construction. The decline in solar radiation was more evident in the urban periphery, whereas the central urban area even experienced an increase in solar radiation. By analysing the differences in solar radiation across the distance bands, we found a negative correlation between the distance from the city centre and the change in solar radiation. However, in the last five distance bands, the trend of decreasing urban solar radiation showed signs of slowing down.

This study reveals the spatio-temporal relationship between urban spatial development and solar radiation, which provides useful insights for the practice of urban development policies.

1. Urban development policies need to emphasise the balance between greening and infrastructure construction. The shading effect of greenery can lower urban temperatures and alleviate the urban heat island effect. However, tree canopies may also hinder ventilation and contribute to the accumulation of emissions, affecting quality of life. We recommend rationally planning urban ventilation corridors, considering the impact of ventilation coefficients on cooling. Therefore, in the process of urban planning and development, it is crucial to consider the relationship between urban buildings, infrastructure, and greenery, and to correctly select parameters such as the location and spacing of trees, in order to maintain both the ecological environment and the comfort of human living conditions.
2. Urban planning should fully consider the impact of solar radiation on renewable energy. With the development of cities and the growth of populations, the demand for energy is continuously increasing; improving the utilisation rate of renewable energy is therefore particularly important. By rationally arranging road solar photovoltaic power generation facilities to charge electric vehicles in transit, solar energy resources can be fully utilised, thereby enhancing the efficiency of solar energy use [47]. Additionally, the research findings have a certain reference value for architectural design and the selection of building materials. Based on the identification results, it is possible to provide early warnings for the energy consumption of buildings in specific areas, thereby achieving the goals of energy conservation and emission reduction.

3. The practice of urban development policies should emphasise the coordination of internal and external urban development. The research results indicate that the reduction in solar radiation increases with the distance from the city centre, suggesting a potential imbalance between internal and external urban development. Therefore, policymakers should focus on the coordinated development of internal and external areas, rationally allocating urban resources and infrastructure. By reducing the solar radiation in city centres, the heat island effect can be mitigated, helping to achieve overall sustainable urban development. The research results hold significant theoretical and practical value for guiding urban planning and construction, optimising urban infrastructure, and promoting sustainable urban development.

4. Considering the specific applications of solar radiation in urban streets, solar radiation affects not only the overall energy consumption and environmental temperature of cities but also directly influences residents' comfort and the conduct of outdoor activities. High-intensity solar radiation can lead to excessively high street temperatures, affecting pedestrians' comfort and health, and may even restrict the duration and frequency of outdoor activities. To address these issues, urban planning should consider a reasonable layout of shading facilities in street design, such as tree canopies, awnings, and pavilions, to reduce the areas directly exposed to sunlight. Additionally, studying the heating effects of solar radiation on different materials can guide the selection of appropriate building and paving materials to lower street temperatures. Furthermore, optimising the spacing and orientation of buildings can enhance urban ventilation and lighting conditions, thereby improving the outdoor activity environment. These measures can not only improve the quality of life for residents but also promote the sustainable development of cities.
The Scientific Contribution of the Practical Approach

On a technical level, this study utilises the trajectory of the sun and deep learning technology to calculate street-level solar radiation, significantly enhancing the efficiency of solar radiation computation. From a data perspective, by calculating street solar radiation using street view images from different years, this research methodology helps to reveal the trend of solar radiation changes during the urban development process. This approach can uncover the speed and scale of urban development, and further contribute to the discussion of impacts in aspects of urban planning, photovoltaic power generation, and improvements in the urban thermal environment. Methodologically, the research employs an equidistant concentric circle analysis method to study the pattern of changes in urban solar radiation across zones at different distances. Through the analysis of concentric-circle solar radiation variation statistics, centred on downtown Shanghai, it reveals a consistent linearity in the reduction in solar radiation from the inner city to the suburbs, reflecting the historical development process of urban expansion from the central area outward. Analytically, the study combines solar radiation with Geographic Information Systems (GIS) to achieve a spatial visualisation of solar radiation differences. This visualisation technique aids in more intuitively presenting the geographical distribution characteristics and changing patterns of solar radiation, enabling researchers and decision-makers to more easily comprehend the actual situation of urban spatial perception. Finally, by comparing radiation differences across different geographic areas, we can identify issues and shortcomings in urban development, providing strong support for optimising urban planning and enhancing the quality of life for residents. Therefore, this study's practical method makes significant scientific contributions in the field of urban science, offering new perspectives and methodologies for studying urban spatial radiation and aiding in advancing urban science research.

Research Limitations

This study has certain limitations, and we aim to propose solutions for these constraints. Firstly, the research only examines two points in time, 2013 and 2019, which may not fully capture the long-term trends of urban solar radiation. Although the framework proposed in this study is limited by the street view data collection periods, it is possible to seek alternatives such as using multi-source remote sensing data to calculate the long-term trends of solar radiation. Secondly, this study only explores the case of Shanghai and does not consider other cities, which may limit the applicability of the research findings to other urban areas. Future work could expand the study to include more cities to test the generalisability of the results. Lastly, while this study reveals trends in urban spatial solar radiation changes, it is necessary to further investigate the specific causes behind these changes and their impact on the urban ecological environment. Future related research could combine analyses of urban historical development policies and infrastructure construction processes to provide a deeper interpretation of the changing trends.
In future research, we could also consider introducing more factors related to urban development, such as population density, traffic flow, and types of land cover, to reveal their interactions with urban solar radiation. Additionally, we could attempt to use more complex mathematical models and machine learning methods to improve the accuracy of predictions regarding changes in urban solar radiation. This would bring a more comprehensive and in-depth understanding to the field of urban science and offer strong support for sustainable urban development.

Conclusions

Solar radiation in urban streets is a significant characteristic of urban space, substantially impacting human welfare and urban development. However, previous studies have predominantly evaluated solar radiation from a single time snapshot or a holistic urban perspective, focusing on its characteristics [15,52,53]. The temporal and spatial variations of street-level solar radiation have not been adequately addressed. Street view images can record a continuous sequence of changes in streets from a pedestrian's perspective, a feature that has not been fully utilised in the computation of solar radiation. In this study, we investigated the temporal and spatial changes of solar radiation within a city, based on street view images taken at the same locations at different times. Specifically, this allowed us to reveal the characteristics of solar radiation distribution across different time dimensions. The results showed a consistent pattern of decreased solar radiation in urban spaces, with the reduction increasing roughly linearly from inner to outer city areas, reflecting urban construction strategies over these six years. The outskirts of Shanghai showed stronger effects of greenery shading, indicative of Shanghai's status as a highly modernised inner city undergoing expansion from the central area outwards.

This work leverages a plethora of street-level images to observe variations in solar radiation across a city, offering decision-makers a free and efficient method to capture urban changes and predict future solar radiation. We believe this approach has great potential for extension to other urban studies, providing longer-term planning guidance and predictions.

Figure 1. Study area. (a) Map of China; (b) map of Shanghai; (c) study area and Street View points.
Figure 2. Using BSV panorama for azimuth fisheye view. (a) An example of a panorama metadata URL; (b) a cylindrical BSV panorama; (c) the generated fisheye image based on the geometrical transform model, adjusted to generate the correct-orientation fisheye image.
Figure 3. Fisheye diagrams are used to calculate solar radiation and thermal comfort.
Figure 4. Changing trends of solar radiation in the same months of different years. (a) Variation trend of solar radiation; (b) monthly solar radiation statistics.
Figure 5. Spatial distribution of solar radiation during May to October in summer 2013 and 2019.
Figure 6. Geographic spatial distribution and trend of solar radiation changes in two years.
Figure 7. A statistical diagram of solar radiation changes, with concentric circles centred on the city centre of Shanghai. Using the centre of the study area's shape as the midpoint, 13 concentric circles with incremental radii of 1.5 km are drawn. For each concentric circle area, the solar radiation amounts in the urban space are summarised and averaged.
Figure 8. Solar radiation change trends. (a) Bar chart of solar radiation amounts at different distances from the city centre; (b) line chart of solar radiation changes at different distances from the city centre.
Table 1. Statistical analysis of the temporal distribution of Baidu Street View data in Shanghai's central urban area.
Table 2. Data distribution of solar radiation in different years (Wh/m²).
2024-06-09T15:10:05.484Z
2024-06-07T00:00:00.000
{ "year": 2024, "sha1": "0cb87d27dee91e90c0d207b145732d4b52e43eab", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ijgi13060190", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "a2a5a57901efb98b8a2b2edeadd1aeda9410c195", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
54655473
pes2o/s2orc
v3-fos-license
Interactive comment on "A model-data comparison of the Last Glacial Maximum surface temperature changes" by Akil Hossain et al.

Answer to reviewers' comments: A model-data comparison of the Last Glacial Maximum surface temperature changes

Akil Hossain, Xu Zhang, Gerrit Lohmann
Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, Bremerhaven, Germany

General remarks

We are very thankful to the editor and reviewers for the effort and time dedicated to the reviewing of our manuscript and for the helpful reviews. In order to address all

According to Bartlein et al. (2011), July temperature in the northern hemisphere (December in the southern hemisphere) has been combined with reconstructions of the mean temperature of the warmest month (MTWA). Similarly, December temperature in the northern hemisphere (July in the southern hemisphere) has been combined with reconstructions of the mean temperature of the coldest month (MTCO) (Fig. S3, see also Bartlein et al., 2011).

During the LGM, Africa was warmer (1 to 4 °C) than today in the reconstruction of MTWA (Fig. S3, see also Wu et al. 2007). A few sites in the northern hemisphere, especially in Alaska, show reconstructions of warmer conditions in the seasonal temperature variable MTWA, and conditions similar to or slightly warmer than today are registered chiefly in MTCO (Fig. S3) (Bartlein et al., 2011). The LIS was large enough to cause a reorganization of the atmospheric circulation pattern. This reorganization could have resulted in a more southerly landward flow into Alaska, which would have produced advective warming in this region year-round (Bartlein et al., 2011). In general, the summer temperature changes as represented by MTWA (Fig. S3) are smaller than the winter temperature changes as represented by MTCO (Fig. S3, see also Bartlein et al., 2011).

For a comparison with proxy data, the warmest and coldest months of the model results have been compared with the seasonal temperature variables MTWA and MTCO. For MTWA, the highest correlation coefficient and lowest deviations are found for the LGMctl (R = 0.50, RMSE = 6.5‰) and Ice6g_LIS (R = 0.50, RMSE = 6.5‰) ice-sheet reconstructions, and the lowest correlation coefficient and largest deviations for the Gowan_NAIS (R = 0.44, RMSE = 6.3‰) (Fig. 5). Similarly, for MTCO, the highest correlation coefficient and lowest deviations are also found for the LGMctl (R = 0.46) and Ice6g_LIS (R = 0.46), and the lowest correlation coefficient for the Gowan_NAIS (R = 0.43) (Table 3). Overall, the correlation coefficients for the warmest and coldest months of the model are higher than for the model annual mean (Table 3).

L291: 3.2.2 Land Surface temperature changes

The annual mean SAT of the PMIP3 LGM climate is on average 4.5 °C colder than the PI climate, and CNRM is comparatively warmer (annual mean temperature −2.6 °C) than the other models. The PMIP3 model results have been compared with the LGM continental temperature reconstruction by Bartlein et al. (2011). The reconstructions show year-round cooling during the LGM over the continents except at a few sites in Alaska (Fig. 7) (Bartlein et al., 2011). As for the SST reconstructions, among the eight PMIP3 models, IPSL-CM5A-LR (R = 0.27, RMSE = 3.3‰) shows the highest correlation (Table S5), although most of the models show a low correlation coefficient with the reconstructed data-set. MTWA (highest R of 0.53) shows a higher correlation than MAT and MTCO (highest R of 0.27 and 0.48, respectively). Overall, the correlation between model and data is higher for MTWA and MTCO than for the model annual mean (Table S5).
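For readers who want to reproduce the kind of skill metrics quoted above (correlation R and RMSE between simulated and reconstructed anomalies at the proxy sites), a minimal sketch follows. The sampling of the model at the core locations is omitted and the input numbers are made up; variable names are assumptions, not the authors' code.

```python
# A minimal sketch of a point-wise model-data comparison: Pearson R and RMSE
# between LGM-minus-PI anomalies sampled at the same core locations.
import numpy as np

def compare(model_anomaly: np.ndarray, proxy_anomaly: np.ndarray):
    """Both arrays hold anomalies at the same core locations."""
    mask = np.isfinite(model_anomaly) & np.isfinite(proxy_anomaly)
    m, p = model_anomaly[mask], proxy_anomaly[mask]
    r = np.corrcoef(m, p)[0, 1]
    rmse = np.sqrt(np.mean((m - p) ** 2))
    return r, rmse

# Example with made-up numbers:
model = np.array([-6.1, -4.8, -7.3, -2.0, -5.5])
proxy = np.array([-5.0, -4.2, -8.1, -1.5, -6.0])
r, rmse = compare(model, proxy)
print(f"R = {r:.2f}, RMSE = {rmse:.2f}")
```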
Comment R1.4

Equally importantly, the comparison between the reconstructions and the models could be improved. A simple correlation can be very misleading and the RMSE (deviation from the 1:1 line, why in per mille?) is a much more useful measure of the difference. Moreover, there is no statistical treatment of the uncertainties in the data or the model (at the minimum, interannual variability in the model and the reported errors on the reconstructions should be taken into account). None of the statements about significance are accompanied by an explanation of how this was determined and at what confidence level. This leaves the reader wondering whether the differences between the different ice sheet configurations or the different season/depth biases are real or meaningful. This is crucial as many differences between the models are very small. At some places in the manuscript the authors mention uncertainty in the models too. It would be good if they discuss this more upfront. With so many models and different configurations of the same model (in this case the ice sheet topography) there are many degrees of freedom and there is a large chance of being right for the wrong reasons, not only because the proxies are biased (L163). How do the authors deal with that? Related to this, what have we learned about the model (configuration)? If some of the observed differences between the model runs are real/significant, then why? Where? Can the authors go deeper into the mechanisms or the physics that explain the differences?

AC: In our study, the correlation coefficients between the reconstructions and the models show a similar pattern to the RMSE values. As a unit of RMSE we have used per mille. A discussion of potential uncertainties in the model has been added to the manuscript.

Author's changes in manuscript: Different local feedbacks working in upwelling systems might complicate the SST data-model comparison, since local cooling can occur within regions where widespread warming is found (Leduc et al., 2010b). Similarly, mismatches can occur due to difficulties in capturing variations in oceanic fronts in the climate models. Figure 4b shows the difference between the best-fit seasonal SST and the temperature recorded by the proxies. In the North Atlantic, there is still a large difference between the best-fit SST and the temperature recorded by the proxies, especially for dinoflagellates (Fig. 4b). The observed mismatch between modelled and reconstructed LGM climate evolution might be related to the lack of representativeness of long-term temperature anomalies in climate models. The large discrepancy between data and model is likely caused by the large uncertainties in the reconstructed data as well as model deficiencies.
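The "best-fit seasonal SST" mentioned in the manuscript changes above could, under one plausible reading, be obtained by comparing each reconstruction with the model annual mean and the individual seasonal means and keeping the closest candidate. The sketch below illustrates that idea only; the season list, variable names, and selection rule are assumptions and not necessarily the procedure used in the manuscript.

```python
# A hypothetical best-fit season selection for one core site: pick the model
# mean (annual or seasonal) with the smallest absolute difference to the proxy.
import numpy as np

SEASONS = ["annual", "DJF", "MAM", "JJA", "SON"]

def best_fit_season(model_means: np.ndarray, proxy_value: float) -> str:
    """model_means: anomalies in the order given by SEASONS for one core site."""
    idx = int(np.argmin(np.abs(model_means - proxy_value)))
    return SEASONS[idx]

# Example with made-up numbers for a single North Atlantic site:
model_means = np.array([-5.2, -7.8, -6.0, -3.1, -4.4])
print(best_fit_season(model_means, proxy_value=-3.5))   # -> 'JJA' (summer bias)
```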
The interpretation of our data-model comparison suggests that Mg/Ca proxies are winter biased, while foraminifera, dinoflagellates, and alkenones are summer biased. We find similar results using both the COSMOS LIS simulations and the PMIP3 simulations, which indicates that the deviation between model outputs and proxy data does not seem to be due to a specific climate model but is a robust feature of LGM climate simulations with coupled climate models. One hypothesis is that the proxies correctly capture local temperature trends that the models are not able to simulate. A possible way to test this effect is to use a new high-resolution ocean model, with resolution of up to 7 km in deep-water formation areas and in coastal areas that are highly sensitive to external forcing (Scholz et al., 2013), and apply this model to the LGM.

Palaeoclimate information collected from data-model comparisons is difficult to put into a context which goes beyond a description of the observed data-model discrepancies, as both proxy reconstructions and climate models are imperfect and have many different characteristics. Proxy reconstructions are patchy and sparse, and can be affected by different local processes and proxy specificities, which are not always accounted for in the reconstructions. Usually, palaeoclimatologists tend to collect data in regions where the signal is clear and where sedimentation allows it. Therefore, there is a possibility of overestimation of the SST signals due to the selection of the sites. Regional dynamics and spatially heterogeneous patterns provide an additional uncertainty for our proxy data and model comparison.

For our model-data comparison, it is worth mentioning that climate models have limitations in spatial resolution and are unable to represent the full complexity of the physical Earth System. The proxy records used in most studies are more often located in coastal areas, and climate models do not represent these regions well because of their low resolution (Lohmann et al., 2013). Coastal areas may be particularly sensitive to external forcing, as their thermal inertia is lower than that of the open ocean due to land-ocean interactions and a shallower thermocline. Moreover, the representation of mixed-layer dynamics may be essential to improve climate simulations and their agreement with palaeoceanographic reconstructions.

Comment R1.5

In addition, the manuscript lacks a clear separation between results and discussion and the discussion section itself hardly discusses the results, but rather summarises what others have said about potential recording biases in marine proxies. A lot of this could be placed in the introduction instead. Finally, there are numerous spelling and style errors. I have indicated some in the line-by-line comments below, but I recommend that the authors thoroughly proofread a revised version.

AC: The results and discussion parts are significantly restructured and edited. The manuscript has been thoroughly checked and proofread for spelling and style errors.

Author's changes in manuscript:

Land Surface temperature changes

The annual mean SAT of the LGMctl run is 5.9 °C colder than the modelled PI climate. Most regions show a rather uniform cooling for all of the model runs, in the range of −4 to −8 °C (Fig. 5).
Alaska is the only region that is warmer than average in the model, because of the increased distance to the sea-ice covered Arctic Ocean regions during the LGM, possibly due to the glacial sea level drop of approximately 120 m (Werner et al., 2016). The cold regions are mostly adjacent to the FIS and LIS, e.g., most of central North America and central Europe. There is another region of exceptional cooling located in northern Siberia, where the temperature decreased down to −15 °C. The results agree with the ensemble-mean LGM temperature change of the fully coupled climate simulations within the CMIP5/PMIP3 and PMIP2 projects (Braconnot et al., 2007; Harrison et al., 2014).

For a comparison with proxy data, the model results have been compared with the LGM continental temperature reconstruction by Bartlein et al. (2011), which is mainly based on plant macrofossil and subfossil pollen data. The highest correlation coefficient and lowest deviations are found for the Tarasov_LIS ice-sheet reconstruction (R = 0.41, RMSE = 5.0‰) and the lowest correlation coefficient and largest deviations for the Gowan_NAIS (R = 0.29, RMSE = 5.4‰) (Fig. 5, Table 3). The core locations with the largest model-data deviations are located near the boundaries of the FIS and LIS. These deviations might simply be due to the coarse model resolution of 3.75° × 3.75°, which cannot resolve small-scale temperature changes close to the glaciated areas in sufficient detail. Overall, the model results agree well with the reconstructed LGM-PI temperature changes at the different core points (Fig. 5).

Mean temperature of coldest and warmest month

According to Bartlein et al. (2011), July temperature in the northern hemisphere (December in the southern hemisphere) has been combined with reconstructions of the mean temperature of the warmest month (MTWA). Similarly, December temperature in the northern hemisphere (July in the southern hemisphere) has been combined with reconstructions of the mean temperature of the coldest month (MTCO; Bartlein et al., 2011).

During the LGM, Africa was warmer (1 to 4 °C) than today in the reconstruction of MTWA (Fig. S3, see also Wu et al. 2007). A few sites in the northern hemisphere, especially in Alaska, show reconstructions of warmer conditions in the seasonal temperature variable MTWA, and conditions similar to or slightly warmer than today are registered chiefly in MTCO (Fig. S3) (Bartlein et al., 2011). The LIS was large enough to cause a reorganization of the atmospheric circulation pattern. This reorganization could have resulted in a more southerly landward flow into Alaska, which would have produced advective warming in this region year-round (Bartlein et al., 2011). In general, the summer temperature changes as represented by MTWA are smaller than the winter temperature changes as represented by MTCO (Fig. S3, see also Bartlein et al., 2011).

For a comparison with proxy data, the warmest and coldest months of the model results have been compared with the seasonal temperature variables MTWA and MTCO. For MTWA, the highest correlation coefficient and lowest deviations are found for the LGMctl (R = 0.50, RMSE = 6.5‰) and Ice6g_LIS (R = 0.50, RMSE = 6.5‰) ice-sheet reconstructions, and the lowest correlation coefficient and largest deviations for the Gowan_NAIS (R = 0.44, RMSE = 6.3‰) (Fig. 5).
Similarly, for MTCO, the highest correlation coefficient and lowest deviations are also found for the LGMctl (R = 0.46) and Ice6g_LIS (R = 0.46), and the lowest correlation coefficient for the Gowan_NAIS (R = 0.43) (Table 3). Overall, the correlation coefficients for the warmest and coldest months of the model are higher than for the model annual mean (Table 3).

Sea surface temperature changes

In most of the PMIP3 models, tropical cooling is more pronounced than in the MARGO reconstruction. The models and MARGO both show a more uniform LGM cooling in the Indian Ocean than in the Pacific and Atlantic (Fig. 2, see also Wang et al., 2013). The greatest mismatch between data and model is located in the North Atlantic and the Northwestern Pacific. All of the models produce a significant cooling of 4-6 °C during the LGM in the Northwestern Pacific, whereas a few MARGO records indicate that there was warming (2 °C or higher). The large discrepancy between data and model is likely caused by the large uncertainties in the reconstructed data as well as model deficiencies.

In this study, we analyze simulations from the PMIP3 model experiment to test the capability of current models to simulate the LGM SSTs and land surface temperatures, with particular attention to model-data comparisons. Therefore, the anomaly of the LGM and PI simulated SST fields of all PMIP3 models has been compared with the MARGO data-set and also with the four individual proxy-based SSTs separately (Fig. 2, S4-S5). However, all of the considered PMIP3 models underestimate the temperature anomaly when compared to the proxy-inferred temperature data. A large mismatch and low correlation are found for most of the cases (listed in Table S3). Overall, the anomalies of the LGM and PI SST fields simulated by the PMIP3 models and by the LIS simulation runs are comparable. Because of space limitations, all individual model anomalies and their agreement/disagreement with the proxy-derived SST trends are shown in the supplementary material (Figs. S4-5). Instead, the ensemble median is shown here (Fig. 2a), which typically displays the common signal; in this case, it is the mean value of the fourth- and fifth-ranked ensemble members out of the eight models ordered by value. Among all models, IPSL-CM5A-LR shows the highest correlation and lowest RMSE with the MARGO data-set (Fig. 2b; Table S3). Since the results of the PMIP3 runs show large mismatches, we have also compared them with the four MARGO proxies and their seasonality. The seasonality in all models has been compared with the individual proxies (listed in Table S4). In this case, the correlation between the PMIP3 models and the proxies increases significantly. Overall, the agreement between the PMIP3 models and the SST reconstructions is similar to that of our COSMOS simulations.
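The ensemble median described above (for eight models, the mean of the fourth- and fifth-ranked values at every grid cell) is exactly what a standard median over the model axis computes; a minimal NumPy sketch with made-up anomaly fields:

```python
# Ensemble median of eight model anomaly fields: with an even-sized ensemble,
# np.median returns the mean of the two middle-ranked values per grid cell.
import numpy as np

rng = np.random.default_rng(0)
# anomalies: shape (n_models, n_lat, n_lon), made-up LGM-minus-PI SST anomalies
anomalies = rng.normal(loc=-3.0, scale=1.5, size=(8, 90, 180))

ensemble_median = np.median(anomalies, axis=0)   # mean of 4th and 5th ranked members
print(ensemble_median.shape)                     # (90, 180)
```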
Land Surface temperature changes

The annual mean land surface temperature of the PMIP3 LGM climate is on average 4.5 °C colder than the PI climate, and the CNRM-CM5 model is comparatively warmer (annual mean temperature −2.6 °C) than the other models. The PMIP3 model results have been compared with the LGM continental temperature reconstruction by Bartlein et al. (2011). The reconstructions show year-round cooling during the LGM over the continents except at a few sites in Alaska (Fig. 7, see also Bartlein et al., 2011). As for the SST reconstructions, among the eight PMIP3 models, IPSL-CM5A-LR (R = 0.27, RMSE = 3.3‰) shows the highest correlation (Table S5), although most of the models show a low correlation coefficient with the annual mean reconstructed data-set. MTWA (highest R of 0.53) shows a higher correlation than MAT and MTCO (highest R of 0.27 and 0.48, respectively). Overall, the correlation between model and data is higher for MTWA and MTCO than for the model annual mean (Table S5).

Uncertainties of the land surface temperature reconstructions

The analysis shows that, in general, the changes in land surface temperature in the model and in the proxy-inferred temperature data follow a similar pattern and are in good agreement, although there are some mismatches at some core locations (Fig. 5). The simulated global-mean land surface temperature at the LGM, 5.9 °C colder than PI, is comparable with the most recent reconstruction-based estimate of the global-mean temperature anomaly of 4.0 ± 0.8 °C (Annan and Hargreaves, 2013; Shakun et al., 2012), with the global-mean cooling of 3.6 to 5.7 °C in PMIP2 (Braconnot et al., 2007), and with a global-mean cooling ranging from 4.41 to 5 °C in five PMIP3 models (Braconnot and Kageyama, 2015). It is also comparable with the LGM-PI simulation of CCSM3, which revealed a global cooling of 4.5 °C with an amplification of this cooling at high latitudes (Otto-Bliesner et al., 2006). Hence, the simulated estimate of this study appears reasonable, being slightly colder than the reconstructions and well within the range of previous simulations. Overall, the simulated seasonal temperature variation over land is larger than that over the ocean (Annan and Hargreaves, 2015).

Seasonal biases

The interpretation of our data-model comparison suggests that Mg/Ca proxies are winter biased, while foraminifera, dinoflagellates, and alkenones are summer biased. We find similar results using both the COSMOS model LIS simulations and the PMIP3 simulations, which indicates that the deviation between model outputs and proxy data does not seem to be due to a specific climate model but is a robust feature of LGM climate simulations with coupled climate models. One hypothesis is that the proxies correctly capture local temperature trends that the models are not able to simulate. A possible way to test this effect is to use a new high-resolution ocean model, with resolution of up to 7 km in deep-water formation areas and in coastal areas that are highly sensitive to external forcing (Scholz et al., 2013), and apply this model to the LGM.
The seasonal contrast of temperature or annual amplitude of temperature is a source of uncertainty for planktonic foraminifera proxies.The seasonality of the temperature signal depends on thermal diffusion and stratification in the upper water layer.In the open ocean, particularly in modern offshore of the North Atlantic, the weak stratification advances high thermal inertia in a thick mixed layer, which creates low thermal amplitude between winter and summer.Because of this, most open ocean proxies C12 commonly give a mixed temperature signal which does not allow seasonal temperatures to be easily differentiated (de Vernal et al., 2006).On the other hand, the timing of the maximum foraminiferal production during the LGM did not occur at the same time of the year as present day.The change in the timing of the maximum production of planktonic foraminifera could lead to a bias in reconstructed paleotemperature if the seasonality change is not taken into account (Fraile et al., 2009).Due to the temperature sensitivity of the foraminifera, during the LGM, the most significant production occurred during warmer seasons of the year (Fraile et al., 2009). Proxy-recording organisms would likely try to hold their preferred ecological conditions by changing their blooming seasons in a way which mitigates the climate changes (Mix, 1987).Planktonic organisms have several limiting factors such as temperature, nutrient, and light-availability.When those factors alter oppositely, the organisms try to change their living season without modifying their basic ecological requirements.For example, nutrient or food availability might shift towards autumn or spring so that living season might change accordingly.To explain such changes, more research using complex ecosystem models of different planktonic organisms need to be performed, such as ecophysiological models, used to reproduce the growth of planktonic foraminifera (Lombard et al., 2011). Foraminiferal Mg/Ca is influenced by different parameters like pH, salinity, and dissolution (Glacial Ocean Atlas, 2017).Mg/Ca measurements in surface dwelling foraminifera from the central North Atlantic tend to represent slightly colder than PI conditions in the corresponding water layers (de Vernal et al., 2006).Fraile ( 2008) and Fraile et al. ( 2009) using a planktonic foraminifera model analyzing the seasonality of the foraminifera showed that the organisms usually record a weaker temperature signal when the global temperature change is applied.By decreasing the global temperature by 2 oC and 6 oC, they did a model sensitivity study and observed a shift in abundance of the maximum planktonic foraminifera towards warmer seasons, which would reduce the temperature trend recorded in Mg/Ca (Fraile et al., 2009). C13 According to Ternois et al. 
(1996), seasonal variability in alkenones biological production should be considered if they are used as a proxy to reconstruct temperature.There is a possibility that the SST reconstruction based on alkenones might be biased towards warmer than average climatic conditions or might represent a summer signal if the growth season of alkenone-producing organisms shifted towards the summer (de Vernal et al., 2006).Records of alkenone-based reconstructions of SSTs have been analyzed accounting for shifts in the seasonality of alkenone production (Haug et al., 2005).Therefore, in the North Atlantic, alkenone production might be more concentrated in summer months during the LGM than at present, which is consistent with our LGMctl run.In the high-latitude, the timing of maximum production of alkenone could conceivably occur during the summer, rather than during the autumn or spring (Antoine et al., 1996;de Vernal et al., 2006).The degree of seasonal bias might be spatially dependent since the biogeographical characteristics of the ocean differ from one place to another (Prahl et al., 2010).As summarized by Lorenz et al. (2006), the maximum production of coccolithophorids occurs in summer in high latitudes (Baumann et al., 1997(Baumann et al., , 2000)), which agrees with the idea that UK37 record summer temperature signal (Sikes et al., 1997;Prahl et al., 2010).Satellite data also agrees with the idea of summer-biased alkenone records (Iglesias-Rodriguez et al., 2002).Seasonality in phytoplankton production is commonly less pronounced in tropical and subtropical regions (Jickells et al., 1996), and alkenone-derived SST from low-latitude sites are therefore more likely to be representative for temperatures close to the annual mean values (Müller and Fischer, 2001;Kienast et al., 2012). The reconstructed LGM temperatures by dinocyst are much warmer than PI as well as much warmer than reconstructed by other proxies even after considering the best-fit SST (Fig. 3-4).One source of uncertainty in dinocyst proxies is low productivity and fluxes, particularly in the Nordic Sea, which could have resulted in over representation of transported material (de Vernal et al., 2005).The results from the seasonality are based on the model output which does not provide any diagnostic on the planktonic organisms real ecological behavior.However, they provide an oceanic regions map-C14 ping where even small changes in the ecology of planktonic organisms can have huge consequences on the reconstructed SST anomalies.It reinforces the idea that proxy organisms may be affected by ecological specificities (Leduc et al., 2010, Lohmann et al., 2013).Changes in recording season could have been caused by changes in insolation over the LGM or by related changes in the nutrient distribution and ocean temperature that the individual organisms are exposed to (Lohmann et al., 2013). 
Palaeoclimate information collected from data-model comparisons are difficult to be put into a context which goes beyond a description of observed data-model discrepancies, as both proxy reconstructions and climate models are imperfect and have many different characteristics.Proxy reconstructions are patchy and sparse, and can be affected by different local processes and proxy specificities, which are not always counted in proxy reconstructions.Usually, palaeoclimatologists tend to collect data in the regions where the signal is clear and where sedimentation allows it.Therefore, there is a possibility of overestimation of the SST signals due to selection of the sites.Regional dynamics and spatially heterogenous patterns provide an additional uncertainty for our proxy data and model comparison. For our model-data comparison, it is worth to mention that climate models have limitations in spatial resolution and are unable to represent the full complexity of the physical Earth System.The proxy records used in most of the studies are more often located in coastal areas, and climate models do not well represent these regions because of their low resolution (Lohmann et al., 2013).Coastal areas may be particularly sensitive to external forcing, as their thermal inertia is lower than the open ocean due to landocean interactions and a shallower thermocline.Moreover, the representation of mixed layer dynamics may be essential to improve climate simulations and its agreement with palaeoceanographic reconstructions. Comment R1.6 Line by line comments L8: 'abrupt'.Reconsider wording What is meant here?C15 AC: Here, abrupt mean a large or steep change.The presence of vast Northern Hemisphere ice-sheets during the LGM caused a large changes in surface topography. Author's changes in manuscript: No change. Comment R1.7 L11-12: reword ' . ..pollen and plant macrofossil based. ..' AC: This term has been revised.The annual temperature is mainly based on pollen data and sites with macrofossils data are very few for the LGM.That's why the term "plant macrofossils" is avoided. Comment R1.8 L16: it is the simulation using the Tarasov reconstruction that shows the highest correlation, not the reconstruction. AC: This sentence has been revised. Author's changes in manuscript: Among the six LIS reconstructions, simulation using Tarasov's LIS reconstruction shows the highest correlation with reconstructed terrestrial and SST.Author's changes in manuscript: uncertainty of variables Comment R1.11 L54: please add a sentence or two to explain the link between the beginning and end of this paragraph.Importantly, Jonkers and Kucera [Jonkers and Kučera, 2017] -and before them several others [e.g.Mix, 1987;Schmidt, 1999;Schmidt and Mulitza, 2002;Skinner and Elderfield, 2005] -showed that there is predictability in the recording bias.This is an important point as it may help to distinguish between different models and or estimates of recording depth/season. AC: This paragraph is revised and edited. 
Author's changes in manuscript: A recent study by Jonkers and Kučera (2017) analyzed core top stable oxygen isotope (δ18O) values of different planktonic foraminifera species.They found that planktonic foraminifera ecology exerts a significant influence on the proxy signal since bloom seasons of planktonic foraminifera vary at different locations and that there is predictability in the recording bias (Mix, 1987;Schmidt, 1999;Schmidt and Mulitza, 2002;Skinner and Elderfield, 2005;Jonkers and Kučera, 2017).Seasonality of planktonic foraminifera changes with temperature to minimize the environmental change that they experience.et al., 2009).Different types of records provide various information about ocean surface conditions: for example, alkenone data only give a measure of mean annual SST while foraminiferal assemblages can be analyzed statistically to obtain seasonal variation in SSTs (Waelbroeck et al., 2009).The MARGO dataset combines 696 individual SST reconstructions.The coverage is especially dense in the tropics, the North Atlantic and the Southern Ocean while several oceanic regions remain undersampled: for example, the subtropical gyres, especially in the Pacific Ocean (Waelbroeck et al., 2009). L256: the data is not composed of planktonic organisms, it's based on measurements of their fossil remains.Also reword 'shift in the different water columns'.L260: Coccolithophores (the alkenone-producing organisms) are phytoplankton and require light for photosynthesis.The same holds for other phytoplankton and symbiont-bearing planktonic foraminifera.183 m seems rather deep for phytoplankton.I assume that light availability is not modelled, but the authors should look into this and assess whether the inferred recording depths (e.g.L269) are consistent with the ecology of the proxy carriers.There is also a lot of discussion in these sections.L270-274: this sentence begins and ends with different statements about the habitat depth of planktonic foraminifera.Please explain the difference, or discuss it.See also Rebotim et al. [2017] for a discussion on the variability of depth habitat. AC: Considering habitat depth of the planktonic organisms make our manuscript more complicated and there are many debates about habitat depth of the organisms, therefore, according to our new structure, we have removed the habitat depth analysis of proxies.So this section is no more in the manuscript. C21 Author's changes in manuscript: L255-288 is removed from the manuscript. Comment R1.23 L289-295: I disagree, if the data and the model disagree, and consistently disagree the reason is unlikely to be due to uncertainty in the data alone.Uncertainty in the data would lead to random variations around the mean value, not indicate consistent (temporal/spatial) changes.It is more likely that the mismatch is due to uncertainties/unknowns in both the data and the models.It would be good if the authors acknowledge that more. AC: Yes, I agree with this comment, the disagreement between data and model is not uncertainty in the data alone.It might be caused by misinterpreted and/or biased proxy records as well as by model deficiencies.In our case, we have compared data with different PMIP3 models and observed that the relation we found between proxyderived and modelled SSTs and land surface temperature is not model dependent.However, we have discuss about model deficiencies and uncertainties in the data in the discussion part. 
Author's changes in manuscript: See answer to the comment R1.31 Comment R1.24 L327-329: this section on sediment traps needs referencing.It is also well known that there is no uniform seasonality of planktonic foraminifera, rather seasonality varies spatially [Jonkers and Kučera, 2015;Tolderlund and Bé, 1971] and has hence likely varied in the past. AC: It from the same reference from the next sentences (Glacial Ocean Atlas, 2017).Yes, overall there is no uniform seasonality of planktonic foraminifera, rather seasonality varies spatially but in our case we found in the North Atlantic the best agreement of planktonic foraminifera for local summer. Author's changes in manuscript: reference 'Glacial Ocean Atlas, 2017' is added for the C22 sediment trap. Comment R1.25 L336-337: please be specific: uncertainty for planktonic foraminifera proxies, not the foraminifera themselves.Moreover, this not only holds for planktonic foraminifera, but for all proxy carriers with a short (< 1 year) life span [e.g. for coccolithophores that produce the alkenones Rosell-Melé and Prahl, 2013]. AC: Yes, it is uncertainty for planktonic foraminifera proxies. Comment R1.26 L344-357: so it seems that there is a pattern in the season that is preferably reflected in the UK37 ratio.Is this resolved in the model-data mismatch?Does any model yield data more consistent with such a pattern?It is this kind of analysis that is lacking from the present manuscript. AC: Yes, there is a pattern in the season that is preferably reflected in the UK37 ratio.In some part model output agree with that.Model agreement and disagreement is added to the manuscript. Comment R1.27 L364: proxies are not exposed to nutrient conditions, the organisms are. AC: It is corrected. Author's changes in manuscript: Changes in recording season could have been caused by changes in insolation over the LGM or by related changes in the nutrient distribution and ocean temperature that the individual organisms are exposed to. Comment R1.28 L377: Deuser and Ross and Anand et al used the same sediment trap time series for C23 their analysis, so this is only regionally constrained information.Crucially, one cannot infer living depth from sediment traps (perhaps the authors mean calcification depth).L380-384: this idea is hardly new, Emiliani [Emiliani, 1954;1955] already touched on this.Please include.L395: There is also observational data that shows the dampening effect of changing habitat of the proxy carrier [Ganssen and Kroon, 2000;Jonkers and Kučera, 2017]. AC: Same as comments R1.22 Author's changes in manuscript: L360-412 is removed from the manuscript. Comment R1.29 L391: it is unclear what is meant with 'in such a way'. AC: It means in a way they would likely try to hold their preferred ecological conditions by changing their blooming seasons to mitigates the climate changes.However, It is edited. Author's changes in manuscript: Proxy-recording organisms would likely try to hold their preferred ecological conditions by changing their blooming seasons in a way which mitigates the climate changes (Mix, 1987). Comment R1.30 L400: why on the contrary, I don't understand the difference.And please explain why it is important to model foraminifera growth, rather than abundance.Note also that Fraile et al used many more variables than temperature alone [Fraile et al., 2008] (in fact, more than Lombard) and see Kretschmer et al [Kretschmer et al., 2017] for an update of this model. 
AC: It is corrected.As previously discussed in the paragraph that planktonic organisms have several limiting factors such as temperature, nutrient, and light-availability.When those factors alter oppositely, the organisms try to change their living season without C24 modifying their basic ecological requirements.To explain such changes an ecosystem models can be used to reproduce the growth of planktonic foraminifera (Lombard et al., 2011) which also explain foraminifera abundance. Comment R1.31 L406-412: I think a more upfront discussion of inherent uncertainties in the model is essential and should be placed not at the end of the discussion and include more than just model resolution. AC: Discussion about potential uncertainties in the model is added in the earlier sections. Author's changes in manuscript: Different local feedbacks working in upwelling systems might complicate the SST datamodel comparison, since local cooling can occur within regions where widespread warming is found (Leduc et al., 2010b).Similarly, mismatches can be occurred due to difficulties in capturing variations in oceanic fronts in the climate models. Figure 4b shows the difference between best-fit seasonal SST and temperature recorded by proxies.In the North Atlantic, still there is a big difference between the best-fit SST and temperature recorded by proxies especially for dinoflagellates (Fig. 4b).The observed mismatch between modelled and reconstructed LGM climate evolution is might be related to the lack of representativeness of long-term temperature anomalies in climate models. The large discrepancy between data and model is likely caused by the large uncertainties in the reconstructed data as well as model deficiencies. The interpretation of our data-model comparison suggests Mg/Ca proxies are winter biased, while foraminifera, dinoflagellates, and alkenones are summer biased.We find the similar results by using the COSMOS model LIS simulations and the PMIP3 simulations indicates that the deviation between model outputs and proxy data does C25 not seem to be due to specific climate models, but because of a robust feature of LGM climate simulations with coupled climate models.One hypothesis is that proxies can therefore correctly capture local temperature trends that is not possible to simulate by the models.A possible way to test this effect is to use a new ocean model of high resolution with deep water formation areas up to 7 km and highly sensitive coastal areas to external forcing (Scholz et al., 2013) and apply this model to the LGM. Palaeoclimate information collected from data-model comparisons are difficult to be put into a context which goes beyond a description of observed data-model discrepancies, as both proxy reconstructions and climate models are imperfect and have many different characteristics.Proxy reconstructions are patchy and sparse, and can be affected by different local processes and proxy specificities, which are not always counted in proxy reconstructions.Usually, palaeoclimatologists tend to collect data in the regions where the signal is clear and where sedimentation allows it.Therefore, there is a possibility of overestimation of the SST signals due to selection of the sites.Regional dynamics and spatially heterogenous patterns provide an additional uncertainty for our proxy data and model comparison. 
For our model-data comparison, it is worth mentioning that climate models have limitations in spatial resolution and are unable to represent the full complexity of the physical Earth System. The proxy records used in most of the studies are more often located in coastal areas, and climate models do not represent these regions well because of their low resolution (Lohmann et al., 2013). Coastal areas may be particularly sensitive to external forcing, as their thermal inertia is lower than that of the open ocean due to land-ocean interactions and a shallower thermocline. Moreover, the representation of mixed-layer dynamics may be essential to improve climate simulations and their agreement with palaeoceanographic reconstructions.

AC: Sentence is modified a little.

Author's changes in manuscript: It is assumed that the SST indicators have seasonal biases.

Comment R1.33 L423-427: this fundamental mismatch between the models and the data is mentioned here for the first time. It deserves mentioning in the results and discussion. As to the question whether it is the models or the data that cause this discrepancy, it is important to note that our current understanding of proxy carriers (in particular planktonic foraminifera) is that they tend to underestimate the environmental change (see suggested references and studies cited in the manuscript). Such homeostatic behaviour only exacerbates the mismatch.

AC: This comment is taken into account and a section on data-model discrepancies is added to the discussion part.

Author's changes in manuscript: See answer to the comment R1.31

Comment R1.34 Fig. S1 is directly copied from the MARGO paper, I don't know if this is appropriate with regards to copyrights etc.

AC: We already have the permission from Nature Geoscience to reuse this figure.

Author's changes in manuscript: Fig. S1: Distribution of MARGO data points, indicating also which proxy was measured at each location (Waelbroeck et al., 2009 © Nature Geoscience).

Comment R1.35 Table 1: why is there no RMSE for the Tarasov reconstruction? Also, none of the errors have units. Similarly, the legends in the figures often lack units.

AC: The RMSE value for the Tarasov reconstruction has been added. Units for the errors and for the figure legends have been added and revised in the manuscript.

Author's changes in manuscript: The RMSE value for Foraminifera is 2.65‰, MgCa is 5.90‰, Dinos is 6.64‰ and UK37 is 3.44‰. Units for errors are added in Figure 5 and Tables 1-3 and S3-S5. Units for legends are added to all the figures. Please also note the supplement to this comment: https://www.clim-past-discuss.net/cp-2018-9/cp-2018-9-AC4-supplement.pdf

Comment R1.9 L33: Project instead of Projection

AC: This term has been corrected.

Author's changes in manuscript: Paleoclimate Modeling Intercomparison Project (PMIP)

Comment R1.10 L40: please be more specific, uncertainty of what?

AC: Uncertainty of variables due to a large spread of reconstructed LIS with fundamentally different geometries.

Figure caption: (a) The circles localize the foraminifera, MgCa, dinoflagellates and UK37 records, and the color fill of the circles represents the seasonal/annual mean in which the reconstruction agrees best with the model. (b) The color fill of the circles shows the anomalies between proxies and the temperature trend (in °C) recorded by the corresponding seasonal/annual mean shown in (a) at the sample locations.
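To make the comparison behind panels (a) and (b) concrete, the following sketch shows one way the best-fit season and the resulting anomaly could be determined per core site. It is only an illustration of the procedure described in the caption, not the authors' code: the site names, proxy temperatures and modelled seasonal means are invented placeholders.

# Hypothetical sketch: for each site, pick the modelled seasonal/annual mean SST
# that agrees best with the proxy reconstruction, and record the anomaly.
def best_fit_season(proxy_sst, model_sst_by_season):
    """Return (season, anomaly) for the season whose modelled SST is closest
    to the proxy value; anomaly = proxy - model."""
    season = min(model_sst_by_season,
                 key=lambda s: abs(proxy_sst - model_sst_by_season[s]))
    return season, proxy_sst - model_sst_by_season[season]

# Invented example sites (values in degrees C).
sites = {
    "NA-01 (foraminifera)": (8.4, {"DJF": 4.1, "MAM": 6.0, "JJA": 9.3, "SON": 7.2, "annual": 6.7}),
    "TR-07 (alkenones)": (24.9, {"DJF": 23.5, "MAM": 24.2, "JJA": 26.1, "SON": 25.3, "annual": 24.8}),
}

for name, (proxy, model) in sites.items():
    season, anomaly = best_fit_season(proxy, model)
    print(f"{name}: best fit = {season}, proxy - model = {anomaly:+.1f} C")

In this toy example, both sites would plot as summer-biased (JJA), mirroring the kind of pattern discussed for foraminifera and alkenones in the responses above.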
Habitat tracking can lead to a reduction in the amplitude of this recorded environmental change and enable improved reconstructions and data-model comparison (Jonkers and Kučera, 2017).

Comment: what exactly is compared, the gridded products of the reconstructions or the individual sites? If the latter, why is the gridding explained and how were the data compared precisely?

AC: The individual sites of the reconstructions are compared with the model results, but the gridding is described as an explanation of how the dataset is organized. The individual sites of the temperature variables (annual mean temperature, MTWA and MTCO) of Bartlein et al. (2011) are compared with the LIS reconstructions and PMIP3 models. However, the description of the gridding is removed and the paragraph is edited.

The model results of our study are compared with the LGM continental temperature reconstruction by Bartlein et al. (2011), which is mainly based on subfossil pollen data. This dataset includes reconstructions of different temperature variables: mean temperature of the warmest month (MTWA), mean temperature of the coldest month (MTCO) and mean annual temperature (MAT) (Bartlein et al., 2011). The dataset considers a quantified estimate of combined uncertainties arising from the age scale uncertainties, data resolution and sampling, calibration model uncertainty, and analytical uncertainties (Bartlein et al., 2011). The individual sites of the temperature variables (annual mean temperature, MTWA and MTCO) of Bartlein et al. (2011) are compared with the LIS reconstructions of our model. The Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface (MARGO) project in 2009 compiled and analyzed an updated synthesis of seasonal sea surface temperatures (SSTs) during the LGM (Kucera et al., 2005) based on all prevalent microfossil-based (planktonic foraminifera, diatoms, dinoflagellates and radiolarian abundances) and geochemical (alkenones and planktonic foraminifera Mg/Ca) palaeothermometers from deep-sea sediments (Waelbroeck et al., 2009).

L95: is Zhang et al. 2013 appropriate for the PMIP3 protocol?

AC: Yes, Zhang et al. 2013 used external forcing and boundary conditions according to the PMIP3 protocol for the LGM. The respective boundary conditions for the LGM comprise greenhouse gas concentrations (CO2 = 185 ppm; CH4 = 350 ppb; N2O = 200 ppb), orbital forcing, land surface topography, run-off routes, and ocean bathymetry according to the PMIP3 ice sheet reconstruction.

Table S1: LIS reconstructions used in this study.
2018-12-12T17:04:34.133Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "b886fec9ac0e265a438462be6fb10a2cbe99f6a5", "oa_license": "CCBY", "oa_url": "https://www.clim-past-discuss.net/cp-2018-9/cp-2018-9.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "b1206919307dc67681d28a4dd0fc6d5d50076bf2", "s2fieldsofstudy": [ "Environmental Science", "Geology" ], "extfieldsofstudy": [ "Geology" ] }
255657896
pes2o/s2orc
v3-fos-license
Assessment of WHO core drug use indicators at a tertiary care Institute of National importance in India Rational prescribing of medicines is an important aspect of drug prescribing which helps in safe and efficacious and cost-effective drug treatment for patients. WHO Prescription indicators are intended to evaluate the services provided to the population concerning the rational use of medicines. The study aims to study prescription practices and rational use of medicines in the department of Internal medicine, using WHO prescribing indicators in a tertiary care teaching institute of national importance. A total of 50 prescriptions were digitally photographed and analysed for prescription practices and rational drug use, using standard WHO core prescribing indicators. A total of 301 drugs with multiple and diverse diagnoses were used. Statistical analysis was done using SPSS 22 version. The average number of drugs per prescription was 3.48%. It was found that only 13.79% of prescriptions have generic names, whereas 27.58% of patient encounters had at least one drug from the National List of Essential Medicine, 6.8% of prescriptions have antibiotics and 0.7% of prescriptions were injections. The number of prescriptions with fixed drug combinations was 27.55%. Indicators such as percentage of the National List of Essential Medicine, fixed drug combinations and prescribing with a generic name are used. Hence, we will suggest regular prescription audit practices and conducting CMEs and training workshops for clinicians for the rational use of medicines in all healthcare settings to succeed in the rational use of medicine. Background: A drug prescription pattern audit is an important aspect of patient care, which is a part of the clinical audit which serves as a measure of the quality of care provided to the patient and helps in the improvement of patient care by changing or implementing the needed changes [1].This is also an integral part of medical education which helps clinicians in improving prescription quality and ultimately better patient care.Many recent research studies recommended constant evaluation of the quality of prescribing patterns.Prescription error is an unacceptable medication error that is very common in many hospitals worldwide.Prescription pattern audit studies are highly useful tools in assessing the prescribing pattern and dispensing of medicines prevalent in a particular area.The main aim of these studies is to facilitate the rational use of medicines [2].Currently several reasons such as an increase in new drug marketing, wide variations in the pattern of prescription and consumption of drugs, growing concern about delayed adverse effects, and cost of drugs all enhance the importance of prescribing patterns audit [3].Currently, rational use of medicines is an important requirement due for many reasons, among them; antimicrobial resistance is one of the important concerns.Irrational prescribing or overuse of medicines is an arising major problem worldwide.According to World Health Organization (WHO), more than half of all medicines prescribed, are dispensed inappropriately.Overuse, under use or misuse of medicines, will lead to drug resistance, cost of treatment and duration of treatment increases which ultimately leads to wastage of resources and widespread health hazards.WHO defines the rational use of medicines (RUM) as "Patients should receive medications appropriate to their clinical needs, in doses that meet their requirements, for an adequate period, and at the 
lowest cost to them and their community [4].Presently, the WHO and the National Health Policy of India, have focused on prescribing drugs by generic names from the list of essential medicines, because prescribing with the generic name is also one of the major issues to fix in India and many other countries [5][6].This type of study is imperative to bridge the areas such as rational use of drugs, pharmaco economics, antimicrobial stewardship and evidencebased medicine. The WHO developed core medication use indicators consisting of prescription indicators intended with an aim to assess the services provided to the population concerning medications [7].These are universally useful for any setting in the world in any nation which is highly standardized and are recommended for inclusion in any drug usage study using these indicators.Accordingly, drug use indicators provide a simple tool for quickly and reliably assessing a few critical aspects of pharmaceutical use in primary health care (8).Results with these indicators point towards the particular drug use issues that need examination in more detail [8].WHO core prescription indicators allow for assessing the therapeutic actions taken in similar institutions, enabling subsequent comparison of parameters between them, and to evaluate the population's medication needs and determining the most commonly used medications in a given locality, to identify the prescription profile and quality of services offered to the population by the hospital.This study was designed to study the drug prescribing pattern at the medical outpatient department (OP) at our tertiary care centre which is a teaching medical college cum hospital, by using the following prescription indicators [8]: The WHO prescribing indicators include: [1] The average number of drugs per prescription. [2] Percentage of drugs prescribed by generic name Evaluation of all the prescribing indicators irrespective of the diagnosis in a particular department like in our study would enable capturing a wider picture of the current trends rather than evaluating only some particular group of drugs like anti-epileptics, antimicrobials, anti asthmatics and anti-hypertensive drugs [5].Therefore, it is of interest to evaluate the rational use of medicines in the department of Internal medicine, at our tertiary care teaching institute of national importance depending on the WHO prescribing indicators. Methodology: The present cross-sectional, OP-based study was carried out in a tertiary care teaching hospital which is an institute of national importance, in Central India, after taking ethical clearance from the institutional human ethics committee.The study was carried out for one month as a pilot study at the All India Institute of Medical Sciences, Bhopal.A total of 66 outpatient prescriptions of the internal medicine department were digitally photographed at the pharmacy of the hospital, out of which 16 prescriptions were incomplete.Prescriptions of patients attending Internal Medicine OPD and treated on an outpatient basis for their ailments were included irrespective of the comorbidities.Data were collected on the demographic details of age, gender, diagnosis, and the treatment prescribed which were mentioned in the prescription. 
All the prescriptions were analysed based on the following parameters: Results: The data were entered in Microsoft Excel 2010 and analysed using SPSS 22 software for frequency distributions and percentages to assess the prescribing indicators.A total of 50 prescriptions were analysed over one month.The demographic distribution of patients mirrored a rising trend with increasing age as the higher proportion of patients were 30-50 years of age.Both males and females were almost equal in proportion.There were multiple and diverse diagnoses.Hence, we categorized it into communicable and noncommunicable diseases and the majority had non-communicable diseases (Table 5).It was found that a total number of 301 drug products had been prescribed in the 50 patient encounters and thus, the average number of drugs per prescription was 3.48% and the standard deviation was 1.32.Moreover, the median number of drugs per prescription was 4. Overall, the study revealed a higher value for this indicator than the standard reference (Table 1).It was found that 13.79% of prescriptions have generic names, whereas 27.58% of patient encounters had at least one drug from the national list of essential medicines list (NLEM 2015).It was evident that 6.08% of prescriptions have antibiotics and around 0.67% have been prescribed as injections. The number of prescriptions with fixed drug combinations was 27.55%.Among the prescriptions analysed for FDCs composition, it was found that the total number of FDCs having 2 drugs was 53%, the three-drug combination was 14%, the four-drug combination was 4%, five drug combination was 8% and more than 5 drugs combinations was 3% [Table 3].The most common one being prescribed was metformin plus glimepiride for type 2 diabetes followed by pantoprazole plus domperidone for gastritis. Prescriptions analyzed for the number of drugs per prescription showed that patient encounters with two drugs were (18%), three drugs (20%) and four drugs (20%) accounting for a total of prescriptions falling under either of these three categories, 14% with 5 drugs and 4% with six drugs and 14% with seven drugs respectively [Table 4].The most highly prescribed antimicrobial agent was amoxicillin-clavulanic acid, anti-helminthic was albendazole, anti-fungal was itraconazole followed by ketoconazole.The most commonly prescribed anti-malarial drugs combination was the artemether-lumefantrine combination, which was not approved for use in central India, where artesunate plus sulfadoxine-pyrimethamine are recommended [9].The most common indication for antibiotic use was found to be a variety of respiratory and urinary tract infections.The most common drug prescribed for acute urinary tract infection was nitrofurantoin [Table 2].Discussion: Core drug prescribing indicators measure the prescribing practices and performance of healthcare providers concerning the rational use of medicines.The core prescribing indicators for the prescriptions in the department of internal medicine were assessed in the study institute based on a sample of 50 patient encounters that took place at the OPD in the dept. 
of Internal medicine.The data that were collected prospectively by analysing the prescriptions demonstrated that the average number of drugs prescribed per encounter was 3.48%.Comparison to the standard range advocated by the WHO for this indicator which estimates the degree of polypharmacy revealed that the measured average was much higher than the reference range of 1.6-1.8which was considered ideal [10].The same was seen in FDC drugs, where a high percentage of fixed drug combinations were prescribed in addition to the use of a combination of different drugs for a single indication in one patient encounter.Some of the Indian studies which were conducted using the WHO core prescribing indicators have shown similar results which were unlike our results, where they had mentioned 2.955%, 3.76 % and 4.98% respectively [11][12][13]. The high average number of drug products per prescription exceeding the WHO reference range demonstrates that a high degree of polypharmacy is prevalent in our centre which might be due to the high prevalence of non-communicable metabolic diseases such as hypertension, diabetes, and coronary vascular diseases and dyslipidaemia which are often coexistent contributing to the need for management of more than one disease entity in a single patient simultaneously (14).India is a major country suffering from the burden of diabetes globally.The prevalence of diabetes in adults aged 20 years or older in India increased from 5•5% in 1990 to 7•7% in 2016 (15).In our study, we encountered the same fact that a high proportion of prescriptions had the diagnosis of non-communicable disease with diabetes ranking highest.In this type of patient prescribing FDCs is a rationale, due to the increasing requirement of drugs in patients with more than one disease [16][17].But recently unfortunately 424 FDCs were banned due to inappropriate combinations, so prescribers should be vigilant about prescribing rationally in FDCs.In our study, the number of prescriptions with fixed drug combinations was accounting for 27.55%.In a similar study, they were encountered to be 32.57%[11]. In the present study, only 27.58% of prescriptions were from the current list of essential medicines (NLEM 2015).This could be due to the lack of sensitization of the physicians and the lack of rules being enforced to mandate prescribing from the essential drugs list.Around 6.08% of prescriptions have antibiotics and a very less percentage (0.7%) have been prescribed as injections, whereas the recommended range by WHO was 13.4-24.1%,which will show the rationale for prescribing antibiotics and injections in the current study centre.In the present study, only 13.79% prescribed medicines with a generic name.Previous studies in a tertiary care teaching hospital found that almost 100% of prescriptions were with generic names.Other studies of the western part of India had similar observations to our study where only 0.05 % of the drugs out of 1842 products were prescribed in the generic name [11,18]. 
Results of a spate of similar studies have shown that the higher the doctor's education and training experience, the proportion of drugs they prescribed by generic names showed a decline, and attitudinal differences have been seen in physicians in low-and middleincome countries compared to those in high-income countries [19][20].Hence, frequent clinical prescription audits along with training on good prescribing practices to clinicians improve the quality of prescribing practice [21][22].The most common reason for the low percentage of generic prescribing could be due to repeated and effective promotion of the branded products by pharmaceutical companies and in certain instances, clinicians are forced to agree to the insistence of patients demanding the latest medicines for ©Biomedical Informatics (2022) Bioinformation 18(10): 888-893 (2022) 892 treatment, and the presumed belief among a subset of prescribing physicians that the quality differences between generic and brand drugs could adversely affect the therapeutic outcomes.Such an opinion could affect the prescribing practice of generic drugs and leads to confusion among people.Sometimes the pharmaceutical industries play an important role in branded drug prescription, by offering financial aid to prescribers like free foreign visits.Previous studies have also shown that prescribing with the generic name was more in public centres in comparison to that in private sector hospitals [23].We have to increase awareness of generic prescribing, considering the burden due to the high cost of treatment on the public by the practice of brand name prescribing. Another study on the cost differences in prescribing generic vs. brand name prescribing in chronic disease patients concluded that all generics were more than 40% cheaper, per defined daily dose per month than the brand version [24].In low-economic countries, generic prescribing is much more helpful to the public.This practice can be increased by an integrated approach of training the medical students who are future prescribers about the pharmacoeconomic significance in their routine pharmacology study course, in addition to conducting regular continuous medical education programs (CME) for clinicians with the focus of alleviation of their doubts on quality or bioequivalence regarding the use of generic medicines.Governments should also ensure quality control of generic medicine as a part of an ongoing exercise, routinely conducted by the US FDA.A variety of strategies have been recommended by experts to overcome the barriers to genetic prescribing and the most vital of these include enforcing statutory obligations, setting clear guidelines for generic prescribing and legally de-incentivizing prescribing by propriety name [25][26][27][28][29].A major limitation of our study the number of prescriptions.This study implies the need for implementing interventions such as continuous medical education programs and workshops to improve awareness of rational prescribing among the medical fraternity.As our study has been conducted in a government institute of national importance, the pitfalls that we found in our prescription practices should be improved for the benefit of the public. 
Conclusion: With this, we conclude that our study of the prescribing patterns of drug use by using WHO core prescribing indicators has clearly shown the prescribing practices for essential medicine list, fixed drug combinations and generic prescribing were injudicious and irrational.A regular trend of poly pharmacy was found and inappropriate use of multivitamins was seen.Therefore, we conclude that frequent prescription audit studies provide a bridge between areas like rational use of drugs, pharmaco economics, evidence-based medicine, pharmaco genomics and pharmaco vigilance.Hence, we suggest appropriate measures like teaching pharmaco economics, rational prescribing to medical students during their undergraduate level only, regular CMEs and training workshops for clinicians on these two issues particularly on generic prescribing should be implemented by policymakers and administrators to reduce prescribing with a brand name, irrational fixed drug formulations, and injudicious multivitamin prescription. Encouraging clinicians to practice the prescribing of medicines from the list of essential medicines is a must.The administrative team and policymakers should implement all these essential needs to ensure rational and safe prescribing. [ 3 ] 5 ] Percentage of prescriptions containing antimicrobial agents (antibiotics)[4] Percentage of injections per prescription [Percentage of drugs prescribed from the EML. The average number of drugs per prescription: Average diagnosis, investigations, correct dose and dosage, duration of treatment, follow-up advice, referral details, do's and don'ts, legible signature, and medical council registration number). , calculated by dividing the total number of different drug products prescribed, by the number of encounters surveyed.Irrespective of whether the patient received the drugs or not.[2] Percentage of drugs prescribed by generic name:Percentage, calculated by dividing the number of drugs prescribed by generic name divided by the total number of drugs prescribed, multiplied by 100.[3]Percentage of prescriptions containing antimicrobial agents (antibiotics):The percentage was calculated by dividing the number of patient encounters during which an antibiotic was prescribed, by the total number of encounters surveyed and expressed as a percentage [ Table 3 : Numbers of drugs per formulation Table 4 : Percentage distribution of number of drugs per prescription
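Since the WHO core prescribing indicators quoted above are simple ratios over the surveyed encounters, they can be computed directly once each prescription has been tabulated. The sketch below only illustrates those definitions; the example prescription records and the flags marking generic names, essential-medicines-list (EML) drugs, antibiotics and injections are invented, and the study itself used SPSS for its analysis.

# Hypothetical prescription records; in practice these would be transcribed
# from the photographed OPD prescriptions.
prescriptions = [
    {"drugs": 4, "generic": 1, "eml": 2, "antibiotic": True, "injection": False},
    {"drugs": 3, "generic": 0, "eml": 1, "antibiotic": False, "injection": False},
    {"drugs": 5, "generic": 2, "eml": 2, "antibiotic": True, "injection": True},
]

n_encounters = len(prescriptions)
total_drugs = sum(p["drugs"] for p in prescriptions)

indicators = {
    # Average number of drugs per encounter (a count, not a percentage).
    "avg_drugs_per_prescription": total_drugs / n_encounters,
    # Drugs prescribed by generic name, as % of all drugs prescribed.
    "pct_generic": 100 * sum(p["generic"] for p in prescriptions) / total_drugs,
    # Encounters with at least one antibiotic, as % of encounters surveyed.
    "pct_encounters_with_antibiotic": 100 * sum(p["antibiotic"] for p in prescriptions) / n_encounters,
    # Encounters with at least one injection, as % of encounters surveyed.
    "pct_encounters_with_injection": 100 * sum(p["injection"] for p in prescriptions) / n_encounters,
    # Drugs prescribed from the EML, as % of all drugs prescribed.
    "pct_from_eml": 100 * sum(p["eml"] for p in prescriptions) / total_drugs,
}

for name, value in indicators.items():
    print(f"{name}: {value:.2f}")

The comments make explicit that the first indicator is a plain average (e.g. 3.48 drugs per prescription) rather than a percentage, while the remaining indicators are percentages of either drugs or encounters, matching the denominators given in the definitions above.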
2023-01-12T16:26:21.907Z
2022-10-31T00:00:00.000
{ "year": 2022, "sha1": "707253d2846d7fd2a95433e0e2b62f2325050769", "oa_license": "CCBY", "oa_url": "http://www.bioinformation.net/018/97320630018888.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "a19f398e2fcfa40ac3c7406642e1ce73d9a6b0ce", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [] }
267267613
pes2o/s2orc
v3-fos-license
Psychometric assessment of the Persian adaptation of the attitudes toward seeking professional psychological help scale-short form Background and purpose Mental health disorders are a growing concern worldwide, with a significant impact on public health. Understanding attitudes toward seeking professional psychological help is essential in addressing these issues. In the Iranian context, there is a need for a reliable tool to measure these attitudes. This study aims to assess the validity and reliability of the Persian Adaptation of the Attitudes Toward Seeking Professional Psychological Help Scale-Short Form (ATSPPH-SF). Materials and methods A cross-sectional study was conducted in May 2023, utilizing a convenience sampling method with 1050 participants aged 10 to 65 years in Iran.The ATSPPH-SF questionnaire, consisting of 10 items and 2 subscales, was employed. The questionnaire underwent translation and cultural adaptation, and its validity was assessed through qualitative face and content validities. Confirmatory factor analysis (CFA) was used to evaluate construct validity. Reliability was assessed using McDonald’s omega coefficient and Cronbach’s alpha coefficient. Data collection was conducted through an online survey. Results The CFA results indicated a two-factor structure for the ATSPPH-SF, with one factor representing openness to seeking treatment for emotional problems and the other factor reflecting the value and need for seeking treatment. The model demonstrated acceptable fit indices. Both McDonald’s omega coefficient and Cronbach’s alpha coefficient suggested good internal consistency for the scale. The mean total score for the ATSPPH-SF was 21.37 (SD = 5.52), indicating the reliability and validity of the scale for the Iranian population. Conclusion This study confirms the suitability of the short-form ATSPPH-SF with 10 items and 2 subscales as a valid and reliable tool for assessing attitudes toward seeking professional psychological help in the Iranian population. With no prior appropriate instrument available, this scale fills a crucial gap. It can be employed to measure attitudes among various demographic groups, aiding in the design of targeted interventions to enhance mental health literacy and reduce the stigma associated with seeking professional psychological help in Iran. Introduction Mental health is a crucial component of overall wellbeing, impacting individuals, families, and communities across the globe [1,2].As societies evolve, understanding and addressing attitudes toward seeking professional psychological help becomes increasingly vital [3][4][5].This study is motivated by the importance of exploring such attitudes within the Iranian population, as they navigate the complex interplay between cultural influences, personal beliefs, and societal factors in their approach to mental health support [6,7]. Global context of attitudes toward seeking professional psychological help The prevalence of mental health disorders and the associated burden they impose are well-documented on a global scale.Worldwide, mental health issues account for a substantial proportion of the global disease burden and contribute to reduced quality of life, increased mortality, and decreased overall productivity.It is evident that the widespread impact of mental health disorders necessitates a comprehensive approach, including improving attitudes toward seeking professional psychological help [8][9][10]. 
Specific challenges in the Iranian context In Iran, as in many other countries, mental health issues represent a significant public health concern.Despite the increasing recognition of these issues, attitudes toward seeking professional psychological help in Iran remain an area requiring exploration and understanding.Reviewers have previously noted that limited studies have addressed this specific challenge, making it essential to delve into the attitudes and perceptions of Iranians regarding professional mental health support [11,12]. Rationale for the study To address the aforementioned gaps in knowledge, this study focuses on the adaptation and psychometric evaluation of the Attitudes Toward Seeking Professional Psychological Help Scale-Short Form (ATSPPH-SF) within the Iranian context.The ATSPPH-SF is a widely-used instrument for assessing attitudes toward seeking professional psychological help, providing insights into individuals' willingness and openness to obtaining mental health support.Given the cultural and contextual variations observed in attitudes toward mental health, this research aims to validate a tool that aligns with the Iranian cultural landscape and is tailored to the unique characteristics of the Iranian population.[13]. Adaptation of the ATSPPH-SF An essential step in ensuring that a measurement tool is fit for use in a new cultural context is its adaptation and validation.The successful adaptation of the ATSPPH-SF for use in Iran hinges on rigorous translation processes, cultural relevance, and rigorous psychometric evaluation.Through this process, we aim to provide a reliable and valid tool that can assist in assessing and improving attitudes toward seeking professional psychological help among Iranians.[14][15][16]. Bridging the knowledge gap This study seeks to bridge the knowledge gap by systematically assessing the psychometric properties of the adapted ATSPPH-SF within the Iranian general population.The investigation delves into aspects of face validity, content validity, construct validity, and reliability to ensure that the adapted tool effectively measures attitudes relevant to the Iranian context.By doing so, this research endeavors to provide a valid and reliable instrument for screening, assessment, and intervention purposes in Iranian mental health settings. In summary, this study strives to address the unique challenges faced by the Iranian population concerning attitudes toward seeking professional psychological help.By adapting and validating the ATSPPH-SF in this context, we aim to contribute to a growing body of knowledge, potentially improving mental health services and attitudes in Iran, and offering a blueprint for similar research in other culturally diverse populations. Study design and participants This This cross-sectional psychometric study aimed to assess the validity and reliability of the Iranian version of the Attitudes Toward Seeking Professional Psychological Help Scale-Short Form (ATSPPH-SF) within the general population.The study was conducted in May 2023, utilizing a convenience sampling method.The sample consisted of 1050 individuals who met the inclusion criteria, which required them to be between the ages of 10 and 65 years and to have provided informed consent to participate in the study. was considered sufficient for assessing the psychometric properties of the ATSPPH-SF within the scope of this research. 
Data collection We employed an online survey hosted on Porsline's website to gather data for this study.Porsline is a web-based platform that facilitates the creation, distribution, data collection, and analysis of surveys for researchers.Additionally, Porsline offers features that enable researchers to obtain informed consent from participants in a secure and ethical manner.We created and hosted our questionnaire on Porsline, making it accessible for participants to complete online.To enhance participation from the general population, we shared the survey link across multiple social media platforms. Instruments The data collection instruments included: 1 were derived from the results of the Confirmatory Factor Analysis (CFA) conducted during the study, and the factor loadings were based on the responses from the study participants [13]. Translation and cultural adaptation A rigorous process was employed for the translation and cultural adaptation of the ATSPPH-SF.The forwardbackward method was utilized [18].Initially, two independent experts performed separate translations of the original English version of the questionnaire into Persian.Subsequently, the translated versions were reconciled, resulting in a single Persian version of the questionnaire.An English language expert, unfamiliar with the psychology-specific content, performed a back-translation into English, and the English back-translation was compared to the original English version.Finally, the English translation was re-translated into Persian by two psychology specialists proficient in the English language.The questionnaire's validity and reliability were thoroughly assessed during this process. Validation The questionnaire utilized in this study, the Attitudes Toward Seeking Professional Psychological Help Scale-Short Form (ATSPPH-SF), is a standardized instrument with established reliability and validity [19].The validation process for this questionnaire involved an assessment of its qualitative face and content validity, which were crucial for ensuring its cultural appropriateness and Qualitative Face Validity: To evaluate qualitative face validity, the questionnaire underwent review by a panel of 10 individuals who represented the target population.This panel assessed the questionnaire items in terms of ambiguity, relevance, suitability, and question difficulty.Their feedback was instrumental in identifying areas for improvement and enhancing the clarity and cultural relevance of the questionnaire.Subsequent modifications were implemented based on their valuable insights. Qualitative Content Validity: The assessment of qualitative content validity was conducted by submitting the questionnaire to 13 specialists in the fields of public health and health education.These experts conducted a comprehensive evaluation, taking into account various attributes, including grammar, word choice, item importance, the time required to respond to each question, item placement, and other relevant factors.Their expertise contributed significantly to refining the questionnaire's content and ensuring it met the highest standards of quality. This iterative process of review and feedback, involving both members of the target population and subject matter experts, allowed us to enhance the questionnaire's quality.It ensured that the questionnaire was culturally appropriate, linguistically precise, and retained its core meaning and structure from the original scale. 
The rigorous validation process underscores the questionnaire's suitability for our study, taking into account the cultural diversity of the Iranian population.It also demonstrates our commitment to obtaining accurate and reliable data for our research. Confirmatory factor analysis (CFA) The study employed CFA to evaluate construct validity.Prior to CFA, the data were examined for outliers using Mahalanobis statistics.Normality was assessed using skewness and kurtosis.CFA was conducted using AMOS version 24 software.Items with weak internal consistency were excluded from the questionnaire to obtain an acceptable model.Items with factor loadings lower than 0.3 were removed to achieve an acceptable final model [20]. Reliability assessment The internal consistency of the ATSPPH-SF and its individual attributes was assessed using McDonald's omega coefficient and Cronbach's alpha coefficient.McDonald's omega coefficient was calculated using SPSS version 24 software, as it provides a more precise estimate of internal consistency than Cronbach's alpha coefficient [25].We considered reliability coefficients above 0.70 as acceptable, aligning with the recognized standards for developing a new measure [26].Moreover, the minimum criterion for the internal reliability of the questionnaire was set at a Cronbach's alpha coefficient of 0.60 [27].It's important to note that lower values of McDonald's omega coefficient and Cronbach's alpha coefficient may be observed for attributes with a smaller number of items. This comprehensive reliability assessment underscores the internal consistency and stability of the ATSPPH-SF and its individual attributes.It provides confidence in the reliability of this instrument, as the majority of attributes met or exceeded the acceptable threshold, ensuring that it is well-suited for assessing attitudes toward seeking professional psychological help in our study. A summary of the modifications made to the ATSPPH-SF is presented in Fig. 1. Demographic characteristics A summary of the demographic characteristics of the study participants is presented in Table 2. Of the 1050 participants, 57.4% were female, with 42.6% being male.The participants' age had an average (standard deviation) of 29.87 (7.98) years, with the youngest participant being 10 years old and the oldest being 65 years old.In terms of education, 29.4% had a diploma or lower qualifications, 49.7% held an associate or bachelor's degree, and 20.9% had a master's or Ph.D. degree.Family members' education levels varied, with 73.9% of fathers having a diploma or lower qualifications, 20.0% with an associate or bachelor's degree, and 6.1% with a master's or Ph.D. degree.For mothers, 82.1% had a diploma or lower qualifications, 14.7% held an associate or bachelor's degree, and 3.2% had a master's or Ph.D. degree.Economic status distribution showed 13.7% of participants had a weak economic status, 60.3% had an average economic status, and 26% had a good economic status.The mean (standard deviation) age of the participants was 29.87 (7.98) years, and the average number of family members was 4.75 (1.78). 
Qualitative validity assessment (face and content validity) No question was deleted during the translation and cultural adaptation processes because the subject's statements in the original questionnaire were similar to the culture of the Iranian population.During the processes of face and content validities' assessment, the questionnaire was given to 13 specialists (from the fields of psychology, and health education and promotion).As a result, four items were corrected based on their feedback.The corrections included changing the wording of some items to make them more clear and understandable (For example, we translated professional help to expert help to create the target meaning.),and modifying the grammatical structure of some sentences to make them more consistent with the Persian language.(Fig. 1). Confirmatory factor analysis The ATSPPH-SF's factor structure should have one or two factors in theory.Its items were related to one underlying factor in the study that developed the short form (Fischer and Farina, 1995), which supports a one-factor solution.However, the items were only taken from two of the original scale's four factors: Recognition of Need for Psychotherapeutic Help, and Confidence in Mental Health Practitioner (Fischer and Turner, 1970), which supports a two-factor structure. We tested these one-and two-factor structures with maximum likelihood confirmatory factor analyses (CFA).The one-factor structure, based on previous research, demonstrated poor factor loadings (with five items having factor loadings less than 0.4) and unsatisfactory fit indices (χ²/df = 252.427/25,CFI = 0.879, TLI = 0.884, RMSEA = 0.055, SRMR = 0.047).In contrast, the twofactor structure, incorporating items from the original scales "Recognition of Need for Psychotherapeutic Help" and "Confidence in Mental Health Practitioner, " revealed an acceptable fit with a good range of fit indices (χ²/ df = 3.542, RMSEA = 0.049, PCFI = 0.729, PNFI = 0.720, CFI = 0.965, TLI = 0.954, IFI = 0.965) (see Table 3 for details).In the CFA stage, one of the items (question 2) had a factor loading of less than 0.4, but we did not Reliability assessment Reliability assessments were conducted using both McDonald's omega coefficient and Cronbach's alpha coefficient for the entire ATSPPH-SF and its attributes, which include openness to seeking treatment for emotional problems and value and need in seeking treatment.The reliability results indicate that the entire scale showed good internal consistency (McDonald's omega = 0.785, Cronbach's alpha = 0.789).Additionally, the attributes demonstrated good reliability: openness scale (McDonald's omega = 0.803, Cronbach's alpha = 0.795) and value and need scale (McDonald's omega = 0.659, Cronbach's alpha = 0.656).The test-retest reliability, conducted over a two-week period, was 0.855 for the entire scale, 0.741 for the openness scale and 0.787 for the value and need scale (see Table 4). These results confirm that the ATSPPH-SF, following translation and adaptation, exhibits good psychometric properties, making it a reliable and valid tool for assessing Attitudes Toward Seeking Professional Psychological Help Scale in the Iranian context. 
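As a rough illustration of the internal-consistency statistics reported above, the snippet below computes Cronbach's alpha from an item-score matrix and a one-factor McDonald's omega from standardized factor loadings. It is a minimal sketch of the standard formulas, not the study's analysis (which used SPSS and AMOS); the small response matrix and the loadings are invented example values.

import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def mcdonald_omega(loadings):
    """One-factor omega from standardized loadings (uniqueness = 1 - loading**2)."""
    loadings = np.asarray(loadings, dtype=float)
    common = loadings.sum() ** 2
    unique = (1 - loadings ** 2).sum()
    return common / (common + unique)

# Invented 5-respondent x 4-item example on a 0-3 Likert scale.
example = [[3, 2, 3, 2],
           [1, 1, 2, 1],
           [2, 2, 2, 3],
           [0, 1, 1, 1],
           [3, 3, 2, 2]]
print(round(cronbach_alpha(example), 3))
print(round(mcdonald_omega([0.72, 0.65, 0.58, 0.61]), 3))

Both coefficients range from 0 to 1, and the thresholds cited in the methods (0.70 acceptable, 0.60 minimum) apply to values computed in this way for the full scale and for each subscale separately.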
Discussion The primary objective of this study was to assess the psychometric properties of the ATSPPH-SF among the Iranian general population and to provide a valid and reliable tool for measuring and improving attitudes toward seeking professional psychological help in this population.This validated scale offers promising prospects for screening, assessment, and intervention purposes in mental health settings within Iran. Interpretation of factor structure The factor structure revealed a two-factor model, consistent with the findings of Elhai et al. [17].These factors represent "Perceived Stigma" and "Perceived Psychological Openness." In the Iranian context, "Perceived Stigma" may underscore the influence of cultural factors such as stigma, shame, and self-reliance.The lower scores on "Perceived Psychological Openness" suggest a potential lack of recognition of the value and need for professional psychological help, reflecting cultural nuances.This finding highlights the importance of addressing these cultural barriers in promoting positive attitudes toward seeking help in Iran. Perceived stigma In the context of the two-factor structure, the "Perceived Stigma" factor signifies that individuals in Iran may harbor concerns about social repercussions and negative judgment when considering professional psychological help.These concerns might be rooted in societal attitudes, which could perpetuate stereotypes about mental health issues.Addressing this aspect of stigma is crucial, as it can deter individuals from seeking help when needed.Public awareness campaigns, education, and open discussions about mental health can play a significant role in diminishing perceived stigma.[12,[28][29][30]. Perceived psychological openness The "Perceived Psychological Openness" factor encapsulates the extent to which individuals in Iran recognize the value and necessity of professional psychological help.A lower score on this factor suggests that there might be room for improving the acknowledgment of the positive impact that seeking professional psychological help can have on mental well-being.This calls for interventions aimed at elucidating the advantages of early intervention and destigmatizing mental health services.Encouraging open conversations within families, communities, and educational institutions can contribute to a more accepting and supportive atmosphere.[28,29,31]. Cultural variations in attitudes The differences in factor structures between our study and others [13,32,33], emphasize the significance of adapting and validating the scale within specific cultural contexts.The variations in attitudes toward seeking professional psychological help among different populations and settings further underline the need for culturally sensitive and tailored interventions. These cultural variations may be driven by complex sociocultural factors unique to the Iranian context.Therefore, future research should delve into the specific cultural and contextual factors that influence attitudes toward mental health help-seeking in Iran.Qualitative studies, focus groups, and in-depth interviews could provide a richer understanding of the multifaceted cultural dynamics at play. 
Reliability and consistency The reliability of the ATSPPH-SF was consistent with previous studies, demonstrating good internal consistency and stability over time.This aligns with findings from Elhai [17], Picco [32], and Fischer and Farina [13].These results collectively indicate that the scale can provide consistent and accurate results across different situations and samples. Implications for mental health in Iran The moderately positive attitude toward seeking professional psychological help indicates room for improvement, particularly in recognizing the value and need for such help.Cultural factors, including stigma, shame, and self-reliance, may contribute to these findings, reflecting the complex interplay between attitudes and cultural norms in Iran [30][31][32]34]. Addressing cultural barriers The low score on the "Perceived Psychological Openness" factor underlines the importance of addressing cultural barriers to seeking professional psychological help.Stigma, a well-documented obstacle, is one of the main barriers to seeking and utilizing mental health services in Iran.People with mental health problems may face negative social reactions, discrimination, and isolation, all of which can deter them from seeking help [34,35]. Promoting awareness and access Efforts to raise awareness, reduce stigma, and increase access to mental health services are essential.Strategies should encompass public education about the nature, causes, and treatments of mental disorders, the promotion of positive attitudes and behaviors toward people with mental health problems, and the provision of accessible, affordable, and culturally appropriate mental health services.Additionally, engaging family, community, and religious leaders in the prevention and intervention of mental health problems can be instrumental in challenging prevailing cultural norms and fostering a supportive environment for those seeking help.[36]. Future directions To continue advancing our understanding of attitudes toward seeking professional psychological help in Iran, future research should explore the predictive validity of the ATSPPH-SF.This involves investigating how well the scale can predict actual help-seeking behavior and outcomes among individuals with mental health concerns.This could provide insights into the real-world impact of attitudes on help-seeking behavior. Incorporating qualitative research A deeper exploration of individual experiences and cultural nuances can be achieved through qualitative research.Conducting interviews and focus groups with diverse segments of the Iranian population can yield valuable qualitative data that complements the quantitative findings.Such studies can provide a more comprehensive understanding of the factors that influence attitudes toward seeking professional psychological help [37,38]. Subpopulation analysis Future studies should examine the validity and reliability of the ATSPPH-SF in various subgroups of the Iranian population, such as ethnic minorities, rural residents, or people with specific mental disorders.These analyses can help uncover variations in attitudes and needs among different segments of the population. 
Limitations The study had some limitations that warrant attention.One of them was the use of social media to distribute the questionnaire, which might have excluded people who were not active or present on these platforms.This could have reduced the inclusivity of the study by leaving out a part of the population.Another limitation was the dependence on self-rating scales to measure different aspects, such as depressive symptoms, stigma related to depression, and help-seeking attitudes.These self-report measures might have introduced response bias, as participants could have answered according to social expectations or norms, which might have affected the validity of our findings.Despite these limitations, the study's weaknesses should also be recognized.The sample size, though adequate for a preliminary assessment, might not reflect the diversity of the Iranian population.Our convenience sampling method might have caused selection bias, which further limited the applicability of our results.Moreover, the cross-sectional study design prevented us from establishing causal relationships.We suggest future research to use more diverse and comprehensive sampling methods to enhance the external validity of the study and enable the inference of causality. Conclusion In conclusion, this study has validated the Persian version of the ATSPPH-SF, comprising 10 items and 2 factors, for assessing attitudes toward seeking professional psychological help among the Iranian general population.This achievement marks a significant step toward promoting mental health awareness, reducing stigma, and enhancing access to mental health services in Iran.The results emphasize the need for comprehensive efforts to improve attitudes toward seeking professional psychological help in the country.Strategies encompass public education, the promotion of positive attitudes and behaviors, accessible mental health services, and engagement with community and religious leaders.These initiatives hold the potential to enhance mental health and well-being across the Iranian population, ultimately improving the lives of individuals and the broader community. Fig. 1 A Fig. 1 A summary of the modifying of ATSPPH-SF Fig. 2 Fig. 2 Standardized parameter estimates for the factor structure of the ATSPPH-SF Table 1 Promax rotated maximum likelihood factor loadings for the Attitudes Toward Seeking Professional Psychological Help Scale-Short Form (ATSPPH-SF). Table 2 Frequency distribution of demographic characteristics (n = 1050) Table 3 The model fit indicators of ATSPPH-SF Table 4 Descriptive statistics of the ATSPPH-SF and its attributes
2024-01-28T05:13:52.773Z
2024-01-26T00:00:00.000
{ "year": 2024, "sha1": "096a852a6656ffff266eecef843fef2e4a02b9be", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "7c552605cbecc88891ea661cabde914e0da26f9c", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
212688795
pes2o/s2orc
v3-fos-license
Integrated stratigraphy of the Guşteriţa clay pit: a key section for the early Pannonian (late Miocene) of the Transylvanian Basin (Romania) Abstract The Neogene Transylvanian Basin (TB), enclosed between the eastern and southern Carpathians and the Apuseni Mountains in Romania, is a significant natural gas province with a long production history. In order to improve the (bio) stratigraphic resolution, correlations and dating in the several 100-m-thick upper Miocene (Pannonian) succession of the basin, the largest and most fossiliferous outcrop at Guşteriţa (northeastern part of Sibiu) was investigated and set as a reference section for the Congeria banatica zone in the entire TB. Grey, laminated and massive silty marl, deposited in the deep-water environment of Lake Pannon, was exposed in the ~55-m-high outcrop. The uppermost 25 m of the section was sampled in high resolution (sampling per metres) for macro- and microfossils, including palynology; for authigenic 10Be/9Be dating and for magnetostratigraphy; in addition, macrofossils and samples for authigenic 10Be/9Be isotopic measurements were collected from the lower part of the section as well. The studied sedimentary record belongs to the profundal C. banatica mollusc assemblage zone. The upper 25 m can be correlated to the Hemicytheria tenuistriata and Propontoniella candeo ostracod biozones, the uppermost part of the Spiniferites oblongus, the entire Pontiadinium pecsvaradense and the lowermost part of the Spiniferites hennersdorfensis organic-walled microplankton zones. All samples contained endemic Pannonian calcareous nannofossils, representing the Noelaerhabdus bozinovicae zone. Nine samples were analysed for authigenic 10Be/9Be isotopic measurements. The calculated age data of six samples provided a weighted mean value of 10.42 ± 0.39 Ma. However, three samples within the section exhibited higher isotopic ratios and yielded younger apparent ages. A nearly twofold change in the initial 10Be/9Be ratio is a possible reason for the higher measured isotopic ratios of these samples. Magnetostratigraphic samples showed normal polarity for the entire upper part of the outcrop and can be correlated with the C5n.2n polarity chron (11.056–9.984 Ma, ATNTS2012), which is in agreement with the biostratigraphic data. Based on these newly obtained data and correlation of the biozones with other parts of the Pannonian Basin System, the Guşteriţa section represents the ~ 11.0–10.5 Ma interval, and it is a key section for correlation of mollusc, ostracod, dinoflagellate and calcareous nannoplankton biostratigraphic records within this time interval. Introduction The Transylvanian Basin (TB) is one of the largest gas provinces of Eastern Europe with a long production record (Ștefănescu et al., 2006). The upper Miocene sedimentary sequence of the basin fill has an average thickness of ca. 300 m, but it can reach ca. 1400 m thickness in the central part of the basin, in the surroundings of Sighişoara (Ciulavu et al., 2000;Sanders et al., 2002;Krézsek and Filipescu, 2005;Krézsek and Bally, 2006;Tiliţă et al., 2013). The upper Miocene deposits in the TB are present in a more or less contiguous area throughout the central, southwestern, and eastern part of the basin, in an area of ca. 7500 km 2 (representing about one-third of the total basin area) (Fig. 1c). 
Fossils from the upper Miocene sedimentary record of the TB are largely identical with the endemic molluscs, ostracods, and algae that once lived in Lake Pannon, an enormous and long-lived lake that covered most of the intra-Carpathian Pannonian Basin (PB) in the late Neogene. Therefore, it was inferred long ago that in the late Miocene, the TB was part of Lake Pannon, and the regional chronostratigraphic term "Pannonian" can be applied for these sediments (Lőrenthey, 1902). The biostratigraphic subdivision and chronostratigraphic framework of this several hundred-meter-thick sequence in the TB, representing ~2.5 Ma, are still relatively poorly developed and imply much uncertainty. The mollusc and ostracod biozonations were largely based on the biostratigraphy of shallow-water deposits of the Vienna Basin developed many decades ago (Papp, , 1953. The results of some recent magnetostratigraphic studies are available (Vasiliev et al., 2010;de Leeuw et al., 2013), but their interpretations are partly debatable (see the Discussion section). Radiometric age measurements have never been published from the Pannonian of the TB. Our main objective was the development of a comprehensive Pannonian biochronostratigraphy in the TB; therefore, we conducted integrated stratigraphic research in the most fossiliferous Pannonian outcrop of the TB, the Guşteriţa clay pit, in Sibiu. We investigated various fossil groups; identified and correlated mollusc, ostracod, dinoflagellate cyst and calcareous nannoplankton biozones; performed magnetostratigraphic research and experimented with the authigenic 10 Be/ 9 Be dating method. Our study has relevance not only in the TB but also in the PB, where surface distribution of the coeval deep-water sediments is confined to the eastern and southern margins of the basin, whereas they are usually deeply buried and comprise hydrocarbon source rocks and reservoirs in other parts of the PB. Geographic and geological settings The TB is surrounded by the chains of the eastern and southern Carpathians. It is separated from the PB by the Apuseni Mountains ( Fig. 1a-b) and has a relatively high present-day altitude of 300-500 m above the mean sea level. The Cenozoic evolution of the TB was controlled by the Carpathian orogeny. Synchronously with the uplift of the Carpathians, a more than 3500-m-thick middle to upper Miocene sedimentary sequence accumulated in the TB. Exhumation and erosion of the infilled basin started at the end of the Miocene (Krézsek and Bally, 2006), which resulted in the erosion of younger than 9-8 Ma deposits (Sanders et al., 1999(Sanders et al., , 2002. Lower Pannonian sands, marls and conglomerates are the youngest of the preserved sediments in the TB; however, Pliocene brackish-water deposits can be found in the small basins of the Eastern Carpathians (Brașov-Baraolt, Ciuc and Gheorgheni Depressions -Fielitz and Seghedi, 2005;László, 2005). At the end of the middle Miocene (end of Sarmatian), connection with the Eastern Paratethys ceased due to the uplift of the Carpathians, and Lake Pannon was born. Brackish-and freshwater endemic faunas evolved in the lake (Lubenescu, 1981;Magyar et al., 1999a;Müller et al., 1999). Older theories suggested continental environment and erosion around the Sarmatian-Pannonian transition (Vancea, 1960;Marinescu, 1985;Magyar et al., 1999a). According to Marinescu (1985), the oldest Pannonian littoral mollusc biozone (Congeria ornithopsis zone) is totally missing from the TB. 
More recent studies, however, indicated that the sedimentation was continuous through the Sarmatian-Pannonian boundary, as witnessed by the deep-water facies of the Oarba de Mureş (ODM) sections located in the depocenter of the TB (Sztanó et al., 2005;Sütő and Szegő, 2008;Vasiliev et al., 2010;Filipescu et al., 2011). At the beginning of the late Miocene (beginning of Pannonian), a deep-lacustrine environment formed in most parts of the basin. Unlike in the PB, deep-water sediments can be studied in surface exposures due to the subsequent erosion that uncovered them. Deep-water fans are preserved in the southwestern part, while in the southeastern part, some 100-m-thick shallow-water (delta), freshwater-paludal and continental (fluvial) formations can be found. In the latter region, Pliocene volcanics cover and protect the loose Pannonian rocks from erosion (Krézsek et al., 2010). In the eastern part of the basin, deep-water turbiditic successions are preserved (Bartha et al., 2016). Deposition in the TB probably lasted until the end of the Miocene, but most of the shallow-lacustrine, continental-fluvial deposits were eroded during the Pliocene to Quaternary. According to apatite fission track thermochronological analyses on borehole samples and numerical flexural-isostatic 3-D modelling, it is likely that at least a 500-m-thick sedimentary succession was eroded (Sanders et al., 1999(Sanders et al., , 2002. Săndulescu et al., 1978). DEM: digital elevation model. represent the transgressive system tract of the early Pannonian (Krézsek et al., 2010). Guşteriţa is one of the largest outcrops and perhaps the most fossiliferous site of the deep-water Pannonian formations in the TB. The Pannonian macrofauna of the locality was examined by some earlier authors, but their faunal lists contain a relatively low number of taxa (Ackner, 1852;Lőrenthey, 1893;Koch, 1876Koch, , 1895Bielz, 1894;Lubenescu, 1981). Plant remains from the outcrop were described by Givulescu (1969). Material and methods Samples were collected from four different section parts of the Guşteriţa clay pit. In October 2015, macrofossils and marl samples for authigenic 10 Be/ 9 Be isotopic measurements were collected from the lower, middle and upper parts of the mine (Guşteriţa 1, 2 and 3) (Fig. 2a). Later, in June 2017, the uppermost 25 m of the quarry (Guşteriţa 4) was sampled (Fig. 2b). Samples were collected for macro-and microfossils (ostracods, dinoflagellates and calcareous nannoplankton), for magnetic polarity measurements (per metre) and for authigenic 10 Be/ 9 Be dating (per 5 m). In addition, numerous trace fossils (Fig. 2d), thecamoebians, fish teeth, otoliths, some partial fish skeletons and fossil plant remains were found. Biostratigraphy Altogether 1295 mollusc specimens were determined. The bulk of the studied material was collected by the authors from various parts of the clay pit (Guşteriţa 1, 2, 3 and 4). The studied material also comprised the collections of the Brukenthal Museum, Sibiu, Romania, and the Paleontology-Stratigraphy Museum of the Babeş-Bolyai University, Cluj-Napoca, Romania. The collected molluscs were prepared in the laboratory of the Department of Palaeontology of Eötvös Loránd University, Budapest, Hungary. Polyvinyl butyral and polyvinyl acetate were used for solidifying the thin and fragile shells. A total of 25 micropalaeontological samples were examined from the upper part of the outcrop (Guşteriţa 4). 
The microfossils with carbonate shells were processed with hydrogen peroxide (10%) from about 250 g of air-dried sediments. The scanning electron microscope (SEM) images were made with a Hitachi S-2600N Variable-Pressure Scanning Electron Microscope at the Botanical Department of the Hungarian Natural History Museum in Budapest. The ecological limits of the Pannonian ostracods are based on recent analogies with taxa that are still living; in the case of the extinct forms, the co-occurring faunal elements, sediment type and previous ostracod studies were referred to. Palynological analysis was carried out on 25 samples collected from the uppermost 25 m of the quarry. Standard palynological processing techniques were used to extract the organic matter (e.g. Moore et al., 1991;Wood et al., 1996). The samples were treated with sodium pyrophosphate (Na 4 P 2 O 7 ), cold HCl (15%) and HF (40%), removing carbonates and silica. Heavy liquid The Pannonian lithostratigraphy of the TB is not uniform. Beside formations, informal units are used as well, and due to the heterogeneity of lithofacies, different classifications are created for different parts of the basin. The Lopadea Formation (Lubenescu and Lubenescu, 1977) comprises sandy-clayey layers in the western basin margin. In the eastern part, the Ocland Formation (Rado et al., 1980) was erected for the deltaic, sandy-marly deposits. Sediments of the Guşteriţa and Vingard formations (Lubenescu, 1981), as well as the pebbly Săcădate Member, are located in the southern-southwestern part of the basin. The clayey-marly deposits and fauna of the Guşteriţa Formation provide evidence for a deep-water, profundal environment, while the sand and fauna of the Vingard Formation indicate shallow-water, littoral deposition. The conglomerate and sand of the Săcădate Member contain a mixed Sarmatian-Pannonian fossil fauna (Lubenescu, 1981;Chira et al., 2000). These formations can be paralleled with the Pannonian formations of the PB. Deep-water marls of the Guşteriţa Formation correspond to the Endrőd Marl Formation (Juhász, 1997). The turbiditic succession of the Lopadea Formation is similar to the Szolnok Sandstone Formation . The Săcădate Member resembles the Békés Conglomerate Formation . In the case of the regressive sediments (Vingard Formation, Ocland Formation and the unassigned sequences in the eastern part of the basin), the correlation is less straightforward, because their fossil content is somewhat different from their PB relatives. A sequence stratigraphic framework of the Pannonian of the TB was proposed by Krézsek and Filipescu (2005) and Krézsek et al. (2010), using the original three-system tract model of Vail et al. (1977). They divided the middle to late Miocene sedimentary succession of the basin into minimum eight different sequences based on seismic profiles and well logs. The Pannonian sediments included the following system tracts: TST7, HST7, LST8, TST8, HST8 and LST9 (Krézsek and Filipescu, 2005;Krézsek et al., 2010). The Wienerberger clay pit and brickyard of Gușteriţa (German: Hammersdorf, Hungarian: Szenterzsébet) is located along the southern rim of the TB, in the northeastern part of Sibiu (German: Hermannstadt, Hungarian: Nagyszeben) (45°48′20.23″N, 24°11′47.30″E) (Fig. 1c). The exposed thick (~55 m) Pannonian marl has been mined here for more than a century (Oebbeke and Blanckenhorn, 1901) (Fig. 2a-b). 
Light grey, laminated or massive, highly calcareous (~75%), silty marl layers and thin, very fine-grained, cross-laminated sand intercalations are observed in the mine (minor Bouma-type: Tc sandy turbidites) (Fig. 2c). Based on sedimentological investigations and surface gamma-ray logging, the marl can be a product of background sedimentation, with occasional low-density turbidites (sand intercalations), which is a characteristic of inner fan overbank deposits as well as outer fan lobes (Tőkés, 2013;Tőkés et al., 2015). Based on seismic interpretation, the locality can 1000× magnification at the Department of Sedimentary Geology, Geological Survey of Austria, Vienna, Austria. Quantitative data were obtained by counting at least 300 specimens from each smear slide. Magnetostratigraphy Guşteriţa 4 section was sampled for magnetostratigraphic purposes by drilling 26 marl samples from the quarry. Measurements were carried out in the Fort Hoofddijk Paleomagnetic Laboratory of the Utrecht University, Utrecht, the Netherlands. Magnetic susceptibility measurements were made on an AGICO MFK1-FA Multi-Function Kappabridge automatic device, using the Saphyr6 software. For the alternating field (AF) measurements, a laboratory-built automated AF-coil-interfaced measuring device with a 2G cryogenic magnetometer was used (Mullender et al., 2016). The following field steps were used: 0, 5,10,15,20,25,27,30,32,35,40,45,50,60 and 80 mT. The thermal (TH) measurements were carried out with a manually operated 2G Enterprises DC (ZnCl 2 , density >2.1 kg/l) was used to separate the organic matter from the undissolved inorganic components. The organic residue was sieved through a 10 mm mesh. Palynological slides were mounted in glycerin for palynofacies analysis and in silicon oil for palynomorph analysis. Microscopic analyses were performed using Olympus BH-2 and Leitz Aristoplan microscopes. Photomicrographs were taken using an AmScope TM camera adapter connected to the AmScope v.3.7 camera software and an Olympus DP25 camera connected to the Olympus Stream Motion software. The samples, organic residues and palynological slides were curated at the Department of Geology, Croatian Geological Survey, and at the Rock and Fluid Analysis, INA Oil Industry Plc., Zagreb, Croatia. The calcareous nannoplankton distribution was studied in 25 samples from the Guşteriţa 4 section. Smear slides were prepared for all samples using standard procedures described by Perch-Nielsen (1985) and examined under a light microscope (cross and parallel nicols) with where R (t) is the measured 10 Be/ 9 Be isotopic ratio, R 0 the initial 10 Be/ 9 Be isotopic ratio, l the decay constant of 10 Be isotope (l = (4.997 ± 0.043) × 10 -7 a -1 ) and t the elapsed time. The initial 10 Be/ 9 Be isotopic ratio (R 0 ) is usually determined from recent sediment representative of the former environment and assuming constant deposition processes and source areas through time. For authigenic 10 Be/ 9 Be isotopic dating, ~40 g air-dried marl from each sample was grinded in an agate hand mortar and oven-dried. The sample preparation followed the procedure of Bourlés et al. (1989) and Carcaillet et al. (2004), adopted by Šujan et al. (2018). Approximately 1.5 g of each sample was leached in a solution of acetic acid and hydroxylammonium hydrochloride. After lixiviation, aliquots for 9 Be measurements were taken and a beryllium carrier was added (~0.3 g of a 1000 ppm ICP standard beryllium solution). 
The beryllium was separated from other elements using ion chromatography (Merchel and Herpers, 1999). Purified samples were oxidised at 800°C and cathoded for accelerator mass spectrometry (AMS) measurements of their 10 Be/ 9 Be ratio. AMS measurements were performed at the French national facility ASTER (CEREGE, Aix-en-Provence, France). The concentrations of 9 Be were determined by AAS in CEREGE (samples ODM and GUS1, 2, 3) and by ICP-MS in the laboratory of the Institute of Chemistry, Slovak Academy of Sciences, Bratislava, Slovakia (Šujan et al., 2018; samples G01-G25). The comparability of both 9 Be measurement approaches was tested using replicated measurements. The 10 Be concentrations were corrected according to chemical processing blank values (Table 1). The weighted mean ages were calculated using the KDX software by Spencer et al. (2017). The mollusc biostratigraphy of the offshore deposits of Lake Pannon is poorly developed. For the time being, only three biozones are distinguished: the Lymnocardium SQUID cryogenic (He-cooling) magnetometer, operating using the Cryo2Go software. Heating of samples took place in a magnetically shielded cylindrical metal oven, controlled by the Oven2Go software. The following temperature steps were applied: 20, 80, 120, 170, 200, 220, 240, 260, 280, 300, 320, 340 and 370°C. During heating, a conservative heating profile, 25°C linger time and 7°C T-tolerance were used. In order to avoid coil drift, samples were always placed in the same line-up. Then, a calibration phase with two blank measurements was executed with the empty sample holder. Every measurement was performed in two positions. Zijderveld projections were interpreted to understand the magnetic behaviour and to determine the magnetic directions of the samples (Zijderveld, 1967). Principal component analysis was used to fit regression lines onto the measured values. All the samples showed strong magnetic characteristics; therefore, no quality groups were separated. All measured declination values were corrected for the present-day declination at the study location (MSL=450 m; day of sampling: 20 June 2017), with the help of the magnetic field calculator of the National Centers for Environmental Information, USA (https://www.ngdc.noaa.gov). Authigenic 10 Be/ 9 Be dating Authigenic 10 Be/ 9 Be isotopic dating method was applied on altogether eleven marl samples, nine from four sections of the Gușteriţa clay pit (Gușteriţa 1, 2, 3, and 4) and two from the ODM A section (ODM-15.2 and ODM-28). Samples were collected from the most clayey parts of the outcrops. Physical preparation of the samples was carried out in the laboratory of the Department of Palaeontology of the Eötvös Loránd University, Budapest, while chemical preparation was performed in the research institute of the Centre Européen de Recherche et d'Enseignement des Géosciences de l'Environnement (CEREGE), Aix-en-Provence, France (samples ODM and GUS1 to GUS3) and in the laboratory of the Department of Geology and Paleontology, Faculty of Natural Sciences, Comenius University in Bratislava, Slovakia (samples G01 to G25). The method is based on the radioactive decay of the initial 10 Be/ 9 Be ratio after the sediment deposition. The stable 9 Be is derived from chemical weathering of rock massifs, whereas the radionuclide 10 Be is produced by spallation reactions induced by cosmic rays in the atmosphere (Bourlés et al., 1989). 
Since beryllium is strongly chemically reactive, it adsorbs abruptly to the surface of sediment particles in a water column, and after their deposition, the initial 10 Be/ 9 Be ratio is determined. Hence, if the system is chemically closed, the ratio decreases only by the decay of 10 Be (with the half-life of 1.387 ± 0.012 Ma; Chmeleff et al., 2010;Korschinek et al., 2010). Then after the determination of the actual 10 Be/ 9 Be ratio, the depositional age of a sediment can be calculated using the equation of radioactive decay, which is given as follows: the species name U. nobilis instead. V. velutina (usually smooth, more whorled form) is also a common form in the Pannonian of the TB. In Gușteriţa, we found specimens slightly different from the type. The shell surface of this species is usually completely smooth, while in the case of some specimens, slightly bulged growth lines are observed, which are not strong enough to call them ribs. These specimens may represent a transitional form between V. velutina and U. nobilis. Similar specimens from Beočin, Serbia, were described by Gorjanović- Kramberger (1901) as Velutinopsis rugosa. Ostracods Samples from the Gușteriţa 4 section produced a relatively diverse benthic ostracod material. The preservation is moderate and sometimes poor (with a lot of broken valves and carapaces). There are more adult specimens than juvenile ones. Altogether 18 euryhaline benthic ostracod taxa were identified belonging to eight species, eleven genera, five families and one order (Podocopida) ( Fig. 4 and Suppl. S2). Older strata of the section Gușteriţa 4 (samples G1 to Modern Bakunella lives at salinities of 11.5 to 13.5‰ in sublittoral to profundal depths of the central and praeponticum or Radix croatica zone, the C. banatica zone and the "Dreissenomya" digitifera zone (for a summary, see Magyar et al., 1999b). In the mollusc biostratigraphic system developed for the TB by Lubenescu (1981), the deep-water sediments were subdivided into the older C. banatica and the younger C. prezujovici zones. In both stratigraphic schemes, the molluscan record from Gușteriţa 1 to 4 belonged to the C. banatica zone, based on the presence of C. banatica throughout the entire section. The stratigraphic distributions of other species from Gușteriţa were either not known or not narrow enough to be used for further subdivision of the C. banatica zone. The only exception was the Radix-Velutinopsis-Undulotheca-Provalenciennesia -Valenciennius evolutionary lineage of lymnaeid snails, which was characterised by progressively larger shell size, widening of the aperture, reduction of whorl number and appearance and strengthening of transversal ribs (e.g. Gorjanovič-Kramberger, 1901, 1923Moos, 1944). The morphotypes of this lineage are good candidates for high-resolution biostratigraphic markers, but only after their taxonomy, nomenclature and stratigraphic range of individual taxa are revised. In the Gușteriţa material, we recognised that the names Velutinopsis nobilis (Reuss, 1868) and Undulotheca pancici (Brusina, 1893) refer to the same species (Fig. 3h-i). The difference between the two types is probably due to the different direction of compaction that affected the shells after burial. The type specimen of V. nobilis is laterally compacted, while the name U. pancici is used for dorsoventrally compacted specimens. 
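Written out with the quantities defined in the text (measured ratio R(t), initial ratio R_0, decay constant λ and elapsed time t), the radioactive-decay relation used for the authigenic 10Be/9Be ages takes the standard form below; the numerical value of λ quoted in the text is consistent with the 1.387 Ma half-life:

R(t) = R_0 \, e^{-\lambda t} \quad\Rightarrow\quad t = \frac{1}{\lambda}\,\ln\frac{R_0}{R(t)}, \qquad \lambda = \frac{\ln 2}{T_{1/2}} = \frac{0.693}{1.387\ \mathrm{Ma}} \approx 4.997\times 10^{-7}\ \mathrm{a^{-1}}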
According to our observations and opinion, these two forms belong to one species, because otherwise they are characterised by the same morphological traits (large aperture, reduction in number of whorls and strong rounded ribs) (Fig. 3h-i). Applying the priority rule, the valid species name would be V. nobilis, but because of the rounded ribs characteristic for the genus Undulotheca, we propose to use Sample ID Depth (m) 9 Be (at.g -1 ) × 10 16 10 Be (at.g -1 ) × 10 5 Natural 10 Be/ 9 Be × 10 −11 (Puri et al., 1969), and their fossil representatives are known from mesohaline lacustrine environments (Gross, 2002;Witt, 2010). The southern Caspian Basin (Gofman, 1966;Boomer et al., 2005). Euxinocythere is not only known from brackish environment but also tolerates freshwater littoral to deep limnic conditions (e.g. Pipík and Bodergat, 2004;Cziczer ostracod assemblages of the younger strata indicate meso-to pliohaline (5-16 ‰) sublittoral depositional environment with a few littoral elements transported from the margins. In the uppermost layer (sample G25), nearshore faunal elements become dominant beside the common sublittoral forms. Two successive ostracod biozones were identified in the studied Gușteriţa 4 section, based on the system of Krstić (1985): the Hemicytheria tenuistriata (samples G1 to G9) and P. candeo zones (samples G10 to G25). According to Krstić (1985), the older E. naca and L. rhombovalis overlap in their stratigraphic ranges with the younger L. granifera exclusively within the H. tenuistriata and P. candeo zones. Within this interval, the first appearance of the species P. candeo marks the bottom of the P. candeo zone (sample G10 in our section). Krstić (1985) also claimed that C. (Thaminocypris) transylvanica is restricted to zones older than the P. candeo zone. In our material, there is a slight overlap between the stratigraphic ranges of the older C. transylvanica and the younger P. candeo (samples G10-G14). Nevertheless, we mark the boundary between the older H. tenuistriata and the younger P. candeo zones between the samples G9 and G10, with the first occurrence of P. candeo. H. croatica zone (Serbian Substage of the Pannonian). This phenomenon requires further discussion, because H. croatica was also found by Rundić in ter Borgh et al. (2013) in older "Slavonian" strata in Beočin. The stratigraphic range of H. croatica thus seems to be wider than supposed by Krstić (1985), so its stratigraphic marker role should be reconsidered. The dinocyst assemblages through the Guşteriţa 4 section have allowed three biozones to be identified. Samples G1-G9 reveal a rich assemblage with Spiniferites pannonicus and Spiniferites oblongus and are assigned to the S. oblongus zone. The zone is characterised by the high abundance of S. pannonicus and S. oblongus in the Hungarian part of the Pannonian Basin System (PBS), while the zone is defined as ranging from the first appearance date of S. oblongus to the first appearance date of Pontiadinium pecsvaradense in Croatia (Bakrač et al., 2012). Similar associations are known from the entire PBS and have been recorded from Serbia (Rundić et al., 2011) and Austria (e.g. Kern et al., 2013) as well. The first occurrence of P. pecsvaradense is recorded in sample G10, and it remains common throughout the section with higher abundance ratios in the uppermost samples (G21-G24). The P. pecsvaradense biozone is characterised by the common occurrence of the species P. pecsvaradense and P. 
obesum together with various proximate cysts, such as Impagidinium spp. and Virgodinium spp. in Hungary (Sütő-Szentai, 1988, 2000. Bakrač et al. (2012) defined this zone as an interval from the first occurrence of P. pecsvaradense to the first occurrence of Spiniferites bentorii coniunctus in distal and/or Spiniferites validus in proximal settings. In the Guşteriţa 4 section, samples G10-G21 are assigned to the P. pecsvaradense zone. The dinocyst composition of samples G22-G25 is similar to the dinocyst assemblage of the lower part of the Spiniferites hennersdorfensis zone (Sütő-Szentai, 1988, 2000Soliman and Riding, 2017) in Hungary and the distal association of the S. validus zone (Sve) in Croatia (Bakrač et al., 2012) by the common occurrence of Spiniferites specimens with membranous crests, especially S. hennersdorfensis. S. validus is not recorded in Guşteriţa, although its absence is explained by the more distal depositional setting in the TB. The Sve zone has a rich and diverse dinocyst assemblage in distal settings, including membranous Spiniferites types, Spiniferites maisensis, S. oblongus, P. pecsvaradense and various Virgodinium species (Bakrač et al., 2012), which is a good match for the association in samples G22-G25. It has to be noted though that the differences in dinocyst species composition might be also related to changes in environmental parameters, e.g. salinity variation from incoming river runoff, nutrients and/or hydrodynamic conditions suggesting slightly different environmental conditions for the uppermost part of the section. Calcareous nannoplankton All samples from the Guşteriţa section contain very well-preserved and common calcareous nannoplankton assemblages (Fig. 6). Endemic calcareous nannofossils are represented by the species Isolithus semenenko, Isolithus pavelici, Noelaerhabdus jerkovici, Noelaerhabdus bozinovicae and Praenoelaerhabdus banatensis. The genus Isolithus dominates the assemblages in the lower part of the section, in samples G1-G2, G6-G8, G10-11 and G14 (Fig. 6j, o and q). In contrast, the upper part of the section (samples G14-G25) is characterised by the dominance of Noelaerhabdus, which occurs in increasing number from G4 to the top of the section (Fig. 7), reaching the highest values in samples G21 (97.8%) and G24 (86.3%). Species of genus Noelaerhabdus are characterised by possession of a central spine placed vertical on the basal plate. The shape ending of the central spine is a crucial feature for distinguishing various species within the genus. Upon this criterion, all Noelaerhabdus specimens from the Guşteriţa 4 section can be assigned to N. bozinovicae and N. jerkovici. During preparation, the central spine was usually broken, and the original shape of fossils could not be always reconstructed. Therefore, coccoliths without spine were counted separately (Noelaerhabdus spp.) from coccoliths with spine. This group also included endemic nannofossils described as P. banatensis. Coccoliths with spine in the central field (N. bozinovicae and N. jerkovici) were grouped in Noelaerhabdus spp. and subdivided into three morphotypes according to the length of the spine: 3-7 mm, 7-15 mm and >15 mm (Suppl. S4). In assemblages from the middle and upper parts of the section, Noelaerhabdus spp. with longer spine (7-15 mm and >15 mm) dominated. These changes in the length of the spine can be caused by changes in the palaeoecological conditions. 
Blooms of ascidian spicules (Perforocalcinella fusiformis) in samples G2-G5 and G13 and in high amounts in samples G13, G16 and G20-G21 may point to periods when sediment transport was more effective. The correlation between endemic calcareous nannofossils and standard nannofossil zones is still not clear (see Mărunţeanu, 1997;Chira, 2006;Chira and Malacu, 2008). Mărunţeanu (1997) investigated the evolution trends in Pannonian endemic calcareous nannofossils and erected three biozones: P. banatensis, N. bozinovicae and Noelaerhadus bonagali zones. Sediments from the Guşteriţa clay pit can be attributed to the N. bozinovicae zone, based on the presence of N. bozinovicae, N. jerkovici and the absence of N. bonagali in the investigated samples. Trace fossils and other remains During the collection and preparation of molluscs, several remains of other fossil groups were unearthed (Suppl. S2). Two types of trace fossils were frequent. One of them was a few centimeter long residence tube of probably annelid worms, such as Pectinaria. This tube was lined (agglutinated) with calcareous shell fragments of tiny animals (ostracods and/or bivalve embryos, shell fragments) or with mineral grains during the life activity of the worm. This trace fossil can be easily recognised by the regular and tight positions of the tiny shells. Jámbor and Radócz (1970) distinguished and described several morphotypes based on the composition of the tubes from drill cores in the PB. We were able to distinguish and identify two of them, Pectinaria ostracopannonicus and Pectinaria gigantea. The first one was made of almost exclusively carapaces of ostracods (Fig. 8c), and the latter consisted of bivalve embryos and shell fragments (Fig. 8e). Another frequent trace fossil was Diplocraterion isp. These appeared as dumbbell-like forms on the bedding planes, but in fact they were U-shaped burrows (Fig. 8d). Their creators were probably crustaceans (Fürsich, 1974). Fishes are represented by a relatively large number of teeth, a few otoliths and further unidentifiable elements. Teeth of Morphotype 1 are the most characteristic among all. The high, curved base is circular in cross-section, bearing a fine apicobasal striation. The slightly reclined tip is lanceolate and usually translucent. Morphologically identical teeth were published by Brzobohatý and Pană and subtropical water (Froese and Pauly, 2019). Recent Sciaenidae members are generally bottom-dwelling fish, living in the neritic zone of temperate and warm shallow seas and estuaries, playing a key role in estuarine ecosystems (Carnevale et al., 2006). In the micropalaeontological samples, plant remains, bone fragments, fish scales, fish vertebra and thecamoebians were common together with some reworked older Miocene fossils (foraminifers and bryozoans). During the preparation process, a specimen of a regular, oval thecamoebian, similar to Silicoplacentina majzoni (Kőváry, 1956;Fig. 8b), and a partial fish skeleton (Fig. 8a) were found in the Gușteriţa 2 section. Magnetostratigraphy From the Guşteriţa 4 section, two types of palaeomagnetic measurements (TH demagnetisation and AF demagnetisation) were performed on 26 samples. Suppl. S5 contains the results of TH measurements, while Suppl. S6 includes the outcomes of AF measurements. The investigated samples had good magnetic characters; thus, only one quality group was created. We chose four TH samples to figure them on Zijderveld diagrams. 
Two different T-sessions were separated (T1: orange and T2: black) based on the measured values (Fig. 10a-d). A total of 24 samples were chosen for AF measurements. We chose four AF samples to figure them on Zijderveld (1985) as teeth of indeterminate gadid fishes (Figs. 9a-d). Teeth of Morphotype 2 include simple recurved teeth, circular in cross-section. The small, shiny and smooth cap is separated from the apicobasally striated base (Fig. 9e). Teeth of Morphotype 3 are of simplest morphology. The teeth are minute, narrow and shiny, tapering to the tip, bearing no surface striations. They are also weakly bent to the supposed lingual direction. The taxonomic identification of these isolated teeth is very problematic due to their simple, almost featureless morphology; however, here we tentatively attribute them to family Gadidae or Gobiidae (Fig. 9) (see Brzobohatý and Pană, 1985;Kramer et al., 2009;Berkovitz and Shellis, 2017). These forms frequently occur in late Miocene deposits of the PB. Two generally poorly preserved otoliths were also unearthed, both representing the family Sciaenidae (after Schwarzhans, 1993;Bosnakoff, 2008). Since the collected fish material is isolated and only hardly identifiable (only at the family level), it is less important regarding the paleoenvironmental reconstructions. Families Gadidae, Gobiidae and Sciaenidae occur in fresh-water, brackish-water and normal marine conditions as well (see Froese and Pauly, 2019). Modern members of Gadidae are found in circumpolar water and temperate water. Most gadid species are demersal or benthopelagic, feeding mainly on fish and invertebrates. Extant gobiids are distributed mostly in tropical water Kőváry, 1956, Gușteriţa 2. c: Pectinaria ostracopannonicus (Jámbor and Radócz, 1970), Gușteriţa 3. d: Diplocraterion isp., Gușteriţa 3. e: Pectinaria gigantea (Jámbor and Radócz, 1970), Gușteriţa 2. diagrams. Two different F-sessions were separated (F1: orange and F2: black) based on the measured values ( Fig. 10e-h). In the case of some samples, gyroremanent magnetisation was observed, which means the effect of increased random direction that can happen above 35 mT ( Fig. 10e and g). Owing to this phenomenon, the given sample could not be properly demagnetised. It usually predicted the presence of greigite (Fe 3 S 4 ) in the sample (Babinszki et al., 2007); however, no rock thermomagnetic analyses were carried out. All the results show normal polarity for the entire section, i.e. all the samples gave positive inclination and declination values above 270° (Suppl. S5-S6). It must be tested whether this normal polarity is in the primary or near-primary direction and may be used for correlation to the global time scale. To check if they represent a present-day overprint, the mean inclination and declination of the samples were compared to the present-day magnetic field in the study area. Present-day magnetic field values were the following on the day of sampling at the locality: declination 5.467° and inclination 63.004°. The mean inclination of the samples was clearly different from the present-day field direction, and thus interpreted as a sub-recent viscous component; however, the mean declination was similar to the present-day value. The palaeomagnetic signal was interpreted as primary or penecontemporaneous with deposition. 
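For readers unfamiliar with the Zijderveld/principal-component workflow referred to above, the short sketch below (Python/NumPy) illustrates how a characteristic direction can be fitted through demagnetisation endpoints and corrected for the present-day declination of 5.467° quoted in the text. The measurement vectors and the origin-anchored fit are illustrative assumptions only, not the laboratory's actual processing chain.

import numpy as np

def chrm_direction(endpoints, anchor_to_origin=True):
    # endpoints: demagnetisation vector endpoints as rows of (north, east, down) components
    X = np.asarray(endpoints, dtype=float)
    centre = np.zeros(3) if anchor_to_origin else X.mean(axis=0)
    # best-fit line = principal right singular vector of the (anchored) endpoint cloud
    _, _, vt = np.linalg.svd(X - centre, full_matrices=False)
    v = vt[0]
    if np.dot(v, X.mean(axis=0) - centre) < 0:   # orient the line along the remanence vector
        v = -v
    dec = np.degrees(np.arctan2(v[1], v[0])) % 360.0
    inc = np.degrees(np.arcsin(v[2]))
    return dec, inc

# hypothetical endpoints (arbitrary units) decaying toward the origin during demagnetisation
steps = [[10.2, 1.1, 14.5], [7.6, 0.8, 10.9], [5.1, 0.5, 7.3], [2.5, 0.3, 3.6]]
dec, inc = chrm_direction(steps)
dec_corrected = (dec - 5.467) % 360.0   # remove the present-day declination given in the text
print(round(dec, 1), round(inc, 1), round(dec_corrected, 1))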
Authigenic 10 Be/ 9 Be dating The initial ratio, which is essential for the age calculation, could be determined either by the analysis of recent equivalents of the studied depositional environment or by independent dating of a sample taken from the same basin and depositional environment. In first calculations of this study, the lacustrine initial ratio (6.97 ± 0.14) × 10 -9 (R 0-lacus ) from Šujan et al. (2016) was applied providing ages apparently slightly older compared to the biostratigraphic age proxies (Table 1). Hence, to test the validity of the lacustrine initial 10 Be/ 9 Be ratio, it was decided to calculate independently the initial ratio relevant to the eastern part of Lake Pannon. The ODM "A" outcrop, which is located in the central TB and represents an equivalent of the Gușteriţa locality in terms of depositional environment, contained a tuff layer dated at 11.62 ± 0.12 Ma by the 40 Ar/ 39 Ar method (Vasiliev et al., 2010). Two samples (ODM) were taken from a horizon above the tuff layer. The sample ODM-28 was chosen for the calculation of the initial ratio due to its proximity to the tuff horizon. Its estimated age was 12.05 ± 0.9 Ma based on the R 0-lacus . The resulting initial 10 Be/ 9 Be ratio (R 0-ODM ) of (5.61 ± 0.41) × 10 -9 was then used for the age calculations of all samples taken from the Guşteriţa locality. The authigenic 10 Be/ 9 Be ages of the samples from the Gușteriţa outcrop were calculated using both the initial ratio determined by Šujan et al. (2016) for lacustrine facies (R 0-lacus ) and the new initial ratio based on the ODM sample ODM-28 (R 0-ODM ) ( Table 1 and Fig. 11). Two groups of samples could be distinguished. Six samples (GUS1, GUS2 and GUS3 from Gușteriţa 1, 2 and 3 sections and samples G01, G20 and G25 from the Gușteriţa 4 section) attained ages in agreement with other geochronological proxies with a weighted mean age of 10.83 ± 0.26 Ma using R 0-lacus and 10.42 ± 0.39 Ma using R 0-ODM . These two ages are statistically identical within uncertainties. We consider the ages calculated by the local initial ratio (R 0-ODM ) to be the best estimates of the deposition age of the sediment succession at Gușteriţa; thus, these are discussed in the following. The remaining three samples (G06 to G14 from the Gușteriţa 4 section), however, exhibited higher isotopic ratios and yielded ages between 9.17 ± 0.74 Ma and 8.51 ± 0.70 Ma (R 0-ODM ). The estimated age of these samples overlapped within uncertainties with a weighted mean of 8.84 ± 0.42 Ma (N 0-ODM ), considerably younger than the mean age calculated using the other six samples. Depositional environment The abundant and diverse benthic life, represented by the body and trace fossils of the Gușteriţa outcrop, indicates oxygen-rich bottom conditions. Sand intercalations and the silt grain size suggest weak, but continuous flows, probably events of low-density turbidity currents, which maintained the permanent dissolved oxygen level. The occurrence of partial fish skeletons may indicate short periods of dysoxia, but there seems to be no disturbance in the permanent benthic life. The recovered fossil fish fauna refers to a warm to temperate water. It is composed of euryhaline taxa (tolerating a wide range of salinities) with variable habitat preferences. The mollusc and ostracod fauna consist of mostly deep-water or offshore species that live well below the storm wave base as suggested by their very thin shells. 
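The arithmetic behind the ages discussed above can be illustrated with a short sketch (Python). The decay constant and the ODM-derived initial ratio are the values quoted in the text; the example measured ratio is a hypothetical placeholder, and the simple inverse-variance weighting shown here is only a stand-in for the KDX-based calculation actually used.

import math

LAMBDA_BE10 = 4.997e-7        # 10Be decay constant quoted in the text [1/a]
R0_ODM = 5.61e-9              # initial ratio derived from the ODM sample (text value)

def age_from_ratio(measured_ratio, initial_ratio=R0_ODM):
    # depositional age [a] from the decay equation R(t) = R0 * exp(-lambda * t)
    return math.log(initial_ratio / measured_ratio) / LAMBDA_BE10

def initial_ratio_from_age(measured_ratio, age):
    # back-calculation used for samples whose age is fixed independently
    return measured_ratio * math.exp(LAMBDA_BE10 * age)

def weighted_mean(values, errors):
    # inverse-variance weighted mean and its 1-sigma uncertainty
    w = [1.0 / e ** 2 for e in errors]
    mean = sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
    return mean, math.sqrt(1.0 / sum(w))

print(age_from_ratio(3.1e-11) / 1e6)                       # placeholder ratio -> roughly 10.4 Ma
print(weighted_mean([10.3e6, 10.6e6], [0.5e6, 0.6e6]))     # illustrative pair of ages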
Extant relatives of some of the ostracod taxa live at salinities of 11.5-13.5‰ in sublittoral to profundal depths of the central and southern Caspian Basin. Based on the available and observed sedimentological and faunal characteristics, the depositional environment of the locality could be around the toe of slope (Krézsek et al., 2010). In the early Pannonian offshore sediments of the TB, two clearly different mollusc assemblages occur. The older one is the L. praeponticum assemblage, which contains small-sized pioneer mollusc species, such as L. praeponticum, Gyraulus vrapceanus, G. tenuistriatus, Gyraulus praeponticus, O. levis and Orygoceras fuchsi brusinai. A similar association is present in the entire PBS, probably representing a short time interval and a relatively deep-(sublittoral or profundal) and brackish-water stressed environment. This assemblage is only found at some localities in the central and eastern parts of the TB (Sztanó et al., 2005;Magyar, 2010). The younger assemblage is the C. banatica association, which indicates profundal water depth and a stable environment, and it can be found in the entire PBS as well. Characteristic species of the C. banatica biozone are the dominant C. banatica; thin-shelled cardiids, such as P. lenzi and P. syrmiense; L. undatum; pulmonate gastropods, such as G. tenuistriatus and G. praeponticus; the tiny scaphopod-like Orygoceras; Micromelania and lymnaeid snails. The index fossil of the youngest profundal Pannonian mollusc zone in the PBS, "Dreissenomya" digitifera, has not been recovered from the TB so far (Fig. 12). The age of the C. banatica zone was assessed by correlation with dinoflagellate and polarity zones in various locations (Magyar et al., 1999b;ter Borgh et al., 2013). Lying directly above the very thin, basal Pannonian (i.e. basal upper Miocene, <11.6 Ma) L. praeponticum or R. croatica zone, the bottom of the C. banatica zone can be dated as ca. 11.4 Ma, whereas its top is younger than the top of C5n chron (9.7 Ma), so it is ca. 9.6 Ma (Fig. 12). The biostratigraphic subdivisions based on ostracods are different within the territory of Lake Pannon, depending on the local character of the depositional environment (e.g. Pokorný, 1944;Kollmann, 1960;Sokač, 1972;Krstić, 1985;Jiřiček, 1985;Szuromi-Korecz, 1992;Olteanu, 2011;Rundić et al., 2011). In the TB, no comprehensive ostracod zonation has been established yet; therefore, various biostratigraphic schemes were applied at different localities (cf. Filipescu, 1996;de Leeuw et al., 2013;Kovács et al., 2016). In this study, we tentatively use the most detailed Pannonian biozonation, erected by Krstić (1985) in the southern part of the PB, which takes into consideration some basic differences in the depositional environment. Data on the numerical ages of these zones, however, are not available in the literature. Organic-walled microplankton assemblages, in particular dinocysts, are extensively used for the biostratigraphic subdivision of late Miocene sediments in the PBS. Dinocysts are the hypnozygotic resting cysts of the dinoflagellates representing a eukaryotic plankton group (Fensome et al., 1996). The majority of the late Miocene dinocysts from the PBS are endemic taxa that originate from marine dinocysts (e.g. Soliman and Riding, 2017). The brackish-water conditions of Lake Pannon initiated The S. oblongus zone is correlated to the upper part of C5r polarity zone and the lower part of C5n polarity zone indicating an age of ca. 
11.3-10.8 Ma for the entire biozone from the Hungarian part of the PBS (Magyar et al., 1999b;Magyar and Geary, 2012). The overlying P. pecsvaradense zone is magnetostratigraphically correlated to C5n chron (Magyar et al., 1999b). This zone is usually thin, representing a relatively short time interval in the Hungarian and Croatian parts of the PBS; therefore, it was tentatively dated between 10.8 and 10.6 Ma (Magyar and Geary, 2012). The base of the S. hennersdorfensis zone (former S. paradoxus zone) cannot be younger than the Pannonian sequence of the name-giving Hennersdorf outcrop. The age of the latter was estimated by Harzhauser et al. (2004) as 10.3-10.4 Ma based on the vertebrate fauna of Hennersdorf, Vösendorf and Inzersdorf (Daxner-Höck in Harzhauser et al., 2004) and cyclostratigraphic considerations (Harzhauser et al., 2008). Data on the numerical ages of endemic nannoplankton biozones have not been published yet. Dating and integrated stratigraphy In the TB, the age of both the oldest and the youngest Pannonian sediments is debated. Based on magnetostratigraphic correlations, Vasiliev et al. (2010) dated the Sarmatian-Pannonian boundary at 11.3 Ma, and de Leeuw et al. (2013) suggested an age of 8.4 Ma for the youngest erosional top of the Pannonian. In the central part of the TB, however, where the Sarmatian-Pannonian boundary is characterised by continuous a remarkable radiation among organic-walled dinoflagellates after the connection to the Eastern Paratethys and the Mediterranean region ceased around 11.6 Ma ago. Most of the newly emerged Pannonian taxa are exclusively known from the Central Paratethyan areas, the late Miocene sedimentary successions of the PBS and the Pliocene of the Dacian Basin in Romania, but some of them (e.g. Spiniferites cruciformis) are closely related to dinocysts occurring in the Pliocene-Pleistocene of the Black Sea and the Caspian Sea (e.g. Richards et al., 2018). The rapid morphological changes formed the basis of several regional biozonation schemes developed for the Hungarian and Croatian parts of the PBS (e.g. Sütő-Szentai, 1988, 2000Bakrač et al., 2012). The biozonation is primarily based on the different morphological variants of the Spiniferites Mantell, 1850 complex. The endemic nature of these dinocyst assemblages prohibits correlation to the Miocene-Pliocene dinocyst zones of the Mediterranean region or beyond (Magyar and Geary, 2012). Similarly, the taxonomy of Lake Pannon dinocysts is not without its problems due to the varied morphology of the cysts and is currently under revision (e.g. Soliman and Riding, 2017;Mudie et al., 2018). Here, the nomenclature of Sütő-Szentai (1988, 2000 updated with the most recent taxonomical developments from Soliman and Riding (2017) is applied. In particular, the term Spiniferites paradoxus zone of Sütő-Szentai (1988, 2000 is eliminated and changed to S. hennersdorfensis zone since S. paradoxus was renamed (Soliman and Riding, 2017). All magnetostratigraphic samples from the Guşteriţa 4 section show normal polarity, i.e. positive inclination values and declination values more than 270°. This signal may be the primary palaeomagnetic component according to the inclination values. Based on the biostratigraphic data mentioned earlier, the section can be correlated with the C5n.2n normal polarity magnetic chron (11.056-9.984 Ma;ATNTS2012 -Hilgen et al., 2012 (Fig. 13). 
The authigenic 10 Be/ 9 Be dating of the GUS1-3, G01, G20 and G25 samples gave a weighted mean age of 10.42 ± 0.39 Ma (N 0-ODM ), indicating that the outcrop is younger than 11 Ma. The considerable scatter (0.61 Ma) of the ages did not enable to identify a trend of increasing age with depth. This is indicative of sedimentation rates at which the age difference between the bottom and top of the studied succession remains within the uncertainties of the authigenic 10 Be/ 9 Be method. Authigenic 10 Be/ 9 Be ratios of the three samples from Gușteriţa 4 (G06 to G14) differ from the remaining samples (Fig. 11). This discrepancy might be explained by a change in the initial isotopic ratio within the depositional environment or by a post-depositional transport of beryllium isotopes. The basin floor environment with turbidite flows is prone to mixing of various sources of sediment, depending on the depositional system proximity and river drainage basin pattern. The continuous growth of the authigenic rims around the clay particles causes that the duration of a particle transport (sediment-source sedimentation in deep water (Sztanó et al., 2005;Sütő and Szegő, 2008;Filipescu et al., 2011), the 11.62 and 11.65 Ma 40 Ar/ 39 Ar age data gained from an andesitic tuff from the uppermost part of the more than 1-km-thick Sarmatian at ODM (Vasiliev et al., 2010) is a very solid argument in favour of a ca. 11.6-Ma-old boundary (similar to other parts of the PBS established by e.g. Paulissen et al., 2011 andter Borgh et al., 2013). The 8.4 Ma age is based on the combination of palaeomagnetic measurements from Viforoasa and Șoimușu Mic and seismic stratigraphy (de Leeuw et al., 2013), but for the time being, no fossil remains younger than 9.0 Ma and no deep-water fauna younger than 9.6 Ma are known from the TB to confirm this hypothesis. Although we are aware that the chronostratigraphic value of Lake Pannon biozones requires further testing and confirmation in the future, here we use them as biochronozones with supposedly synchronous boundaries across the entire PBS. The mollusc record of the Gușteriţa outcrop indicates the C. banatica zone (11.4-9.6 Ma, according to Magyar and Geary, 2012). The presence of the S. hennersdorfensis zone in the uppermost layers of the Gușteriţa outcrop indicates that the top of the Gușteriţa sequence cannot be younger than 10.5 Ma, because the long-known and well-studied Pannonian sublittoral clays representing the S. hennersdorfensis zone at Vienna (Soliman and Riding, 2017) have recently been dated between 10.5 and10.3 Ma (Harzhauser et al., 2004, 2008). Based on this age model and supposing that the ~0.11 m/kyr average sedimentation rate was more or less constant during deposition of the sequence, we have the opportunity to estimate, for the first time, the age of the boundaries between the S. oblongus and P. pecsvaradense dinoflagellate zones, the H. tenuistriata and P. candeo ostracod zones (both ~10.75 Ma) and the P. pecsvaradense and S. hennersdorfensis dinoflagellate zones (~10.65 Ma). Conclusions The 55-m-thick, highly fossiliferous sedimentary sequence exposed in the clay pit of Gușteriţa (Sibiu, Romania) was deposited in the deep-water zone of Lake Pannon during the late Miocene. It can be considered as a reference section for the "C. banatica beds", widely distributed in the TB as well as in the neighbouring PB. The upper 25 m of the profile displays normal magnetic polarity. 
As the authigenic 10 Be/ 9 Be dating of six samples gave a weighted mean age of 10.42 ± 0.39 Ma (initial ratio based on independent 40 Ar/ 39 Ar dating of an analogous profile at ODM), the outcropping sequence can be correlated most probably with the C5n.2n normal polarity chron (~11.1-10.0 Ma). While the entire sequence represents the C. banatica profundal mollusc biozone, the upper 25 m belongs to three dinoflagellate zones, two ostracod zones and one regional calcareous nannoplankton zone. Because the S. hennersdorfensis dinoflagellate zone, dated elsewhere as 10.5-10.3 Ma, occurs only in the topmost layers of the outcrop, the age of the Gușteriţa sequence can be constrained between 11.0 and 10.5 Ma; the section thus represents a time interval of maximum 500 kys. Supposing that the (at least) ~0.11 m/kyr average sedimentation rate was more or less constant during deposition of the sequence, the age of the boundaries between the H. tenuistriata and P. candeo ostracod zones, the S. oblongus and P. pecsvaradense dinoflagellate zones (both ~10.75 Ma) and the P. pecsvaradense and S. hennersdorfensis dinoflagellate zones (~10.65 Ma) can be substantiated for the first time. These new data are valuable contributions to the high-resolution biochronostratigraphy of the PBS. Cluj-Napoca, Romania) supported our work with his wide knowledge of field geology and by offering some hardly obtainable pieces of Romanian literature. Collection of old literature could not have happened without our librarians, Monica Baciu (Babeș-Bolyai University, Cluj-Napoca, Romania) and Tímea Szlepák (Mining and Geological Survey of Hungary, Budapest, Hungary). Assistance in the collections of the Paleontology-Stratigraphy Museum, Babeș-Bolyai University, Cluj-Napoca, Romania, and Brukenthal Museum, Sibiu, Romania, was provided by Liana Săsăran and Nicolae Trif. We would like to thank the help of Krisztina proximity) also affects the resulting isotopic ratio (Wittmann et al., 2017). The gained results indicate that a change in beryllium isotopic input might appear within the studied sedimentary section. A backward calculation of the initial ratio of the samples G06 to G14 assuming their age to be in agreement with the weighted mean of the rest of the samples of the same set (10.42 ± 0.39 Ma) yielded initial ratios between (14.6 ± 0.80) × 10 -9 and (10.05 ± 0.53) × 10 -9 , with a mean value of (12.39 ± 2.07) × 10 -9 , what differs from both applied initial ratios by a factor of ~2. Although the fossils and the sedimentary facies do not indicate any major change, a study of sediment provenance may prove the hypothesis of change in sediment source as the cause of observed discrepancy in 10 Be/ 9 Be concentrations. Another possible explanation for the change in the initial ratio in a water column during sedimentation could be an overall rise of the water level of Lake Pannon, which attained its largest extent at ~10 Ma (Magyar et al., 1999a). This transgression was probably related to flooding and retrogradation of a depositional system, causing increase in distality of the sediment source and decrease in the 9 Be input. A significant change in precipitation, which would affect the delivery of 10 Be, is not expected in the studied time period. The calculated mean initial ratio of samples G06 to G14 provides an important insight into the variability of the initial isotopic ratios within the depositional environment of sediment gravity flows on a basin floor. 
Nevertheless, the changes observed in the studied depositional record could be considered as not significant in the light of the high variability of 10 Be/ 9 Be ratios in recent continental environments reaching the value of 3.5 × 10 -8 to 1.55 × 10 -10 (e.g. Brown et al., 1992;Graham et al., 2001;Wittmann et al., 2012;. The 10 Be/ 9 Be record of the Gușteriţa section implies that analysing of higher number of samples might be useful to effectively determine fluctuations of isotopic ratios, which should be expected in comparable depositional settings. The approach of independent calculation of the initial isotopic ratio could substitute its determination from the recent samples, which would be problematic in sedimentary successions similar to those of Lake Pannon in the TB. The circumstances that call for using the above-mentioned approach are as follows: (1) there is no recent equivalent of turbiditic depositional environment and (2) major changes appeared in the petrology of the drainage basins since the late Miocene, mostly because of the latest Miocene to Quaternary volcanism (Fielitz and Seghedi, 2005) in the catchment areas of the incoming rivers. Although ages calculated using the R 0-lacus determined by Šujan et al. (2016) in Holocene lakes in the western PB provided statistically similar results, the ages calculated by the local, 40 Ar/ 39 Ar-based initial ratio (R 0-ODM ) are suggested to be the best estimate authigenic 10 Be/ 9 Be age of the studied sediments. The 55-m-high Gușteriţa section thus represents a time interval of 500 kys at most (between 11.0 and 10.5 Ma).
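As a worked illustration of the constant-rate age model invoked above, the sketch below converts a height above the section base into an age using the 11.0-10.5 Ma bracket and the 55 m thickness given in the text; the sample height in the example is a placeholder, not a measured datum.

def linear_age(height_above_base_m, base_age_ma=11.0, top_age_ma=10.5, thickness_m=55.0):
    # constant sedimentation rate: 55 m / 0.5 Ma = 110 m/Ma, i.e. ~0.11 m/kyr
    rate_m_per_ma = thickness_m / (base_age_ma - top_age_ma)
    return base_age_ma - height_above_base_m / rate_m_per_ma

# a horizon 27.5 m above the base (placeholder height) would map to ~10.75 Ma under this model
print(round(linear_age(27.5), 2))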
2020-03-14T13:23:16.526Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "6d505f4f5458d71fc62bf8800fcb4d280158f588", "oa_license": null, "oa_url": "https://doi.org/10.17738/ajes.2019.0013", "oa_status": "GOLD", "pdf_src": "DeGruyter", "pdf_hash": "6d505f4f5458d71fc62bf8800fcb4d280158f588", "s2fieldsofstudy": [ "Geology", "Environmental Science" ], "extfieldsofstudy": [ "Geology" ] }
6931924
pes2o/s2orc
v3-fos-license
Radiation-driven lipid accumulation and dendritic cell dysfunction in cancer

Dendritic cells (DCs) play important roles in the initiation and maintenance of the immune response. The dysfunction of DCs contributes to tumor evasion and growth. Here we report our findings on the dysfunction of DCs in radiation-induced thymic lymphomas, the up-regulation of the expression of lipoprotein lipase (LPL) and fatty acid binding protein (FABP4), and the increased level of triacylglycerol (TAG) in serum after total-body irradiation, which contribute to lipid accumulation in DCs. DCs with high lipid content showed low expression of co-stimulatory molecules and DC-related cytokines and were not able to effectively stimulate allogeneic T cells. Normalization of lipid abundance in DCs with an inhibitor of acetyl-CoA carboxylase restored the function of DCs. A high-fat diet promoted radiation-induced thymic lymphoma growth. In all, our study shows that the dysfunction of DCs in radiation-induced thymic lymphomas was due to lipid accumulation and may represent a new mechanism in radiation-induced carcinogenesis.

Total-body irradiation. A 60Co irradiator was used for total-body ionizing irradiation. Un-anaesthetized mice were placed in well-ventilated plastic boxes and exposed to the 60Co-γ radiation at a distance of 3 m from the source. Four weekly sub-lethal doses of 1.75 Gy gamma-ray irradiation were delivered at a dose rate of 0.58 Gy/min as described previously [22-24]. The mice were then removed from the plastic box and allowed free access to food and drinking water. To evaluate lymphoma incidence, three weeks after the 6 Gy gamma-ray exposures the thymuses were isolated from C57BL/6 mice. The thymus was inspected, the number of mice with lymphoma was recorded, and the lymphoma incidence was calculated.

RNA extraction and real-time q-PCR. RNA was extracted with Trizol reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocol. The cDNA synthesis and real-time qPCR were subsequently performed using the Qiagen system as described in detail in previous studies [25]. The primers used are listed in Supplementary Table S1.

Cell purification and preparation. Three weeks after the 6 Gy gamma-ray exposures, thymus and spleen were isolated from C57BL/6 mice. Single cells were prepared by mechanical disruption and red cell depletion. These cells were collected separately and purified with anti-CD11c microbeads as DCs.

Assays for Ag-specific CD4+ T cell response. For the assay of Ag-specific CD4+ T cell proliferation, splenic CD4+ T cells from DO11.10 OVA323-339-specific TCR-transgenic mice were positively selected with anti-CD4-coated microbeads (Miltenyi Biotec) by MACS and then cocultured with DCs treated as indicated, in the presence of OVA323-339 peptide, at a ratio of 1:10 (DC:T) in round-bottom 96-well plates (1 × 10^5 T cells/200 μl/well) for 5 days. Proliferation of T cells was analyzed by double staining with anti-CD4 and 7-AAD; viable (7-AAD−) cells were counted by FACS.

Serum preparation and co-culture with DCs. Whole blood was collected in test tubes by removing the eyeball. 30 min later, the clot was removed by centrifuging at 2000 g for 10 min. The supernatant was collected as serum [26]. DCs were co-cultured with serum for 60 h (with or without TOFA); the DCs were then washed with PBS for further experiments.

Preparation of DCs from mouse bone marrow. DCs were prepared from bone marrow progenitors according to a published method [27], with minor modifications.
Figure 1 | The phenotype, cytokine, number and T cell proliferation stimulating ability of DC in thymic lymphomas. Thymic lymphomas were isolated from 15 mice with radiation-induced thymic lymphomas. Total mRNA was then extracted for qRT-PCR analysis. Data were normalized to GAPDH. The expression in normal thymus tissues was arbitrarily defined as 100% (A). Cytokine levels were assayed by qRT-PCR. Data were normalized to GAPDH. The expression in normal thymus tissues were arbitrarily defined as 100%. (B). Thymic lymphomas were freshly isolated from C57BL/6 mice with thymic lymphomas. After acquiring single cells, these cells were double-stained with CD11c and co-stimulator or cytokine molecular FACS antibody. The mean fluorescence index of other molecules in the gate of CD11c positive was assayed. Normal thymic tissues were used as a control (C). After the preparation of single cell from the thymus or spleen, the percentage of CD11c positive cells was calculated (D). The CD4 1 T cells from DO11.10 OVA 323-339 specific (TCR-transgenic 3 C57BL/6) F hybrid mice were cocultured with DCs from thymic lymphomas or from the spleen of radiationtreated mice in the presence of OVA peptides. 5 days later, the total number of viable CD4 1 (CD4 1 -7-AAD 2 ) cells in each well was measured by FACS analysis (E). All data are presented as the mean 6 s.d. of three separate experiments. *P , 0.05. www.nature.com/scientificreports SCIENTIFIC REPORTS | 5 : 9613 | DOI: 10.1038/srep09613 Bone marrow mononuclear cells were prepared from mouse (5-6 weeks old) femur bone marrow suspensions by depletion of red cells and then were cultured at a density 2 3 10 6 cells/ml in 6-well plates in RPMI 1640 medium supplemented with 10% FCS, 10 ng/ml of recombinant mouse granulocyte-monocyte colony-stimulating factor and 1 ng/ml of recombinant mouse IL-4. Nonadherent cells were gently washed out on day 4 of culture. At day 5, the dendritic proliferating clusters were collected and purified by anti-CD11c microbeads as immature DCs. Analysis of cell surface marker expression and cytokine intracellular staining. Fluorescein-conjugated monoclonal antibodies recognizing Ia, CD40, CD80, CD86, CCR7, and the respective isotype controls were purchased from BD-PharMingen. Before fluorescent antibody staining, all cells were incubated for 15 min at 4uC with antibody to CD16/CD32 at a concentration of 1 mg per 1 3 10 6 cell per 100 ml and cells were incubated for a further 30 min at 4uC. The cells were washed once with icecold PBS, pH 7.2, containing 0.1 NaN3 and 0.5% BSA and were resuspended in 300 ml PBS. Flow cytometry was done with a FACSCalibur and data were analyzed with CellQuest software (BD Biosciences). The phenotype of cells from thymus, spleen were analyzed by LSR II flow cytometry (BD Biosciences) as described previously 12,24 . In details, the CD11c 1 phenotype cells was treated as DCs. Cells were double-staining with CD11c-Ab and other molecular-Ab, then in the gate of CD11c positive, the other molecular expressions were assayed. IL-12 in cells assay: brefildin A was added for 7 h, then fixed by 4% paraformaldehyde for 30 min, then perforated by 0.1% saponin for 30 min, and co-cultured with mAb-IL-12 conjugated with FITC for 1 h, 4uC. The level of IL-12 in cells was assayed by FACS. Assay for cytokines, TAG, TC and Glucose. Cytokine in the supernatant of the DClipid system were assayed with ELISA kits (R&D System). 
Concentration of triacylglycerol (TAG), total cholesterol (TC) and glucose were assayed by Laboratory Medicine department of Changhai hospital, The Second Military Medical University. TAG, TC and glucose were assayed by Triglyceride Quantification Kit (ab65336), Cholesterol Quantification Kit (ab65359), Glucose Assay Kit (ab65333) separately (Abcam, Cambridge, UK). Lipid content analysis. To analyze the lipid content in cells, the lipophilic fluorescent dye BODIPY 493/503 was used. BODIPY 493/503 dye is bright, green fluorescent dye with similar excitation and emission to fluorescein (FITC). Cell were then washed and resuspended in 500 ml of BODIPY 493/503 at 0.5 mg/ml in PBS. Cells were stained for 15 min at 20uC before the analysis. All experiments with BODIPY performed on LSRII. Statistical Analysis. Data were presented as the mean 6 s.d. from at least three independent experiments. The difference between groups were analyzed using twotailed Student's t test when only two groups were compared. The difference between groups were analyzed using ANOVA when three or more than three groups were compared. Correlation analysis was performed by two-tailed Person's correlation coefficient analysis. Mice survival was determined by Kaplan-Meier analysis. Statistical analyses were performed using SPSS software (version 17.0). P , 0.05 was considered significantly different. Results DCs dysfunction in radiation-induced thymic lymphomas. We compared the gene-expression profiles of radiation-induced thymic lymphomas and normal adjacent-matched thymus tissues by gene microarray. This comparison revealed that the expression of genes for co-stimulatory molecules and cytokines associated with DCs function in thymic lymphomas were down-regulated (data not shown). To confirm this finding, we analyzed the expression of costimulatory molecules genes and cytokines in thymic lymphomas and appropriate control tissues by qRT-PCR, and found lower levels of Ia, CD86, CD83, CD80, CCR-7, CD40 and CD11c in thymic lymphomas (Fig. 1A), and higher levels of IL-6, TGF-b, and lower level of IL-12 (Fig. 1B). Furthermore, by gating of the CD11c 1 DCs cell population using flow cytometry analysis, we confirmed that the Ia, CD86, CD80, CCR7, CD40 and IL-12 proteins of CD11 1 DCs were all down-regulated in thymic lymphomas (Fig. 1C). Since the percentage of DCs in the thymus and spleen indicated that thymic lymphomas have a small percentage of DCs in the thymus and spleen (Fig. 1D), we decided to test function of DCs directly. Therefore, DCs in thymus and spleen were acquired by CD11c 1 sorting and were then tested for their T cell stimulating ability in a T cell proliferation experiment. We found that DCs from thymus or spleen both showed reduced T cellstimulating ability (Fig. 1E). These data confirmed the dysfunction of thymus and splenic DCs of mice with radiation-induced thymic lymphomas. Serum from thymic lymphomas mice induced the dysfunction of DCs. Previously we showed that immunosuppressive cytokines secreted by tumor cells can lead to dysfunction of DCs 12 . Thus we hypothesized that the immunosuppressive cytokines in the serum of radiation-induced thymic lymphomas mice may impair the functional ability of DCs. To test this hypothesis, we co-cultured DCs with serum from mice with radiation-induced thymic lymphomas for 60 h, and then examined the function and cytokines of DCs. We found decreased expression of several cytokines on the surface of DCs, including CD80, CD86, Ia, CD40 and CCR7 ( Fig. 2A). 
Additionally, expression was down-regulated for DCs secreted cytokines IL-12p40, IL-1 and IFN-c (Fig. 2B). To identify the serum factors contributing to these effects, we measured the immunosup- pressive cytokines level in serum from mice with radiation-induced thymic lymphomas, and found that the level of TGF-b and IL-6 were higher than in the control (Fig. 2C). Thus, it is possible that TGF-b and IL-6 in serum may be involved in the DCs dysfunction in thymic lymphoma. To confirm the role of TGF-b and IL-6, we performed a T cell proliferation experiment in the presence or absence of TGF-b and IL-6 neutralizing antibody in DCs-serum system separately, and found that only anti-TGF-b could partly restore the T cell stimulating function of DCs (Fig. 2D). Accordingly, we concluded that another immunosuppressive factors may exist in the serum of mice with radiation-induced thymic lymphomas. Triacylglycerol up-regulation in serum and lipid accumulation in DCs. The results from microarray-base gene expression analysis showing the down-regulation of LPL and FABP4 in mice with radiation-induced thymic lymphomas hinted at the involvement of these unknown immunosuppressive factors in thymic lymphomas pathogenesis. For this reason, qRT-PCR analysis was performed to measure expression in different cell types. The levels of LPL and FABP4, both of which are involved in lipid uptake and metabolism, was down-regulated in the thymus, spleen and PBMC of mice with thymic lymphomas, compared to the levels in mice that did not receive radiation treatment (Fig. 3A). Next, we tested the level of triacylglycerol (TAG), total cholesterol (TC) and glucose in the serum of mice with thymic lymphomas, and found that TAG level in serum of thymic lymphomas mice was increased greatly (Fig. 3B). As a high level of TG could lead to lipid accumulation in DCs and their dysfunction 15 , we tested the TAG level in splenic and thymic DCs, and found that the TAG level in DCs from thymic lymphomas mice was higher than in control mice (Fig. 3C). In addition, the lipid content in DCs from mice with thymic lymphomas was also higher (Fig. 3D). Lipid accumulation led to the dysfunction of DCs. A series of in vitro experiments were conducted to test whether lipid accumulation led to the dysfunction of DCs. We co-cultured DCs with different TAG concentrations, then measured lipid accumulation, DCs surface function molecular markers, cytokines levels, and T cell stimulating ability. The results indicate the increasing TAG led to lipid accumulation in DCs (Fig. 4A). Moreover, TAG could also down-regulate the expression of CD86 and Ia (Fig. 4B), and of IL-12p40, IL-1, IFN-c level (Fig. 4C) in a concentration-dependent manner. Most importantly, TAG also reduced the T cell proliferation stimulating ability of DCs in a dose-dependent way (Fig. 4D), with a high correlation between the number of proliferating T cells and the lipid content of DCs (Fig. 4E). Inhibition of lipid accumulation restored the DC function. To confirm the direct involvement of lipid accumulation in the dysfunction of DCs, rescue experiments were performed, To do this, fatty acid levels were regulated with an inhibitor of acetyl-CoA carboxylase, 5-(tetradecycloxy)-2-furoic acid (TOFA) 28 . Since TAG undergo rapid degradation in the cells, maintaining requires active fatty acid synthesis 15 . When synthesis is blocked, cells are unable to sustain high levels of triacylglycerol (Fig. 5A). 
FACS analysis showed that CD80, CD86, Ia and CCR7 expression was In vitro bone marrow progenitor-derived DCs were treated with TAG (1, 10 or 100 mmol/L) separately. Next, lipid content was analyzed in DCs by BODIPY493/503 staining (A). After pretreatment of DCs (1 or 10 mmol/L TAG cocultured), the expression of CD86, Ia and CCR7 expression in DCs were analyzed by FACS analysis (B). After the pretreatment of DCs (1 or 10 mmol/L TAG co-cultured), DCs were washed by PBS 3 times, 24 h later, the supernatant of cells (6 3 10 5 cells/well) was collected for ELISA assay (C). After pretreatment of DCs (1, 10 or 10 mmol/L TAG co-cultured), DCs were washed by PBS 3 times, harvested, and further co-cultured with CD4 1 T cells from DO11.10 OVA 323-339 -specific (TCR-transgenic 3 C57BL/6) F1 mice for 5 days in the presence of OVA. Finally, the number of viable CD4 1 T cells (CD4 1 7-AAD 2 ) was detected by FACS (D). The correlation between the number of viable CD4 1 T cells and the lipid content in DCs (BODIPY493/503 staining) was analyzed by two-tailed Person's correlation coefficient analysis (E). All data are presented as mean 6 s.d. of three independent experiments. *P , 0.05. www.nature.com/scientificreports SCIENTIFIC REPORTS | 5 : 9613 | DOI: 10.1038/srep09613 restored by TOFA treatment (Fig. 5B). Significantly, in the presence of serum of mice with thymic lymphomas, the treatment with TOFA considerably improved the ability of DCs to stimulate T cell proliferation (Fig. 5C). Hence, the rescue experiments results confirmed the role of lipid in DCs function. High fat dietary promoted radiation-induced thymic lymphomas growth. The above results indicated that lipid accumulation led to dysfunction of DCs, which in turn promoted thymic lymphomas growth. Accordingly, we conjectured that a high fat dietary (HFD) enhanced radiation-induced thymic lymphomas by impairing the function of DCs. To explore this possibility, after radiation, we fed mice with high fat dietary or normal fat dietary (NFD) for 15 weeks, then thymus weight and DCs were measured. We found that HFD promoted thymic tissue growth (Fig. 6A), and decreased the percent of DCs in thymus (Fig. 6B). In mice with radiation-induced thymic lymphomas, the HFD increased thymus weight (Fig. 6C), and decreased the percentage of DCs in the thymus (Fig. 6D). Importantly, HFD increased the incidence of radiation-induced thymic lymphomas in mice (Fig. 6E), and survival analysis revealed that the HFD led to diminished lymphoma-free survival rate (Fig. 6F). Discussion In this study, we primarily found dysfunction of DCs in radiation induced thymic lymphomas. Subsequently we found, in vitro, that the serum of mice with thymic lymphomas led to lipid accumulation in bone marrow-derived DCs and their dysfunction. The key factor in this process was proven to be TAG. In a previous study, lipid accumulation and dysfunction in DCs were also identified in lymphomas 15 . We presume that the accumulation of lipids might be due to increased synthesis of fatty acids or may result from increased lipid uptake from plasma. Radiation up-regulated the LPL and FABP4 expression, and the high level of TAG in serum led to the lipid accumulation in DCs. Interestingly, in the previous study 15 , DCs from tumor-bearing mice showed preferential up-regulation of the macrophage scavenger receptor (Msr1, or CD204), and scavenger receptors represent a major route in the acquisition of fatty acids by DCs and macrophages [29][30][31] . 
In this work, the levels of Msr1 were not measured, but may similarly be up-regulated in DCs. Lastly, we also found that HFD promoted radiation-induced thymic lymphomas growth. This finding is consistent with other studies 32 . Indeed, diet-induced obesity has many consequences including pathologies of diverse organ systems as well as cancers of the liver, kidney, and pancreas. In addition, our data highlight the role of HFD in radiation-induced carcinogenesis. Whether low fat diet has a radiation protective role is an important outstanding question. We previously showed that HMGB1 was released from radiationinduced dying thymus cells. HMGB1 in turn activated TLR4 and elevated the pro-tumor factors IL-6 and miR-21, together with other In the presence of TOFA, immature DCs were co-cultured with serum from mice with radiation-induced thymic lymphomas for 60 h, DCs were stained with BODIPY493/503 for lipid content analysis (A). In the presence or absence of TOFA, the expression of CD80, CD86, Ia, CD40 and CCR7 of serum treated DCs was analyzed by FACS analysis (B). Immature DCs were cocultured with serum from mice with radiation-induced thymic lymphomas for 60 h in the presence of TOFA, harvested and then further cocultured with CD4 1 T cells from DO11.10 OVA 323-339 -specific (TCR-transgenic 3 C57BL/6) F1 mice for 5 days in the presence of OVA. Finally, the number of viable CD4 1 T cells (CD4 1 7-AAD 2 ) was detected by FACS analysis(C). All data are presented as the mean 6 s.d. of three independent experiments. *P , 0.05. www.nature.com/scientificreports SCIENTIFIC REPORTS | 5 : 9613 | DOI: 10.1038/srep09613 Figure 6 | High-fat diet increased lymphoma incidence and reduced the survival rate of radiation-treated mice. Twenty C57B/L6 mice were fed a high fat diet (HFD) for 15 weeks, then the thymus was isolated and weighted, 20 mice fed with normal fat diet (NFD) served as the control (A). Single cells were prepared, and stained with Ab to CD11c. The CD11c positive cells were analyzed by FACS analysis (B). Twenty mice with radiation-induced thymic lymphomas mice were fed with the HFD or NFD, and the weight of the thymus was evaluated. 20 WT mice were used as the control (C). Twenty mice with radiation-induced thymic lymphomas were fed the HFD or NFD, and the percentage of DCs in thymus was examined by FACS analysis. Twenty WT were used as control (D). Twenty C57B/L6 mice were fed with the HFD for 15 weeks, the other 20 mice were fed the NFD as the control. In the last four weeks, mice received radiation treatment as described in the method. All mice were then euthanized to assess lymphoma incidence (E). The survival status of these treated mice were recorded after radiation treatment. The Kaplan-Meier curves were drawn according to the fat diet (F). All data are presented as the mean 6 s.d. of three independent experiments. *P , 0.05. www.nature.com/scientificreports SCIENTIFIC REPORTS | 5 : 9613 | DOI: 10.1038/srep09613 important factors like MMP9 and miR-155, to induce carcinogenesis 5 . Here, using a similar radiation-induced thymic lymphoma model, we found the dysfunction of DCs, though HMGB1 could contribute to anticancer chemotherapy and radiotherapy via DCs 33 . The context in radiation induced carcinogenesis is very complex and a number of factors and cells are involved. The discrepancy between studies may be due to analysis of different cross-sections data. DCs seem to be the key factor in radiation-induced carcinogenesis, and showed different roles different contexts. 
In conclusion, we have confirmed the dysfunction of DCs in radiation-induced thymic lymphomas. Up-regulation of TAG in serum led to lipid accumulation and dysfunction in DCs. Our data highlight the role of DCs in radiation-induced thymic lymphomas and reveal a new mechanism of radiation-induced carcinogenesis.
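For readers who want to retrace the group comparisons reported above (mean ± s.d. of three independent experiments, two-tailed Student's t test for two groups, one-way ANOVA for three or more groups, and Pearson correlation), a minimal sketch using SciPy is given below. The arrays and group labels are illustrative placeholders, not the authors' data.

```python
import numpy as np
from scipy import stats

# Illustrative measurements from three independent experiments (not real data).
control = np.array([100.0, 96.0, 104.0])     # e.g. relative CD86 expression, %
irradiated = np.array([62.0, 55.0, 70.0])

# Two groups: two-tailed Student's t test, as described in the Methods.
t_stat, p_two_groups = stats.ttest_ind(control, irradiated)

# Three or more groups: one-way ANOVA.
tag_low, tag_mid, tag_high = [98, 102, 100], [80, 75, 85], [55, 60, 52]
f_stat, p_anova = stats.f_oneway(tag_low, tag_mid, tag_high)

# Correlation between lipid content and viable T cell counts: Pearson's r.
lipid_content = np.array([1.0, 1.8, 2.5, 3.1, 4.0])
viable_t_cells = np.array([9.5e4, 7.2e4, 5.1e4, 4.0e4, 2.8e4])
r, p_corr = stats.pearsonr(lipid_content, viable_t_cells)

print(f"t test p={p_two_groups:.3g}, ANOVA p={p_anova:.3g}, "
      f"Pearson r={r:.2f} (p={p_corr:.3g})")
```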
Multiplex ligation‐dependant probe amplification study of children with idiopathic mental retardation in South India Intellectual disability, also called as mental retardation (MR) is a generalized disorder branded by significantly impaired cognitive functioning existing concurrently with related limitations in two or more of the adaptive skill areas such as community use, self‐direction, health and safety, functional academics, leisure, and work and appears before the age of 18 years. With regard to the intellectual criteria for the diagnosis of MR, intelligence is generally defined by an intelligent quotient (IQ) test score of approximately 70 or below. Levels of MR has been classified by Diagnostic and Statistical Manual of Mental Disorders IV‐Text Revision (DSM‐IV‐TR) based on IQ scores; mild to moderate MR being >50 and moderate to severe MR is <50. The prevalence of MR is 2‐3% of the general population.[1] MR may be caused by a wide range of factors that, together, contribute to its pathogenesis. In different studies on the etiology, the diagnostic yield appears to be highly variable.[2] This variation is likely attributable to difference in methodology, classification and terms used for diagnosis.[3] Genetic disorders most commonly found in individuals with developmental delay/ MR are chromosomal aberrations at about 10%.[4,5] Down syndrome (DS) is the most common with MR associated chromosomal abnormality,[6] Fragile X syndrome (FXS) is 2nd most common form and inherited MR. Besides DS and FXS, chromosome aberrations are so common. Conventional cytogenetics is the primary diagnostic tool Introduction Intellectual disability, also called as mental retardation scores; mild to moderate MR being >50 and moderate to severe MR is <50. The prevalence of MR is 2-3% of the general population. [1] MR may be caused by a wide range of factors that, together, contribute to its pathogenesis. In different studies on the etiology, the diagnostic yield appears to be highly variable. [2] This variation is likely attributable to difference in methodology, classification and terms used for diagnosis. [3] Genetic disorders most commonly found in individuals with developmental delay/ MR are chromosomal aberrations at about 10%. [4,5] Down syndrome (DS) is the most common with MR associated chromosomal abnormality, [6] Table 2. The MLPA analysis was carried out as described by Schouten et al., [9] with minor modifications. Reaction products were electrophoresed in the ABI-PRISM 3130 genetic analyzer (Applied Biosystems, USA). Table 1] in subjects with MR. Materials and Methods A total of 122 subject of age group between 3 years and 18 years with developmental disabilities were recruited. The IQ of all the study subjects (above 5 years) were determined to be below 70 (Wechsler intelligence scale for children). The etiology of developmental delay/ or intellectual disability was found out using inclusion and exclusion criteria [7] such as (1) dysmorphologic examination (e.g., minor anomalies and malformations), (2) neurologic examination (e.g., electroencephalography, neuroimaging), (3) cytogenetic screening (G-bands by Trypsin using Giemsa -GTG banded metaphases) and (4) fragile X screening (FraX site using M199 medium and polymerase chain reaction (PCR) analysis for Fragile X Mental Retardation 1 (FMR1) gene, Cytosine-Guanine-Guanine (CGG triplet repeats). Cytogenetic analysis was carried out as per standardized method [8] in every recruited patient. 
Genomic DNA was extracted and purified from peripheral blood samples by using standardized phenol-chloroform method as well as Qiagen DNA extraction kit. A total of 10 normal The variability of the normalized values for each probe in these control samples were determined and were plotted using Origin as shown in Figure 1 and the probes showed a homogenous behavior, and the normalized values clustered around one although slight differences on skewness and range were observed for some of them. Hence, not all probes conferred the same reliability taking into account the borderline results that were obtained. The results obtained with the same assay in normal men and women confirmed the reliability of our data. The study was approved by the Institutional Ethical Committee and a written informed consent was obtained from all participants. Results A total of 122 children with developmental delay/ or MR were selected to study cytogenetic and MLPA analysis [ Figure 2 and SEMA7A detected in one patient each, which were reported by two study groups. [10,11] No subtelomeric deletions or duplication were observed in the present study group. Discussion In this study, we present the results of the submicroscopic screening by MLPA in 122 unexplained intellectual disability patients with normal Karyotypes. The results of the present study highlight the benefit of submicroscopic genomic screening in unexplained MR. This paper reports a group of patients with developmental delay and/or mild to moderate intellectual disability recruited to find out genetic causes using cytogenetic and MLPA analysis. The purpose is to unearth the submicroscopic subtelomere deletions that could be missed by routine conventional cytogenetics. Recent methodologies for assessing the genomic imbalance at submicroscopic subtelomeres concluded that MLPA to be a robust technique in a diagnostic set-up although the use of real-time PCR and microarrays may become more widespread with their availability on an affordable commercial platform. [18,19] The strategy of replacing conventional cytogenetic analysis by MLPA in a diagnostic set-up has been suggested in a recent study, if follow-up by microarray analysis is feasible as it may prove effective in terms of detection rate and cost-effectiveness. [20] MLPA is an extremely efficient, medium-throughput technique. For example, the cytogenetic findings in this study showed normal karyotypes in 99% of cases but with MLPA, the study detected in 9% of the patients with micro-deletions as given in Table 4 Hence, MLPA method gives a better yield in comparison with karyotype analysis. MLPA can also be considered as an ideal approach until microarray testing gets validated as routine diagnostic tool, and becomes economical enough to replace karyotyping as the 1 st test for idiopathic MR patients.
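The MLPA readout described above ultimately reduces to comparing normalized probe signals against reference samples. The sketch below illustrates that normalization step; the calling thresholds of roughly 0.7 for a deletion and 1.3 for a duplication are commonly used values assumed here for illustration, not thresholds stated by the authors, and diagnostic laboratories validate their own cut-offs.

```python
import numpy as np

def mlpa_dosage_ratios(sample_peaks, reference_peaks, del_thr=0.7, dup_thr=1.3):
    """Intra-sample normalization of MLPA peak areas, then comparison to a reference.

    sample_peaks / reference_peaks: arrays of peak areas, one entry per probe.
    Thresholds are illustrative only.
    """
    sample = np.asarray(sample_peaks, dtype=float)
    reference = np.asarray(reference_peaks, dtype=float)
    # Normalize each profile to its own total signal (intra-sample normalization).
    sample_norm = sample / sample.sum()
    reference_norm = reference / reference.sum()
    ratios = sample_norm / reference_norm
    calls = np.where(ratios < del_thr, "deletion",
                     np.where(ratios > dup_thr, "duplication", "normal"))
    return ratios, calls

# Hypothetical probe peak areas for a patient and a matched normal control.
ratios, calls = mlpa_dosage_ratios([1200, 580, 1010, 990],
                                   [1150, 1100, 1020, 980])
print(ratios.round(2), calls)
```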
Effect of Heat and Mass Transfer and Magnetic Field on Peristaltic Flow of a Fractional Maxwell Fluid in a Tube The impacts of a magnetic field and of fractional Maxwell fluid behaviour on peristaltic flow within a circular cylindrical tube with heat and mass transfer were evaluated under the assumptions of a low Reynolds number and a long wavelength. The analytical solution was deduced for the temperature, concentration, axial velocity, tangential stress, and coefficient of heat transfer. Many emerging parameters and their effects on the aspects of the flow were illustrated, and the outcomes were expressed via graphs. Finally, some graphical presentations were made to assess the impacts of various parameters on the peristaltic motion of the fractional fluid in tubes of different nature. The present investigation is relevant to many medical applications, such as the description of the movement of gastric juice in the small intestine when an endoscope is inserted. Introduction Magnetohydrodynamic (MHD) flow problems have drawn the interest of physicists, mathematicians, and engineers in numerous applications. Examples include geothermal studies, the optimization of metal alloy solidification processes, management of waste fuel, regulation of the underground propagation and pollution of chemicals and waste, the construction of MHD energy generators, magnetic equipment for wound therapy and cancer tumour treatment, reduction of bleeding during surgery, and the transport of targeted magnetic particles as drug carriers. Several extensive works of literature on this fertile field are now available in [1,2]. Saqib et al. [3] clarified the nonlinear motion of a non-Newtonian fractional-model fluid. Rashed and Ahmed [4] produced a numerical solution for the peristaltic motion of dusty nanofluids in a channel using a shooting method. The slip effect on the peristaltic flow of a fractional second-grade fluid through a cylindrical tube was examined by Rathod and Tuljappa [5]. Vajravelu et al. [6] obtained the velocity, temperature, and concentration for a Carreau fluid in a channel with heat and mass transfer under a magnetic field. Ali et al. [7] discussed magnetic field effects on blood flow, with the blood characterized as a Casson fluid. Zhao et al. [8] explored the natural-convection heat transfer of a fractional viscoelastic fluid through a porous medium under a magnetic field. Abd-Alla et al. [9] researched the magnetic field's impact on the peristaltic motion of a fluid through a cylindrical cavity. Afzal et al. [10] analyzed the effect of diffusive convection and a magnetic field on the peristaltic motion of nanofluids through a nonuniform channel. The effects of heat and mass transfer and a magnetic field on peristaltic motion in a planar channel were examined by Hayat and Hina [11]. The impact of temperature and a magnetic field on peristaltic motion through a porous medium was debated by Srinivas and Kothandapani [12]. Ramzan et al. [13] discussed the influences of heat flux and a magnetic field on Maxwell fluid flow over a two-way stretched surface. Rachid [14] calculated the peristaltic transport of a viscoelastic fluid under the fractional Maxwell model. The impact of viscosity and a magnetic field on the peristaltic motion of a synovial nanofluid in an asymmetric channel was reconnoitered by Ibrahim et al. [15]. Aly and Ebaid [16] inspected the effects of slip conditions on the peristaltic motion of nanofluids. Carrera et al.
[17] checked the extension of a fractional Maxwell fluid and viscosity to the peristaltic motion. Zhao [18] exhibited the convection flow, the magnetic field, and velocity slip of a peristaltic motion of a fractional fluid. Abd-Alla et al. [19] obtained the solution to the peristaltic motion problem in an endoscope tube. e analytical solution of the transport of viscoelastic fluid through a channel in the fractional peristalsis movement model was presented by Tripathi et al. [20]. e magnetic field effect on peristaltic movement in a vertical annulus was exposed by Nadeem and Akbar [21]. Srinivas et al. [22] were determining the effects on Newtonian fluid's peristaltic movement into porous channels of wall slip conditions, magnetic field, and heat transfer. Recent research expansions on the subject beginning from [23][24][25][26][27][28][29][30][31][32][33]. is paper aims to inspect the impacts of magnetic fields, heat and mass transfer, and fractional Maxwell fluids on the peristaltic flow of Jeffrey fluids. Both two-dimensional equations of motion and heat and mass transfer are generalized under the presence of low Reynolds numbers and a long wavelength. e temperature, concentration, axial velocity, tangential stress, and coefficient of heat transfer are empirical solutions, and the wave shape is found. In the problem, the relevant parameters are specified pictorially. e findings obtained are displayed and discussed graphically. For physicists, engineers, and individuals interested in developing fluid mechanics, the outcomes described in this paper are essential. e different potential fluid mechanical flow parameters for the Jeffrey peristaltic fluid are also supposed to serve as equally good theoretical estimates. Indeed, the current investigation is firmly believed to receive considerable attention from the researchers towards further peristaltic development with a variety of applications in physiological, modern technology, and engineering. Formulation of the Problem Take the MHD peristaltic flow through uniform coaxial tubes of a viscoelastic fluid through the fractional Maxwell fluid model. If the flow is transversely subject to a consistent magnetic field, electrical conductivity exists ( Figure 1). Furthermore, it is supposed the inner and outer tube temperatures are T 0 and T 1, and concentrations are C 0 and C 1, respectively. We picked a cylindrical coordinate R and Z. e equations for the tube walls are given by (1) e equation of the fractional Maxwell fluid is given by where Also, note that D t , of order α 1 concerning t and defined as follows: e equation of motion can be written in the fixed frame which are derived [32,33] as 2 Complexity e transformation between these two frames can be written as follows: e relevant governed boundary conditions for the considered flow analysis can be listed as e leading motion equations of the flow for fluid in the wave frame are given by where S depends only on r and t. After using the initial condition S(t � 0), we find S rr � S θθ � S zz � S rθ � 0, and Figure 1: e geometry of the problem. Complexity 3 We present the following dimensionless parameters for further analysis: Solution of the Problem For the abovementioned modifications and nondimensional variables listed earlier, the preceding equations are reduced to Reδ u z zr RePrδ u z zr Reδ u z zr With boundary conditions The Analytical Solution Furthermore, the hypothesis of the long wavelength approach is also supposed. Now, δ is very small so that it can be tended to zero. 
us, the δ ≪ 1 dimensionless governing equations (12)-(15) by using this hypothesis may be written as equation (18) specifies that p is only a function of z. Temperature, concentration, and axial velocity solutions can be described as follows: θ � log r/r 2 log r 1 /r 2 + β 4 where e heat transfer coefficient is indicated as follows: So, the solution of heat transfer is given by Complexity 5 Using the definition of the fractional differential operator (5) we find the expression of f as follows: Results and Discussion In this section, the effect of different parameters is shown graphically in Figures 2-7 Figure 2 has been plotted to clarify the variations of β and φ on the temperature distribution θ. Figure 2 shows that θ decreases when β increases in the range 0 ≤ r ≤ 0.32, while θ increases when β increases in the range 0.32 ≤ r ≤ 1.2. Moreover, θ decreases when φ increases in the range 0 ≤ r ≤ 0.32, while θ increases when φ increases in the range 0.32 ≤ r ≤ 1.4. In addition, the temperature decreases with the radial increase and the boundary conditions are fulfilled. Figure 3 displays the discrepancy of the concentration with the radial for various values of ε, φ, Sc and Sr. It is indicated that the concentration increases with increasing ε and φ. However, Θ decreases with increasing Sr and Sc. In addition, the concentration decreases with the radial increase and the boundary conditions are fulfilled. e impacts of Gr, λ 1 , φ, α 1 , M, and Sc on the axial velocity w are illustrated in Figure 4. It is indicated that the axial velocity profiles decreases with increasing Gr, λ 1 , and φ in the range 0 ≤ r ≤ 0.32, while it increases in the range 0.32 ≤ r ≤ 0.45, In addition to this, the axial velocity profile decreases with increasing α 1 in the whole range 0 ≤ z ≤ 1, while it increases with increasing M in the whole range 0 ≤ z ≤ 1, the axial velocity profiles decreases with increasing Sc in the range 0 ≤ z ≤ 53 as well, and it increases in the range 0.53 ≤ r ≤ 0.88 and then decreases again in the range 0.88 ≤ z ≤ 1. Also, it is observed that the velocity has oscillatory behavior due to peristaltic motion concerned. e effect of α 1 , M, β and Sc can be observed from Figure 5, in which the tangential stress is illustrated for the various values of α 1 , M, β, and Sc. With the increase of α 1 and Sc, the tangential stress decreases. Moreover, tangential stress increases with increasing M and β. It is noticed that one can observe the tangential stress is in oscillatory behavior, which may be due to peristalsis. Figure 6 explains the influence of ε and φ on the heat transfer coefficient Zh. Obviously, the increase in ε and φ increases the amplitude of the heat transfer coefficient in the whole range z. From Figure 6, one can observe that heat transfer coefficient is an oscillatory behavior in the whole range, which may be due to peristalsis. Figure 7 is plotted in 3 D schematics concern the axial velocity w, the concentration Θ, the temperature θ, and the heat transfer coefficient Zh concerning r and z axes in the presence α 1 , Sr, ε, and φ. It is indicated that the axial velocity decreases by increasing α 1 , Also, the concentration decreases by increasing Sr, the temperature increases with increasing of ε as well, otherwise the heat transfer coefficient increases by increasing φ. For all physical quantities, we obtain the peristaltic flow in 3D overlapping and damping when the state of particle equilibrium is reached and increased. 
The vertical distance of the curves is greater, with most physical fields moving in peristaltic flow. This study has indeed been widely applied in many fields of science, such as medicine and the medical industry. Thus, in the field of fluid mechanics, it is considered extremely essential. When inserting an endoscope through the small intestine, this study describes the movement of the gastric juice.
Nomenclature
R1, R2: Shapes of the wave walls
t: Time in a wave frame
λ1: Relaxation time
α1: Fractional time derivative parameter
γ̇: Rate of the shear strain
U, W: Components of the velocity in a laboratory frame
u, w: Components of the velocity in a wave frame
P: Pressure in a laboratory frame
p: Pressure in a wave frame
σ: Fluid's electric conductance
B0: Intensity of the external magnetic field
ρ: Density
g: Gravity constant
αt: Linear coefficient of thermal expansion
αc: Coefficient of viscosity at constant concentration
cp: Specific heat
K: Thermal conductivity
Q0: Heat generation coefficient
φ: Wave amplitude in dimensionless form
ε: Radius ratio
θ: Distribution of temperature
Θ: Distribution of concentration
T0, T1: Inner and outer tube temperatures
C0, C1: Inner and outer tube concentrations
δ: Wavenumber
μ: Fluid viscosity
M: Hartmann number
Re: Reynolds number
Pr: Prandtl number
Gr: Grashof number
β: Heat source/sink parameter
Br: Brinkman number
Sr: Soret number
Sc: Schmidt number
Data Availability. No data were used to support this study.
Conflicts of Interest. The authors declare that they have no conflicts of interest.
Figure 7: Discrepancies of the axial velocity w, the concentration Θ, the temperature θ, and the heat transfer coefficient Zh in 3D against the r- and z-axes under the influence of α1, Sr, ε, and φ.
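The fractional time derivative of order α1 that enters the fractional Maxwell constitutive law is only named above; its formal definition was lost in extraction. As an illustrative aside, a Caputo-type derivative of order 0 < α < 1 — a common choice for fractional Maxwell models, though not necessarily the exact operator used in this paper — can be discretized with the standard L1 finite-difference scheme:

```python
import numpy as np
from math import gamma

def caputo_l1(f_values, dt, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0, 1).

    f_values: samples f(t_0), ..., f(t_N) on a uniform grid with spacing dt.
    Returns the approximate derivative at t_1, ..., t_N.
    """
    f = np.asarray(f_values, dtype=float)
    n_steps = len(f) - 1
    coeff = dt ** (-alpha) / gamma(2.0 - alpha)
    out = np.zeros(n_steps)
    for n in range(1, n_steps + 1):
        j = np.arange(n)
        b = (j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha)   # L1 weights
        increments = f[n - j] - f[n - j - 1]
        out[n - 1] = coeff * np.sum(b * increments)
    return out

# Check against the exact result D^alpha t = t^(1-alpha) / Gamma(2-alpha).
alpha, dt = 0.5, 1e-3
t = np.arange(0.0, 1.0 + dt, dt)
approx = caputo_l1(t, dt, alpha)
exact = t[1:] ** (1.0 - alpha) / gamma(2.0 - alpha)
print(np.max(np.abs(approx - exact)))   # near machine precision for linear f
```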
Multi-mission satellite remote sensing data for improving land hydrological models via data assimilation Satellite remote sensing offers valuable tools to study Earth and hydrological processes and improve land surface models. This is essential to improve the quality of model predictions, which are affected by various factors such as erroneous input data, the uncertainty of model forcings, and parameter uncertainties. Abundant datasets from multi-mission satellite remote sensing during recent years have provided an opportunity to improve not only the model estimates but also model parameters through a parameter estimation process. This study utilises multiple datasets from satellite remote sensing including soil moisture from Soil Moisture and Ocean Salinity Mission and Advanced Microwave Scanning Radiometer Earth Observing System, terrestrial water storage from the Gravity Recovery And Climate Experiment, and leaf area index from Advanced Very-High-Resolution Radiometer to estimate model parameters. This is done using the recently proposed assimilation method, unsupervised weak constrained ensemble Kalman filter (UWCEnKF). UWCEnKF applies a dual scheme to separately update the state and parameters using two interactive EnKF filters followed by a water balance constraint enforcement. The performance of multivariate data assimilation is evaluated against various independent data over different time periods over two different basins including the Murray–Darling and Mississippi basins. Results indicate that simultaneous assimilation of multiple satellite products combined with parameter estimation strongly improves model predictions compared with single satellite products and/or state estimation alone. This improvement is achieved not only during the parameter estimation period (~32% groundwater RMSE reduction and soil moisture correlation increase from ~0.66 to ~0.85) but also during the forecast period (~14% groundwater RMSE reduction and soil moisture correlation increase from ~0.69 to ~0.78) due to the
effective impacts of the approach on both state and parameters. Studying the terrestrial hydrology is facilitated by developments of land surface models. These models are important to simulate various terrestrial compartments over an extended period of time. Moreover, they are essential for predicting hydrological processes and water storage changes at various temporal and spatial resolutions. The performance of the land surface models, however, can be degraded caused by multiple factors such as uncertainties in model forcings, model parameters, initial and boundary conditions, and simplification of the representation of processes 1,2 . To address this, traditionally additional datasets are integrated with models to improve model estimates. The data integration approaches have become more popular with the advent of satellite remote sensing. This is related to the satellite's extensive coverage and high spatial and temporal resolution, especially during the past few decades. Satellite data products can be used to constrain the models, e.g., via data assimilation [3][4][5][6][7][8][9][10][11][12] . A number of studies has shown that applying multivariate data assimilation using in-situ and reanalysis estimates [13][14][15][16][17][18][19] could be beneficial. However, despite a few efforts for using multi-mission satellite products for data assimilation [20][21][22] , the extent of the effectiveness of the approach has not yet been fully investigated 23 . Furthermore, while using the multivariate data assimilation was found to be effective for improving on-line model estimates, its impact on the (long-term) forecasting skill is normally limited if only initial states are updated. The main reason behind this is the important role of the model parameters for simulating fluxes and water storage as well as uncertainty with respect to model forcings (meteorology). Poorly defined parameters, which are not updated during the Materials Case studies. The two major river basins, Mississippi and Murray-Darling are selected for the experiment given the presence of in-situ measurements to assess the proposed multivariate data assimilation. The Murray-Darling basin is the biggest river system in Australia comprising many wetlands (i.e. more than 30,000) and rivers (i.e. 23), which provide freshwater for, e.g., agriculture, industry, and water use 54 . A large area in the eastern part of the country is covered by the basin, which contains a variety of natural environments, e.g., desert and dry regions (west), rainforest (north), snow covered areas and areas with a larger amount of surface water (south). Historically, the Murray-Darling basin has undergone various extreme droughts and floods. Furthermore, water storage within the basin has also shown large inter-annual and annual variabilities 53,54 . Temperature varies from 0 to 3 • C in the elevated areas in the southeast of the basin in July to 33 to 36 • C for the upper northern parts in January. The same rainfall spatial variability also exists within the Murray-Darling basin, i.e. annual rainfall less than 250 mm in the northwest and excess of 2000 mm in the elevated areas 55 . Similarly, the Mississippi River basin is an important source of freshwater in North America, which provides water for more than 18 million people and different socioeconomic sectors. Temperature varies strongly within the basin, which leads to large spatial and temporal hydro-climatic variabilities 56,57 . 
For example, higher temperature ( 21 • C ) along with hot and humid condition exist in May to September while average low temperatures ( −3 • C ) in January are available in the north caused by various factors such as polar and subtropical jet streams and Arctic cold. Snow line has been progressively migrating northward across the basin 59,60 . Overall, Upper Mississippi areas (e.g., central Minnesota to central Wisconsin) has larger snow cover compared to the other parts of the Mississippi River basin (such as southeast Missouri and southwest Illinois). Showers (and thunderstorms) occur mostly in summers over different parts of the basins while winter precipitation varies from less than 25 mm for the western and northern Great Plains to 75 www.nature.com/scientificreports/ conditions vary over the different parts of the Mississippi basin and different times of the year [58][59][60] . This includes semiarid climates in the west, humid condition over the eastern parts, sub-humid climates in the south along with a large discharge rate and multiple flood events across the basin. In-situ groundwater well data (derived from USGS and the New South Wales Government) are used over both basins to evaluate the estimated groundwater variations from the model with and without data assimilation. To this end, groundwater level data are converted into groundwater storage change values with the help of specific yield values of the basin 2,61-63 . In addition, soil moisture observations from in-situ stations are used to assess the soil moisture estimates at different depths. For this purpose, top, shallow and root-zone soil moisture from the model are compared against in-situ soil moisture measurements at corresponding depths. Over the Mississippi Basin, groundwater and soil moisture measurements are acquired from USGS (https ://water .usgs.gov/ogw/data. html) and the International Soil Moisture Network (https ://ismn.geo.tuwie n.ac.at/), respectively. For the Murray-Darling Basin, the measurements are acquired from New South Wales Government (http://water info.nsw. gov.au/pinne ena/gw.shtml ) and the moisture-monitoring network 64 . Model and data. Model. The World-Wide Water Resources Assessment model (W3RA), which was designed and developed by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) is used. W3RA is a Global water balance model, which is distributed on grid basis and simulates water flows and water storage 65 . ERA-interim reanalysis data including meteorological fields of precipitation, maximum and minimum temperature, and downwelling short-wave radiation, are used as model forcings. The model presents the water balance of the soil, groundwater and surface water independently over each grid cell 66 . The water and energy fluxes between the water storages are also modelled for two hydrological response units (HRUs) which occupy different fractions of a grid cell, i.e., tall and deep-root vegetation in HRU1 and short and shallow root vegetation in HRU2. Correspondingly, parameterizations are applied at the sub-grid level 39 . Poovakka et al. 40 discussed the necessity of calibration for this model, as it is currently limited to a number of catchments where streamflow records and input forcing data are available. The model relies on a variety of parameters such as water holding capacity and effective soil parameters 67 . A detailed list of selected parameters for estimation is presented in Table 1. These parameters influence mass balance equations underlying the model. 
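Before the individual parameters of Table 1 are described, a minimal sketch of the groundwater evaluation step mentioned earlier in this section — converting observed well levels to storage anomalies through a specific yield — is given below. The specific yield and the monthly heads are placeholders, not values from the study basins.

```python
import numpy as np

def groundwater_storage_anomaly(head_m, specific_yield):
    """Convert groundwater level (head) observations to storage-change anomalies.

    head_m: water-table level time series in metres.
    specific_yield: dimensionless drainable porosity of the aquifer.
    Returns storage anomalies in metres of equivalent water height.
    """
    head = np.asarray(head_m, dtype=float)
    storage = specific_yield * head        # storage change per unit area
    return storage - storage.mean()        # anomaly about the temporal mean

# Illustrative monthly heads (m) and an assumed specific yield of 0.1.
print(groundwater_storage_anomaly([12.1, 12.4, 12.0, 11.6, 11.9], 0.1))
```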
Soil albedo and photosynthetic capacity index (PCI) parameters are used to model canopy albedo and outgoing shortwave radiation from the land. Initial retention capacity ( I 0 ) and reference event precipitation ( P ref ) are applied to derive surface runoff. These parameters are also connected to rainfall intensity and the soil infiltration distribution. Soil water drainage is estimated based on β and field capacity drainage fraction. F ER0 , W 0lim and maximum stomatal conductance ( G smax ) are applied for evaporation modelling, e.g., via rainfall interception evaporation, soil evaporation, and maximum transpiration, respectively. Open water evaporation scaling factor is used to derive open water evaporation, which can have higher uncertainties over large bodies of surface water. Specific leaf area and leaf area index parameters are developed to facilitate vegetation phenology computations 65 . Satellite remote sensing. Three main satellite products are used for data assimilation to update states and estimate parameters. TWS changes are derived from level 2 (L2) GRACE products (up to degree and order 90). L2 coefficients and their associated full error covariance information are acquired from the ITSG-Grace2014 gravity field model 68 . Post-processing steps are done following Khaki et al. 69 and Khaki and Awange 70 to calculate TWS changes between 2003 and 2016. The data is then used to update the summation of different water storage components from the model including groundwater, different soil layers, and surface water storage (see details in "Methodology" section). TWS error covariances (to be used in data assimilation) are computed from potential coefficients following Schumacher et al. 46 . The National Oceanic and Atmospheric Administration (NOAA) of LAI Climate Data Record (CDR; version 4) and Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) 71 are obtained for the period of 2003-2016. The data were produced by the University of Maryland and the NASA Goddard Space Flight Center (GSFC) on a daily 0.05 • × 0.05 • global scale. LAI products are used for data assimilation given their potential www.nature.com/scientificreports/ to improve modelling skills 72 . LAI has a major impact on estimating evapotranspiration (ET) and precipitation interception, thus, can be very useful for data assimilation 71 . Following Fox et al. 73 , a constant error standard deviation of 0.2 (m 2 m −2 ) is assumed for the LAI from satellite. Soil moisture products are achieved over the same period, i.e., 2003-2011 from AMSR-E (Level-3) 74 and 2011-2016 from SMOS (Level 3 Centre Aval de Traitement des Donnees SMOS) 75 . These data are used during assimilation to control the model surface soil moisture content. Regarding soil moisture measurement uncertainty, we followed Leroux et al. 76 and De Jeu et al. 77 and assumed 0.04 (m 3 m −3 ) error for SMOS and 0.05 (m 3 m −3 ) error for AMSR-E observations. Water fluxes. Additional datasets of precipitation, total evapotranspiration, and water discharge are used to constrain the water balance through in the UWCEnKF implementation (see details in "Data assimilation" section). These data are derived from Khaki et al. 69 , in which data from different sources, e.g., satellite, reanalysis, and gauge-based measurements (from multiple sources over the Mississippi and Murray-Darling basins), are merged to achieve the best estimates over different basins. 
Note that the datasets applied here for water budget constraint are mostly independent from those applied for running the model except for precipitation, e.g., the Tropical Rainfall Measuring Mission (TRMM) is used in both ERA-interim forcing and the merged precipitation product for the water budget constraint. Nevertheless, this dependency between the products is not a limitation for our data assimilation experiments as it was shown that the water budget closure, where the water flux observations are used is not affected by this 69,78 . Methodology Sensitivity analysis. A sensitivity analysis following Cannavo 79 is carried out to measure the model response to parameter changes. This is done to identify the parameters that significantly affect the model output. The analysis will also increase our understanding of the impact of model parameters on model simulations. The selected approach here is a global sensitivity analysis that contrary to so-called local sensitivity analysis assesses sensitivity over the entire input parameter space. It is a variance-based method that investigates the contribution of each input parameter to the total variance of the output, i.e., y = f (X) and X = (x 1 , x 2 , . . . , x n ) with n being the number of input parameters ( x ). The objective is to measure the importance of input on the variance of the output, namely the sensitivity ( S i ) of y to x i through, This is known as the first order sensitivity index by Sobol 80 . Analytical solution of Eq. 1 for a non-linear high dimensional system is not possible, thus, a numerical approximation is needed. This can be facilitated using the Fourier Amplitude Sensitivity Test (FAST) and the Monte Carlo algorithm for numerical approximation. Here we apply the latter following Cannavo 79 , where a sequence of random points of length N can be used to approximate the solution for N → ∞ . This allows for evaluating a multidimensional integral using a Monte-Carlo technique. Consider two uniformly distributed independent random points A and B (with a size of N × n ). A = [α A , β A ] and B = [α B , β B ] are composed of N trial sets for the evaluation of y . The model ( f (X) ) can be evaluated in these two points: f(A) and f (α A , β B ) . Using this method, the influence of different variables and their subsets on the model can be analyzed. In practice, this method draws A and B and form C ( C i , i = 1, . . . , n ) in a way that its ith column is equal to the ith column of B, and its remaining columns are from A. Using these sample inputs, the model is run to derive corresponding model evaluations ( f (A), f (B), f (C i ) ). These are then used to calculate the sensitivity indices using, Using this method one can measure the sensitivity of the model to a given parameter based on its contribution to the variance of the model output see more details in 79 . Data assimilation. UWCEnKF. The main aim of UWCEnKF is to update the system state and its parameters in a dual way while accounting for water balance when incorporating new observations. Here, we present a summary of the approach and more details can be found in Khaki et al. 52 . Ait-El-Fquih et al. 45 proposed a new dual EnKF scheme following the one-step-ahead (OSA) smoothing and showed that this could improve data assimilation performance by imposing more information to the system. Their approach comprises two interactive EnKF filters for state-parameter estimation. Khaki et al. 52 extended this to a water balance system by enforcing an additional constraint. 
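Before the individual UWCEnKF steps are laid out below, the Monte Carlo estimator for the first-order Sobol index sketched above can be written compactly as follows. A toy function stands in for W3RA, and the Saltelli-type estimator shown is one standard form; it may differ in detail from the variant of Cannavo used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x):
    # Stand-in for the land surface model: y = f(x1, ..., xn), evaluated row-wise.
    return np.sin(x[:, 0]) + 2.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]

def first_order_sobol(model, n_params, n_samples=20000):
    """Monte Carlo estimate of first-order Sobol indices S_i (Saltelli-type)."""
    A = rng.uniform(0.0, 1.0, size=(n_samples, n_params))
    B = rng.uniform(0.0, 1.0, size=(n_samples, n_params))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.zeros(n_params)
    for i in range(n_params):
        C_i = A.copy()
        C_i[:, i] = B[:, i]          # i-th column taken from B, the rest from A
        yC = model(C_i)
        S[i] = np.mean(yB * (yC - yA)) / var_y
    return S

print(first_order_sobol(toy_model, n_params=3).round(3))
```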
The approach includes different steps; it first uses the state forecast ensemble to update the parameters through EnKF-like update, as well as to compute the OSA smoothing ensemble. The updated parameters and state variables are then integrated with the model to obtain the next state forecast ensemble in the second EnKF, which will be used to acquire the analysis ensemble. Despite the addition of the second EnKF implementation compared to the traditional dual-EnKF due to the OSA smoothing part, it has been shown that this only increases the computational cost minimally while it considerably enhances the performance of the dual approach 44,45,52 . For the state-parameter estimation problem in a discrete-time dynamical system, one can write, Scientific Reports | (2020) 10:18791 | https://doi.org/10.1038/s41598-020-75710-5 www.nature.com/scientificreports/ where x t ∈ R n x is the system state vector (with dimension n x ) and y t ∈ R n y is the observation vector (with dimension n y ) at time t. θ ∈ R n θ represents the parameter vector of dimension n θ . In Eq. (3), the model operator is indicated by M t−1 (.) , which is used to forward the state vector from t − 1 to t, and the observational (design) operator at time t is shown by H t . The model and observation process noises are represented by ν t−1 ∼ N (0, Q t ) and w t ∼ N (0, R t ) , respectively, with state covariance matrix Q t and observation covariance matrix R t . To solve Eq. (3), UWCEnKF applies a dual EnKF scheme comprising two interactive EnKF filters for state-parameter estimation. Each step of the filter is presented below. (with n being the ensemble number and a standing for analysis step), the process begins with integrating state and parameters within the model to derive forecast is then used to calculate the analysis parameter ensemble {θ with the sample forecast error covariance matrix P x f t and the sample cross-covariance matrix between the previous parameter vector and current forecast errors where S is ensemble perturbation and can be calculated as a difference between ensemble members and ensemble mean. • State estimation. Traditionally, the analysis parameters are used to recalculate the forecast ensemble in the standard dual EnKF by integrating {x a,(i) t−1 } n i=1 into the model based on the updated parameters. Ait-El-Fquih et al. 45 showed that the implementation of the OSA smoothing step, which is a measurement update based on the current observation can lead to a better state estimate. The smoothing state {x with P x a t−1 ,x f t being the sample cross-covariance matrix, calculated from the analysis states at t − 1 and forecast states at t. Next, similar to the standard EnKF, the forecast step is applied but using the updated parameters to forward states in time (from t − 1 to t). This is done using {x Scientific Reports | (2020) 10:18791 | https://doi.org/10.1038/s41598-020-75710-5 www.nature.com/scientificreports/ where z t def = d t − p t + e t + q t is introduced as "pseudo-observation". In this equation, L is an n z × n x identity matrix, and G = −L (here, n z = n x ). Contrary to a standard EnKF that only computes states in the analysis step, UWCEnKF estimates pseudo-observation noise covariance along with the states. This leads to the computation of constrained states from unconstrained state analysis ( {x a,(i) t } n i=1 ) in a second analysis step. 
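For orientation, the generic (unconstrained) stochastic EnKF analysis step on which both interacting filters build is sketched below; the OSA smoothing and the water-budget constrained second update that distinguish UWCEnKF — the latter described next — are deliberately omitted, so this is not the full scheme of Khaki et al. The dimensions and observation operator are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_analysis(X_f, y_obs, H, R):
    """One stochastic EnKF analysis step with perturbed observations.

    X_f: forecast ensemble, shape (n_x, m) with m members.
    y_obs: observation vector, shape (n_y,).
    H: linear observation operator, shape (n_y, n_x).
    R: observation error covariance, shape (n_y, n_y).
    """
    n_x, m = X_f.shape
    x_mean = X_f.mean(axis=1, keepdims=True)
    S = (X_f - x_mean) / np.sqrt(m - 1)           # ensemble perturbations
    P_HT = S @ (H @ S).T                          # P_f H^T from the ensemble
    HP_HT = (H @ S) @ (H @ S).T                   # H P_f H^T
    K = P_HT @ np.linalg.inv(HP_HT + R)           # Kalman gain
    # Perturb the observations for each member (perturbed-observation EnKF).
    Y = y_obs[:, None] + rng.multivariate_normal(np.zeros(len(y_obs)), R, size=m).T
    return X_f + K @ (Y - H @ X_f)

# Tiny example: 4 state variables, 2 observed combinations, 30 members.
X_f = rng.normal(size=(4, 30))
H = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 0.5, 0.5, 0.0]])
X_a = enkf_analysis(X_f, y_obs=np.array([0.2, -0.1]), H=H, R=0.05 * np.eye(2))
print(X_a.mean(axis=1).round(3))
```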
A recursive algorithm exists in UWCEnKF to efficiently compute the constrained analysis state {x a,(i) t } n i=1 based on the estimated pseudo-observation noise covariance matrix. The second update step thus involves cyclic iterations that adjust the analysis state for ℓ = 0, . . . , L (with L being the number of iterations); at each iteration the pseudo-observation noise covariance is recomputed from the new state and then used again in the update equations 12 .

Data assimilation setup. An experiment is designed to monitor the performance of multivariate data assimilation. The study period is divided into three parts: 2000-2002 to generate the initial ensemble, 2003-2012 to assimilate observations and estimate model parameters (the assimilation period), and 2013-2016 to investigate the impact of the estimated parameters on model simulations in the absence of assimilation (the forecasting period). The spin-up is performed with m = 30 ensemble members for the period 2000-2002. This is done by perturbing the meteorological forcing fields, i.e., for precipitation: ×N (0, 0.3) , for shortwave radiation: +N (0, 50) , and for temperature: +N (0, 2) . Model errors are mainly caused by errors in the initial conditions, the forcing data, and the model parameters. The above perturbation process accounts for the first two error sources, while model structural error is not considered here. Nevertheless, the ensemble inflation applied in the assimilation process (explained below) allows the ensemble to largely account for this error 81,82 . A parameter ensemble is produced by drawing 30 random samples from each parameter's HRU-defined range (cf. Table 1). The state vector for data assimilation includes soil moisture at three layers, namely the top (up to the 7-9 cm soil layer), shallow (up to the 30 cm soil layer), and deep-zone (up to the 100 cm soil layer) layers, as well as surface and snow water storage, groundwater and LAI. The observation vector contains GRACE TWS observations, satellite soil moisture, and LAI products. Cumulative distribution function (CDF) matching is used to rescale the observations (TWS, soil moisture, and LAI) so that they match those from the model 22,83 (a small sketch of this rescaling is given below). The observational operator ( H t ) converts the state variables into the observation space while taking into account the discrepancies between the model and observation spatial resolutions. It aggregates model state variables at multiple grid cells to 1° to be updated by the 1° GRACE TWS data. Top-layer soil moisture variables at every 0.25° are updated by the satellite soil moisture measurements (i.e., 0.25° AMSR-E and SMOS). LAI observations are spatially averaged and assimilated at the same resolution as the model ( 0.125° ). To deal with the observations' different temporal resolutions, all observations are rescaled to the monthly scale (the same as the GRACE products) and assimilated on a monthly basis. This scale is also selected because it allows an easier implementation of the water budget constraint applied in the second step of UWCEnKF, where the water balance equation uses TWS changes over consecutive months. The monthly corrections resulting from data assimilation are added as offsets to the state vectors on the last day of each month to generate the ensembles for the next month's assimilation step 2,84 . To enhance EnKF performance during assimilation, ensemble inflation and localization are applied. It has been shown in the literature 85,86 that ensemble-based data assimilation methods are sensitive to the ensemble size.
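The CDF matching used above to rescale the observations can be illustrated with a simple quantile-mapping sketch; the operational implementation in refs 22,83 may differ in detail, and the reference records and numbers below are purely illustrative placeholders. The discussion then returns to the role of the ensemble size and the inflation and localization steps mentioned above.

```python
import numpy as np

def cdf_match(obs, model_ref, obs_ref):
    """Rescale an observation series to the model climatology by quantile
    mapping: each observed value is mapped to the model value that has the
    same empirical non-exceedance probability."""
    obs_sorted = np.sort(obs_ref)            # reference CDF of the observations
    model_sorted = np.sort(model_ref)        # reference CDF of the model
    # Empirical CDF position of each new observation within the reference record
    ranks = np.searchsorted(obs_sorted, obs, side="right") / len(obs_sorted)
    ranks = np.clip(ranks, 0.0, 1.0)
    # Map the same quantile onto the model reference distribution
    quantiles = np.linspace(0.0, 1.0, len(model_sorted))
    return np.interp(ranks, quantiles, model_sorted)

# Example: rescale a synthetic satellite soil moisture series to a model range
rng = np.random.default_rng(1)
model_clim = 0.25 + 0.05 * rng.standard_normal(500)   # model soil moisture record
sat_clim = 0.40 + 0.10 * rng.standard_normal(500)     # satellite reference record
sat_new = 0.40 + 0.10 * rng.standard_normal(12)       # one year of new retrievals
rescaled = cdf_match(sat_new, model_clim, sat_clim)
```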
Generally, a larger number of ensemble members can better span the state-observation space and lead to better results, but at the expense of strongly increased computational needs. To address this, ensemble inflation and localization methods are commonly used to tackle filter divergence or inaccurate estimation 87 for a small ensemble size and to avoid filter inbreeding. Ensemble inflation increases the ensemble deviation from the ensemble mean by applying a small coefficient (in the range 1.1-1.3 for the parameter and state updates) to the ensemble members 88 . Localization using the Local Analysis (LA) scheme is also applied; it works by spatially limiting the assimilation process to within a certain distance from a grid point 10,89 . The values ( 3° ) suggested by Khaki et al. 10 , found by trial and error, are used as localization radii to achieve the best outcomes. As mentioned, the experiment is undertaken over the Murray-Darling and Mississippi basins given the good data availability. To assess the results, model simulations are spatially interpolated to the nearest in-situ stations (cf. "Case studies" section). Once the simulation time series are generated at these locations, three evaluation metrics, namely the standard deviation (STD), the Root-Mean-Squared Error (RMSE), and the correlation, are calculated with respect to the independent in-situ measurements. RMSE and STD are particularly useful to investigate, respectively, the distance between the simulations and the in-situ measurements and the spread of the simulations around the mean; together they show how accurate and precise the results are. Note that only time series anomalies (i.e., time series minus their temporal average) are used for the validation.

Results

Sensitivity analysis. The results of the sensitivity analysis are shown in Fig. 1. Estimated sensitivity weights of the parameters (cf. Table 1) for each of 100 different iterations (using 100 different sets of sample matrices; see the "Sensitivity analysis" section) are spatially averaged to show the relative weights of the model parameters and their influence on the model output. In addition, the average parameter weights (over the 100 iterations) are plotted in the figure as a solid black line. It can be seen that larger weights are assigned to a group of parameters including C SLA , ref , I 0 , P ref , and β , with C SLA and ref having the largest weights among all parameters. This can show the fundamental impact of the specific leaf area and its interaction with light and moisture (humidity) levels within the study area. These larger weights, corresponding to higher model sensitivity, can be observed over a majority of iterations. Some of the other parameters, such as G smax and F ER0 , show less impact on the model outputs. From Fig. 1, it can also be seen that the sensitivity of some parameters (e.g., PCI and α dry ) differs between HRUs. These results indicate the effect of model parameter variations on the simulation results, which highlights the importance of an accurate selection of parameters for estimation. In addition to the above variations, it is found that the sensitivity of the parameters shows considerable variations over different grid points. This can be seen in Fig. 2, where the relationship between the average and STD of the parameters over the grid points is shown.
These variations indicate that defining fixed (spatially and temporally constant) values for the parameters is not realistic, as it does not reflect the characteristics of different regions (and different time periods) and can be problematic. The large STD values for a majority of parameters such as I 0 , C SLA , P ref , and β can be explained by larger spatial variabilities of these parameters. This is also the case for some parameters with smaller weights, e.g., PCI (in HRU2, corresponding to short and shallow-rooted vegetation). It can also be seen that parameters with larger variabilities such as C SLA , ref , P ref , β , and I 0 demonstrate larger sensitivities too (cf. Fig. 1). This means that the model is largely sensitive to the variations of these parameters. A few parameters such as α dry , F loss,max , and W 0lim , on the other hand, show smaller spatial variabilities and can be considered spatially homogeneous. Based on this test, we focus only on the most sensitive and variable parameters, including C SLA , ref , I 0 , P ref , and β , to be estimated. This allows the model to be improved efficiently while avoiding the estimation of all parameters.

Parameter estimation. The parameter estimation results are presented here. The adjusted parameters and their range of variations resulting from the application of the assimilation approach are presented in Table 2. Figure 3 shows the time evolution of two sample parameters ( β and I 0 ) over the assimilation period. The variation of these two parameters represents their average at each month for the Mississippi basin. From the figure, it can be seen that the parameter estimation process makes the parameters converge for the different assimilation cases, i.e., the GRACE-only, soil moisture-only and LAI-only experiments, as well as the simultaneous data assimilation. Details of the converged parameters over both basins can be found in Table 2. The results are for both multivariate and univariate data assimilation scenarios. The STD values show the spread of the parameters around the average value, which indicates the variabilities and corresponding uncertainties of the estimated parameters. Table 2 shows that some parameters have larger STDs, e.g., β , I 0 , ref , P ref , C SLA , which generally suggests more spatial variability. These results demonstrate the ability of the parameter estimation approach to derive different values for the parameters by adequately spanning the parameter space. Spatially varying parameters can better capture the characteristics of areas with different atmospheric and environmental conditions. Moreover, it is found that the estimated parameters are considerably different from the initial values, especially for the A/Par approach, which will consequently affect the state estimates too. It can also be inferred from the table that each assimilation scenario results in a different parameter estimation. Nevertheless, closer results can be found between the multivariate case (A/Par) and the GRACE-only assimilation. This can be explained by the larger impact of the GRACE data during the assimilation process compared to the other assimilated observations. This will be investigated further in the following section. Furthermore, to better explore the corresponding impact of the parameter estimation on the model simulations, the simulations with (A/Par) and without (A/O) the adjusted parameters are analyzed (cf. "Results validation" and "Observations impact" sections).

Results validation.
Independent in-situ measurements over the Murray-Darling and Mississippi basins are also used to evaluate the results for A/O and A/Par approaches. We compare the results of assimilating different observations, i.e. GRACE TWS only, satellite soil moisture only, LAI only, and simultaneous assimilation of all three data products. To this end, RMSE and STD values for both the assimilation and forecasting periods are computed (Fig. 4). We further compare RMSE values for groundwater wells and the different assimilation methods both for the assimilation and forecasting periods (Fig. 5). The figure shows the RMSE reduction for each scenario with respect to the open-loop results. Overall, the results highlight the effectiveness of the satellite data assimilation for improving the model simulations, especially over the assimilation period. Moreover, www.nature.com/scientificreports/ multivariate data assimilation clearly achieves the best results over both basins. This can clearly be seen for different locations in Fig. 5. Multivariate data assimilation performs reasonably consistent across the basin for both experiment periods. GRACE data assimilation reduces RMSE and STD more than soil moisture and LAI only assimilation experiments. This is expected due to the larger impact of GRACE TWS on groundwater storage during assimilation. Despite this, it is observed that simultaneous (multivariate) A/Par reduces groundwater RMSE 32% (on average) compared to the open-loop run, which is the best performance amongst the different assimilation cases. Similar performance can be observed for the two basins. The A/Par method also obtains slightly better results compared to the A/O method over the assimilation period. Over the forecasting period, however, the multivariate simultaneous data assimilation method performs substantially better, which is evident from smaller RMSE values in both basins compared to the open-loop and A/O results. This can also be seen in Fig. 5, where simultaneous data assimilation (and to a lesser degree also GRACE only assimilation using A/ Par) results in higher RMSE reductions than A/O. Such superiority can be explained by the positive impacts of the new method on model parameters, which allows the model to preserve the adjustment impact during the forecast period. www.nature.com/scientificreports/ The performance of the above data assimilation scenarios is further assessed against in-situ and independent satellite soil moisture measurements relying only on the correlation analysis. Correlations between simulated soil moisture (with and without data assimilation using different observations) and in situ measurements are calculated at different depths and average results are reported in Table 3. For this purpose, the top layer estimates are examined against in-situ measurements of 0-8 cm for Murray-Darling and 0-10 cm for Mississippi. The estimated top, shallow and a portion of deep-root soil layers are compared with in-situ measurements of deeper layers over the two basins (e.g., 0-30 cm and 0-90 cm for Murray-Darling, and 0-50 cm and 0-100 cm for Mississippi). A statistical test is also applied to measure the significance of the results at 0.05 level. In general, assimilating multiple observations simultaneously leads to higher correlation values, both for the A/Par and A/O methods compared to the open-loop results. Furthermore, top layer simulated soil moisture is compared with the surface soil moisture L2 product from the Advanced SCATterometer (ASCAT) over the same periods. 
The ASCAT soil moisture products provide an estimate of the water saturation of the 5 cm topsoil layer and are derived from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT). The correlation between www.nature.com/scientificreports/ the open-loop soil moisture and the soil moisture of the data assimilation scenarios with ASCAT soil moisture data is then calculated to derive improvement values, i.e. the difference in correlation for a data assimilation approach and the open loop experiment, both for the assimilation and forecasting period (Fig. 6). According to Table 3, the multivariate data assimilation improves correlation values by 0.21 (on average for the cases with and without parameter estimation) over the Mississippi basin and by 0.17 (on average for the cases with and without parameter estimation) over the Murray-Darling basin. It can be seen that univariate satellite soil moisture data assimilation performs the best among the univariate data assimilation experiments by increasing the correlation values from 0.67 (on average for open-loop) to 0.76. Similar results can also be seen in Fig. 6, where the multivariate data assimilation obtains the highest correlation improvement followed by soil moisture only data assimilation. Limited impacts can be observed by GRACE data assimilation, especially over the assimilation period while the LAI only assimilation case has no considerable impact on the results. From Table 3, improvements can also be seen in soil moisture estimates from GRACE data assimilation. Overall, it is found that GRACE only data assimilation mainly affects the deep-root and shallow soil zones within the assimilation period (on average ∼ 9% more than top layer) while soil moisture data assimilation largely improves top layer estimates (on average ∼ 12% more than deep-root layer). The former can be explained by the larger impact of GRACE TWS data assimilation (as in uni-and multivariate cases) on deeper model soil layers. Satellite soil moisture measurements, on the other hand, mainly reflect the top few centimeter soil water variations and correspondingly impact the model top layer. The combination of observations in the simultaneous case leads to the better performance of the approach in both A/O and A/Par. Between the experiment periods, more correlation improvement (with respect to the open-loop results) is obtained during the forecasting period using A/Par ( ∼ 20% for simultaneous assimilation) than A/O ( ∼ 4% ). This shows the importance of multi-mission observations during data assimilation. Yet, estimating parameters along with the state effectively improves the state-parameter estimates when multivariate data assimilation is assumed. This effect can also be observed in Fig. 6. The simultaneous data assimilation, and to a lesser degree soil moisture only scenario positively impacts the model top layer simulation by estimating parameters along with states during the assimilation period. Further result evaluation is done to assess the effect of satellite data assimilation, specifically from the LAI products. As shown in literature [90][91][92] , constraining land surface models with LAI observations could result in better evapotranspiration predictions. To explore this, the estimated LAI and evapotranspiration by the A/O and A/Par approach are compared with AVHRR LAI and evapotranspiration from the MODIS Global Evapotranspiration Project (MOD16) 93 . This is done also for all univariate and multivariate assimilation cases. 
Average correlation improvements with respect to the open-loop results for the Mississippi and Murray-Darling basins are depicted in Fig. 7. The analysis is again done separately for the assimilation and forecast periods. One can see that data assimilation effectively improves the estimates in most of the cases. The improvements are more pronounced for the simultaneous and LAI-only data assimilation. The GRACE-only and soil moisture-only data assimilation cases lead to a small level of correlation enhancement, especially using the A/Par method, which can be explained by the updated parameters. Improvements in the LAI simulations clearly lead to evapotranspiration estimates closer to MOD16. The improvements are found both for the assimilation with and without parameter estimation, particularly over the assimilation period. The best results over the forecast period are found for the A/Par experiments and for simultaneous data assimilation. The A/O performance is clearly worse than the A/Par performance over the forecast period. These results are consistent with the previous assessments, stressing that multi-satellite data assimilation, especially along with parameter estimation, considerably improves the model simulations by incorporating various observations. Due to the superiority of the multivariate data assimilation cases based on this section's results, in the following we focus only on these approaches, and especially on A/Par, to investigate their performance in more detail.

Observations impact. The integration of multivariate satellite observations (GRACE TWS, soil moisture, and LAI simultaneously) during the assimilation process impacts the model simulations. This effect can be seen in Fig. 8 over the Mississippi and Murray-Darling basins. In this figure, basin-averaged TWS variations from the open-loop run (no data assimilation) are compared with the assimilation results, as well as with the GRACE TWS data. The error, measured as the absolute difference between the GRACE TWS data and the model simulations (with and without assimilation), is also plotted in Fig. 8c,d. Note that the forecast period (2013-2016), when no assimilation is applied, is separated from the assimilation period (2003-2012). It can clearly be seen in Fig. 8 that data assimilation decreases the misfits between the open-loop results and the observations over both basins. The smaller errors in Fig. 8c,d confirm the ability of the applied data assimilation method to decrease the discrepancies between the model and the observations. This improvement can largely be seen for the assimilation period and, to a lesser degree, for the forecast period. Importantly, data assimilation leads to a better simulation of anomalies such as 2011-2012 over the Murray-Darling basin and 2012-2013 over the Mississippi basin. To further investigate the impact of the observations in the assimilation process, the TWS ensemble spread over the basins is shown in Fig. 9. This is particularly of interest to monitor the influence of data assimilation on the estimates in the assimilation period and of its absence in the forecast period. TWS variations from the individual ensemble members (shaded blue) and their average (solid blue) are displayed in Fig. 9. To better explore the effect, the comparison is done between the A/Par (Fig. 9a,b) and A/O (Fig. 9c,d) approaches. Both methods maintain the ensemble spread steadily during the assimilation period.
While the pattern for both methods is similar over the assimilation period they differ in the forecast period. Larger spreads and corresponding uncertainties can be observed for the A/O results compared to the A/Par approach (cf. Fig. 9c,d). It can be inferred from the figure that the parameter estimation process along with the assimilation can extend the impact of data assimilation during the forecast period. This also reduces model uncertainties in that time period. Scientific Reports | (2020) 10:18791 | https://doi.org/10.1038/s41598-020-75710-5 www.nature.com/scientificreports/ Figure 10 shows the impact of data assimilation on soil moisture components from individual ensemble members to further investigate the simulation results (cf. Fig. 9). The correlation improvements over both basins are calculated with respect to the open-loop run, i.e., r c − r o with r c being the correlation coefficients between the assimilation (A/Par and A/O) results and satellite soil moisture observations and r o being the correlation coefficients between the open-loop results and satellite soil moisture observations. This is done separately for the assimilation and forecast periods. Figure 10 depicts correlation increases by both assimilation methods over the assimilation period. The A/Par method, however, obtains slightly better results, especially over the Mississippi basin, with an average increase of correlation of 0.29 compared to 0.25 for A/O. Over the forecast period, on the other hand, the new method performs remarkably better than the A/O approach over both basins, which is related to the estimated parameters. In addition, it can be seen from the figure that ensemble correlations show a larger spread over the forecast period, particularly for the A/O approach. This can indicate the larger stability of the A/Par method during the forecast period (as Fig. 9), which can result in smaller model state uncertainties. Now we explore the influence of the assimilated LAI data products on estimates. To this end, we compare the estimated LAI from two assimilation approaches by comparing it with LAI derived from AVHRR data. This is again explored over the Murray-Darling and Mississippi basins. Figure 11 shows the correlation improvement with respect to the open-loop run. Correlation values are computed for the assimilation period over each grid point. Land cover data acquired from Climate Change Initiative -European Space Agency (Version 2.0; http:// www.esa-landc over-cci.org/) is also presented in the figure for a better interpretation of LAI improvement results. From Fig. 11, the A/Par method increases the correlation compared to the open-loop results over both basins. The correlation improvement over the forecast period, however, is smaller, i.e. more than 0.4 improvement over www.nature.com/scientificreports/ ∼31% of grid points (averaged over both basins) against ∼74% for the assimilation period. This is expected due to the absence of data assimilation. Nevertheless, correlation increase can be seen across the basins within the forecast period. More improvements can be seen over the vegetated areas (containing trees, vegetation, and shrubland) in both assimilation and forecasting periods compared to the cropland areas. This can be attributed to the higher capability of the assimilated data to reflect the variations of plant canopies. 
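The correlation-improvement measure used in these comparisons ( r_c − r_o , as defined above) can be written in a few lines; the anomaly removal mirrors the validation convention mentioned earlier, and the series names are placeholders.

```python
import numpy as np

def anomaly_corr(a, b):
    """Pearson correlation between the anomalies of two time series
    (each series minus its temporal average)."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def correlation_improvement(assim_run, openloop_run, observations):
    """r_c - r_o: correlation of the assimilation run with the observations
    minus the correlation of the open-loop run with the same observations."""
    r_c = anomaly_corr(assim_run, observations)
    r_o = anomaly_corr(openloop_run, observations)
    return r_c - r_o
```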
Overall, it can be concluded that the method successfully incorporates observations during the filtering process and estimates the associated parameters. This is in correspondence with the previous findings as documented in Figs. 8, 9 and 10.

Evaluation against water fluxes. A successful data assimilation approach for a water balance system not only improves the model simulations of the various compartments but should also result in a better reproduction of water fluxes. To assess this, the updated TWS estimates are compared against flux observations of precipitation, evaporation, water discharge, and water storage changes using correlation analysis. Cross-correlation values are computed between the simulations (from the open-loop run, as well as the assimilation with and without parameter estimation) and the flux observations used in the second step of the data assimilation filtering scheme (cf. "Model and data" section). Afterwards, the improvement of the assimilation results with respect to the open-loop results is calculated for the assimilation (Fig. 12a) and forecasting (Fig. 12b) periods, separately for the Murray-Darling (indicated by 'MD') and Mississippi (indicated by 'MIS') basins. Both assimilation methods improve the agreement between the measured and modelled flux components and storage over the assimilation period. The cross-correlations increase more strongly for evapotranspiration and water storage changes, which can be explained by the assimilation of TWS and LAI data from satellite products. The level of improvement over the forecasting period is much better for the A/Par approach than for the A/O approach. This can be seen clearly in Fig. 12b, where the A/Par results are approximately 12% (on average) better than those of A/O. This shows that the applied parameter estimation strategy has a more pronounced impact on the results than A/O.

Climate variabilities. In this section, the ability of the multivariate data assimilation with parameter estimation technique (as the best method so far) to accurately reflect inter-annual weather variabilities as well as extreme events is assessed. Figure 13 plots average TWS variations from the open-loop and A/Par approaches with respect to precipitation data over the Murray-Darling and Mississippi basins. This is done separately for the assimilation and forecasting periods. Better agreement between the two time series leads to a higher correlation between precipitation and TWS anomalies. The assimilation results show a better match between the estimated TWS variations and the precipitation variations than the open-loop results. This is clearer over the assimilation period, in which the A/Par method increases the correlation by 0.12 (on average) compared to the open-loop simulations. Improvement can also be found over the forecasting period over both basins (by 0.08) using the multivariate A/Par approach. These results demonstrate that the assimilation results better represent climate-induced variations compared to the open-loop run. Another important aspect of successful model simulations is their ability to represent seasonal changes. This is evaluated by comparing seasonal variations of the open-loop and A/Par TWS results with those from GRACE data (Fig. 14). Results in Fig. 14 depict the average TWS seasonal amplitude (top panel) and TWS seasonal changes (middle and bottom panels) for the Murray-Darling and Mississippi basins over the assimilation and forecasting periods.
Figure 14 illustrates that, contrary to the open-loop results, the assimilation results show not only a similar seasonal amplitude to GRACE but also a closer range of variations. Importantly, such an improvement can also be observed over the forecasting period (2013-2016), which is related to the model parameters estimated by the remote sensing data assimilation. Better agreement between the assimilation results and the observations can also be seen in the seasonal changes over both study periods. This is more evident for the Murray-Darling basin, where larger discrepancies exist between the open-loop results and the GRACE data. Data assimilation thus has larger impacts in this case, even over the forecasting period. It can be concluded that the assimilation results agree better with climatic variations due to their better performance in representing seasonal changes, which are triggered largely by climate-related components, mainly through precipitation. To further investigate the performance of data assimilation, soil moisture results are compared with average precipitation changes over two particular time periods, 2009-2013 (within the assimilation period) and 2013-2016 (forecasting period), for the Murray-Darling basin. The former time period is selected due to the occurrence of an extreme (or irregular) climatic event, namely high precipitation due to the El Niño Southern Oscillation between 2010 and 2012 94 . The latter time period is selected to monitor the assimilation impacts on the forecasts. Figure 15 presents this comparison; the discrepancies between the open-loop results and the precipitation-driven variations can be due to various factors such as erroneous model parameters, over-simplified physical phenomena, and errors in the model's underlying equations. Better results for the A/Par approach suggest that estimating parameters through data assimilation can largely address this issue and consequently reflect the anomalies. A similar performance can also be seen over the forecasting period, e.g., the positive anomaly in 2013 is clearer in the A/Par results. This again confirms the positive impact of A/Par on the parameter estimates. To better illustrate this, the difference between the average soil moisture content in March-April and January-February 2010 over the Murray-Darling basin is shown in Fig. 16, again both for the A/Par method and the open-loop run. This is done to investigate the impact of the ENSO event on soil moisture changes. The remarkably larger positive differences in the assimilation results indicate their better performance in representing this phenomenon. These results show that assimilating multiple satellite data products can effectively improve the model's skill in capturing inter-annual weather anomalies.

Conclusions

The present study investigated the ability of multivariate satellite remote sensing data assimilation to improve predictions with a land surface model. Various observations including GRACE TWS, AMSR-E and SMOS soil moisture products, and AVHRR LAI were assimilated individually and simultaneously into the W3RA model using the recently proposed A/Par method, UWCEnKF. This was done (i) for state-parameter estimation over the assimilation period and (ii) for model predictions over the forecasting period. Different data sets were used to assess the data assimilation performance over the Murray-Darling and Mississippi basins.
The major findings of this effort are:

• In general, it was shown that the application of multi-mission satellite data can successfully improve the model's different estimates, both in the assimilation and forecasting periods. On the other hand, univariate data assimilation was found to mainly improve the corresponding model variable. Analysing the results against the assimilated observations shows that the A/Par method results in a closer correspondence to the observation data, including independent, non-assimilated data. Thus, this study showed the importance of multivariate data assimilation, combined with parameter estimation, when various water components are targeted.

• In the forecasting period, the joint assimilation and parameter estimation method still improves the estimates considerably, whereas the A/O approach does not provide comparable improvements. Better TWS and LAI forecasts were obtained over the Murray-Darling and Mississippi basins by this method. The use of independent groundwater and soil moisture measurements also confirmed this. The UWCEnKF A/Par method demonstrated a high capability to preserve the observations' impacts over a longer time period, which suggests that the method can successfully estimate the model parameters. Furthermore, multivariate assimilation along with parameter estimation shows promising performance in reflecting inter-annual weather variabilities as well as weather extremes in the state estimates over both the assimilation and forecasting periods. Therefore, model parameter estimation during data assimilation is crucial for improved predictions.

Overall, based on the assessments against both assimilated and independent observations, multivariate data assimilation with model parameter estimation remarkably improved the model simulations, e.g., in terms of water storage accuracy and forecasting skill. Nevertheless, more investigation is required on the performance of the method for hyper-resolution models, where assimilating massive datasets can be problematic. Moreover, the method should be tested over various basins with different hydro-climatic conditions to further assess its impact on the simulations, especially for the forecasting periods.
2020-11-04T14:08:27.607Z
2020-11-02T00:00:00.000
{ "year": 2020, "sha1": "c7ac249e3756b4ca8e380d97abe25fc15fc7ed40", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-75710-5.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7de37ec7c90b61991eab98989a857dd5f5016959", "s2fieldsofstudy": [ "Environmental Science", "Materials Science" ], "extfieldsofstudy": [ "Environmental Science", "Medicine" ] }
188212602
pes2o/s2orc
v3-fos-license
A model for cross-institutional collaboration: how the intercollegiate biomathematics alliance is pioneering a new paradigm in response to diminishing resources in academia Abstract We present an emerging model of shared academic, intellectual and infrastructure resources that addresses the need for institutions to sustain their educational and scholarship missions under ever-declining funding. The Intercollegiate Biomathematics Alliance (IBA) was created in 2014 by Illinois State University for this purpose, eventually growing to a state-recognized ‘Center for Collaborative Studies’ in 2017. As the impact of the IBA continues to expand, it is on its way to becoming a new education paradigm in response to diminishing resources, and it can serve as a model to foster collaboration for other fields of mathematics. Introduction For well over a decade, higher education has been suffering financially, socially and politically. Research universities, regional public universities (RPUs) and liberal arts colleges are all under similar pressures to accomplish their missions with steadily diminishing resources and in increasingly unsympathetic environments. Neither research nor teaching activities have been spared from the chronic lack of state or federal funding and support. For example, state spending on public colleges and universities remains well below historic levels. Overall state funding for public two- and four-year colleges in the 2016-2017 school year was nearly $9 billion below its 2008 level, after adjusting for inflation (see Figure 1). As a result, higher education institutions have had to balance budgets by reducing faculty, limiting course offerings, and in some cases, by closing campuses. Surviving academic units have struggled to maintain academic standards, due to the impact of reduced funding that causes both educational and research activities to suffer. The field of mathematical biology is not isolated from the pressures of constantly diminishing funding. In fact, the field is even more prone to suffering from the lack of institutional funding due to its collaborative nature and its dependence on a diverse group of disciplines. The increasing popularity of the field has brought the need to develop mathematical biology programmes, both as new curriculum additions and research emphases, to the forefront for a wide spectrum of colleges. At the same time, under the circumstances caused by decreasing institutional resources, introducing new programmes -either as educational components or as new research programmes -has become more challenging, especially due to the fact that mathematical biology is dependent on more resources than a single department or programme can provide. Alliance components The seeds of a comprehensive academic collaborative entity addressing the discipline-specific needs of mathematical biology programmes were planted in early 2014 at Illinois State University (ISU), with the creation of the Intercollegiate Biomathematics Alliance (IBA). The goal of the IBA was to form a network of dues-paying member institutions that served as a nurturing environment for all by pooling their academic, intellectual and infrastructure resources. These resources were then made available to all members in terms of research and curriculum support. As a result, each of the contributing institutions has become a stakeholder in an endeavour where their students and mathematical biology faculty have unprecedented access to a broad spectrum of community-based support, which we discuss in detail below.
The following highlight the main components of the IBA's mission. Research workshop The Cross-institutional Undergraduate Research Experience (CURE) workshop is a unique opportunity for students to experience how conducting research works, experiencing the entire process from its inception to dissemination, including cross-institutional collaboration, scientific writing, presentation and publication steps. The first CURE workshop was held at ISU during the first extended weekend (Thursday-Sunday) of June 2016, where participation was at 20 students, with 10 faculty members giving presentations on possible research projects. The second CURE was attended by 10 students and 6 faculty members in June 2017. In both cases the application process was very rigorous, and the numbers were kept at an optimum amount for productivity. Presentations included a range of mathematics and biology topics, such as basic epidemiological modelling, agent-based modelling, waterborne diseases and neural networks. This workshop's flexible content allows the participants to engage in inclusive and diverse research projects especially designed to accommodate several faculty mentor(s)-student researcher(s) groups. Over half of the IBA CURE workshop student participants have presented at the annual Symposia on Biomathematics and Ecology Education and Research (BEER) on projects started at the workshop. In particular, one participant won the BEER conference first annual Outstanding Undergraduate Research award for his work on modelling devil facial tumour disease in Tasmanian devils after the first CURE workshop. He was invited back the next year, and gained so much insight from participating in biomathematics research that he ended up pursuing an MD-PhD, instead of continuing with his previously chosen career path, which was to become a physician assistant! Students participating in CURE have gone on to pursue medical school, veterinary school and graduate programmes in biochemistry, mathematics and biology. The workshop experience continues to have a lasting impact on students' academic achievements and goals. The IBA is planning to have institutional members host the CURE workshop in rotation, hence giving host locations a special forum to promote their own programmes. In addition, the IBA publishes a unique biomathematics undergraduate research journal, Spora, where students whose paths cross IBA activities (as well as others all over the world) can publish their research. Community curriculum The IBA coordinates course offerings by member institutions that can be taken virtually by students at other IBA schools. Courses that have already been offered to member institutions include Data Science, Quantitative Biomathematics, and Probability and Statistics. The IBA also funds the technological means needed to support these cross-institutional curriculum offerings, such as access to online conferencing tools and motion tracking cameras. There will soon be an interactive website that coordinates these efforts. Faculty research support The IBA frequently coordinates and funds collaboration retreats for faculty to achieve high professional development goals and conduct quality research. To address the needs of computing-intensive research efforts, the IBA also maintains a high-performance computer at ISU, which is available exclusively to members. 
As part of its commitment to the dissemination of research and educational advancements in mathematical biology, the IBA additionally supports a wide range of research activities such as sponsoring the Midwest Mathematical Biology Conference and the Symposium on Biomathematics and Ecology Education and Research, and funding article publication charges for its members, including in the IBA-sponsored and peer-reviewed Letters in Biomathematics journal for new research in biomathematics. Community-based graduate curriculum In fall 2018, the IBA will launch a graduate certificate programme built on a community curriculum where students will have access to online programmes in mathematical biology. The programme combines the resources of IBA institutions to offer three tracks of study designed with a spectrum of student needs in mind. The cross-institutional structure of the programme allows a wider variety of options than a single school would be able to support. Students are able to choose an online programme designed to prepare them for the workforce, serve as a bridge programme for further graduate study, or increase their competitiveness when applying to medical programmes. This is a unique approach in mathematical biology and has potential to inspire other fields or groups of institutions in delivery of education. Institutional impact The IBA continues to grow since it has joined the ranks of well-known organizations supporting the mathematical biology community, such as the Mathematical Bioscience Institute (MBI) and the National Institute for Mathematical and Biological Synthesis (NIMBioS), with an extended and unique goal to support both research and curriculum development. Colleges and universities with different missions all benefit in different ways from participation in the IBA. Liberal arts colleges Private liberal arts colleges, such as IBA institutional members Marymount University and the University of St. Francis, are often tuition-driven and have limited resources for research. Furthermore, their faculty typically have large teaching loads but little funding for travel for collaboration purposes. Small departments demand diverse faculty backgrounds, giving a low likelihood of finding multiple faculty in a single field. Therefore, the need for networking opportunities for faculty to find productive collaborations is high. The IBA provides a platform for interactions between mathematical biology faculty at different types of institutions. These interactions, in turn, lead to a meaningful understanding of how expertise among faculty can be shared to provide productive research collaborations. The IBA has funding to support faculty publications in open-access journals and provides access to journal collections, which may otherwise prove to be prohibitively expensive. Faculty in departments without graduate programmes are routinely invited to serve as co-advisors or committee members for graduate students of other IBA member schools. These activities allow undergraduate faculty to engage in meaningful research experiences not available at their home institutions. The opportunities that IBA provides are not limited to faculty engagement but extend to student-oriented activities as well. Undergraduate research experiences are becoming increasingly important for undergraduate students to advance after graduation. 
Traditionally, research experiences for undergraduates (REU) and other summer research programmes have provided these opportunities, but such programmes are becoming increasingly competitive or otherwise unattainable for students of liberal arts colleges. Thus, the IBA regularly hosts workshops (see CURE above) as well as spontaneous small meetings and brings motivated undergraduate students and faculty together across institutional and discipline boundaries. Through these workshops, students' academic experiences are enriched beyond what their home institution could provide. As a result, some IBA students have been able to significantly expand their career goals. For instance, we have mentioned that a particular student decided to pursue an MD-PhD in infectious diseases after engaging in IBA research activity, as opposed to becoming a Physician's Assistant as previously planned. This example highlights the direct impact the IBA has on students' scientific horizons. The student-oriented opportunities are not restricted to research. One of the most unique activities of the IBA is the community curriculum, where students may take courses across institutional boundaries. This expands the possibilities for students who do not have access to a wide range of course offerings. In this manner, the IBA allows students to experience opportunities that are available at large institutions while still receiving the personalized attention of a small college environment. Regional public universities RPUs such as Illinois State University and the University of Wisconsin-La Crosse often lack the resources of research universities, but commonly have scholarship demands similar to those in such institutions. We note that with growing interest in building graduate programmes, RPUs are becoming more dependent on additional academic resources. Moreover, they are likely to have more diversity in the interests of their graduate students than the expertise of their mathematical biology faculty. To address this imbalance between readily available resources and demands on faculty at RPUs, the IBA has adopted a vision of creating high-level research in a collaborative environment: research groups that are productive and accessible to faculty at all stages of their careers. The community structure of the IBA readily offers an avenue to develop more programmes that are robust by fostering synergistic connections within the IBA network. This includes relationships with schools that have abundant faculty to serve as co-advisors and schools without graduate programmes, where qualified faculty may be looking for opportunities to mentor graduate students. Similarly, faculty at schools with only master's programmes can gain experience working with Ph.D. students from other IBA institutions when they serve on dissertation committees or when they are engaged in student-based research at the doctoral level. On the other hand, as new graduate programmes are established, a strong base of undergraduate students is needed to populate these programmes. The IBA network provides RPUs a recruitment platform to connect with motivated and qualified students. The well-structured interactions among IBA institutional members can and do provide a seamless transition for students into advanced degree programmes. This foundation further helps RPUs to develop new innovative courses that may not be in sufficient demand within a single institution. 
Through the community curriculum, students at IBA institutions can take courses offered by other IBA schools, such as Introduction to Biomathematics. As a result, students at schools with limited course offerings may be motivated to take new and intriguing classes at RPUs, hence increasing enrolment as well as allowing new courses to get off the ground. Research universities Research universities (RUs) such as Arizona State University usually have ample resources for their faculty to pursue their research, but mathematical biology faculty may still be limited in the number of accessible collaborators. The IBA can help connect faculty with interests overlapping mathematical biology with those whose sole interest is mathematical biology for a mutual benefit. Computing needs for RU faculty research can be high, but access to computing facilities may not be available or accessible at their institutions. The IBA offers easy access to strong computing power for high-level computation needs. Additionally, RU faculty have access to a wide network of highly motivated IBA undergraduate and graduate students, and can provide expertise to those students, who will benefit immensely from their mentoring. Concluding remarks The successful introduction and growth of the IBA can serve to inspire other fields to generate similar collaborative programmes to help promote scholarship and education. In just three short years, the IBA has created research opportunities for dozens of students, fostered collaborations between faculty across the country, helped schools in remote areas develop fruitful connections, and formed the first community-based mathematical biology curriculum in the state of Illinois. We should finally mention that the IBA, in turn, came to existence at an institution where a few interested faculty members from the Department of Mathematics and the School of Biological Sciences were committed to creating and growing an inter-disciplinary programme. This started with the creation of a master's programme in biomathematics (a Program of Excellence) that was administered jointly by the two entities in 2007 and has continued to flourish since its inception. These devoted faculty members sought to expand beyond graduate education at a single school and their continued efforts have grown the IBA to a network of ten schools in just a few years. This review serves to report on the contribution of the IBA to mathematical biology education in an environment where cross-institutional partnerships are paramount and mindful individuals can make a difference in the higher education landscape.
2019-06-13T13:15:32.408Z
2018-03-08T00:00:00.000
{ "year": 2018, "sha1": "a9a09310dc541c144f1d707a9725ba6b1f3e9e68", "oa_license": "CCBY", "oa_url": "https://lettersinbiomath.journals.publicknowledgeproject.org/index.php/lib/article/download/33/15", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "db450baf700c848e77d998be2ebd702916844da4", "s2fieldsofstudy": [ "Education", "Biology" ], "extfieldsofstudy": [] }
232383550
pes2o/s2orc
v3-fos-license
Metasurface-Enhanced Antennas for Microwave Brain Imaging Stroke is a very frequent disorder and one of the major leading causes of death and disability worldwide. Timely detection of stroke is essential in order to select and perform the correct treatment strategy. Thus, the use of an efficient imaging method for an early diagnosis of this syndrome could result in an increased survival’s rate. Nowadays, microwave imaging (MWI) for brain stroke detection and classification has attracted growing interest due to its non-invasive and non-ionising properties. In this paper, we present a feasibility study with the goal of enhancing MWI for stroke detection using metasurface (MTS) loaded antennas. In particular, three MTS-enhanced antennas integrated in different brain scanners are presented. For the first two antennas, which operate in a coupling medium, we show experimental measurements on an elliptical brain-mimicking gel phantom including cylindrical targets representing the bleeding in haemorrhagic stroke (h-stroke) and the not oxygenated tissue in ischaemic stroke (i-stroke). The reconstructed images and transmission and reflection parameter plots show that the MTS loadings improve the performance of our imaging prototype. Specifically, the signal transmitted across our head model is indeed increased by several dB‘s over the desired frequency range of 0.5–2.0 GHz, and an improvement in the quality of the reconstructed images is shown when the MTS is incorporated in the system. We also present a detailed simulation study on the performance of a new printed square monopole antenna (PSMA) operating in air, enhanced by a MTS superstrate loading. In particular, our previous developed brain scanner operating in an infinite lossy matching medium is compared to two tomographic systems operating in air: an 8-PSMA system and an 8-MTS-enhanced PSMA system. Our results show that our MTS superstrate enhances the antennas’ return loss by around 5 dB and increases the signal difference due to the presence of a blood-mimicking target up to 25 dB, which leads to more accurate reconstructions. In conclusion, MTS structures may be a significant hardware advancement towards the development of functional and ergonomic MWI scanners for stroke detection. Introduction Stroke is a clinical syndrome causing acute dysfunction of a brain area and it is linked to a vascular mechanism resulting in a vessel occlusion or rupture. This definition of stroke refers to ischemic stroke (i-stroke), which involves a central nervous system infarction, but also broadly includes intracerebral hemorrhage (h-stroke) and subarachnoid hemorrhage [1]. Stroke is a very frequent disorder and a major cause of adult disability and death. It is the fourth most common cause of death in the UK, with more than 100,000 cases each year and a cost to society around £26 billion a year [2]. An early diagnosis of stroke is necessary, as the brain loses millions of cells every second after the vessel's occlusion or rupture and this may result in a permanent damage or even death. Additionally, stroke's treatments differ based on the stroke type, meaning that an incorrect diagnosis could be lethal for the patient [3]. Thus, a fast access to efficient imaging tools is needed to timely initiate the correct treatment. Also, a strict follow-up in the first days (especially for treated patients) would also be desirable to assess the stroke's evolution. 
Currently, stroke detection relies on technologies such as computed tomography (CT), magnetic resonance imaging (MRI) and cerebral angiography. Both CT and MRI are able to confirm the diagnosis and identify the stroke's site. However, neither of these tools is portable, and thus they cannot be easily used by paramedics at the patient's bedside or be found inside ambulances [4]. Moreover, CT is ionizing and has limited sensitivity for early ischaemic signs, while MRI exposes the patient to a strong magnetic field and requires more time and patient cooperation to be performed [5]. More importantly, MRI is a high-cost technology and its availability is limited for emergency imaging. Microwave imaging (MWI) could be a valid alternative to the current diagnostic methods, as it is non-invasive, uses non-ionizing radiation and has a data acquisition time which ranges from milliseconds to a few seconds. In addition, it is a technology which can lead to low-cost and portable brain scanners [6]. This imaging technique is based on the difference in the tissues' dielectric properties, which results in perturbations of the scattered field. The tissue dielectric contrast can be estimated using radar-based or tomographic reconstruction algorithms that are applied to the acquired data [7]. In particular, in microwave tomography (MWT), it is possible to reconstruct a map of the spatial distribution of the dielectric properties of the region of interest by solving an ill-posed inverse electromagnetic (EM) problem [8][9][10]. An MWI setup consists of several antennas placed around the body to be imaged. Typically, there is a homogeneous matching medium between the antennas and the body. Using a lossy matching medium minimizes reflection and couples the transmitted power to the body, but it also attenuates the useful signal transmitted into the body [7]. As a result, the coupling medium affects the detection of useful scattered "weak" signals. As the antenna array that acquires these signals is confined in a small region surrounding the head, small and compact antennas are commonly used [11][12][13]. In addition to small size, MWT antenna arrays for brain imaging must operate in the 0.5-2.0 GHz frequency range to achieve an optimal trade-off between resolution and penetration depth [14][15][16]. Following these requirements, several MWI head scanners including helmets and headbands [17] have been considered. To couple the transmitted power into the body and improve the detection of useful scattered "weak" signals, we can take advantage of metasurface (MTS) technology. As demonstrated in our previous studies [18,19], one way to improve the efficacy of a MWT system is to incorporate MTS structures into its array, adjacent to the substrate of each antenna. Using this hardware configuration, we have obtained higher quality reconstructed images of a blood-mimicking target placed inside the brain volume of our numerical head model. MTSs are the two-dimensional equivalent of metamaterials (MMs), which are engineered materials made of sets of small scatterers or apertures arranged in a regular array throughout a region of space. MTSs are mainly based on sub-wavelength split-ring resonators (SRRs), which can be designed to obtain desirable EM behaviour that is not found in naturally occurring materials, such as a negative refractive index or near-zero constitutive parameters [20].
Thus, we can take advantage of MTS structures to manipulate size, efficiency, bandwidth, and directivity of several EM systems [21,22]. For instance, zero-index MM's (materials with constitutive parameters ' and µ of zero or near-zero values) and negative index MM's have shown a strong potential in several applications and can be used to fabricate high directivity antennas [23][24][25]. At microwave frequencies, MTS-enhanced antennas are typically modelled tailoring the MTS's design in order to interact with the emitted wave to form the desired radiation pattern [26]. Recently, layers of MTS have been used in a variety of clinical applications such as biomedical imaging and sensing. For instance, SRRs have been used to enhance the sensing properties of existing biosensors [27,28]. Likewise, MTS's have been used as flat lenses for near-field imaging [29] and MRI [30]. With the aim of improving the accuracy of MWI for brain stroke detection and localization, we propose three MTS-enhanced antennas. To this end, this work validates our previous designed MTS film by presenting experiments with an elliptical brain phantom including cylindrical stroke-mimicking targets. Our experimental results suggest that the MTS film employed to enhance the sensitivity of our custom-made MWT prototype [31] can have a positive impact when placed on the head, closely fixed to the MWI array's antennas. Then, we show a detailed simulation study on the performance of a new MTSenhanced printed square monopole antenna (PSMA) operating in air. Our results show that we can detect a blood-mimicking target placed inside the brain volume of our head model avoiding the use of a thick and bulky matching medium. In addition, the MTS superstrate loading enhances the dielectric contrast between the blood-mimicking target and the surrounding tissue. The remainder of the paper is organized as follows: Section 2 presents the methodology for the MTS-enhanced antenna designs. It also describes the geometry of the custommade prototype used to validate our previous MTS designs and the simulation setups used to test the MTS-enhanced PSMA. Section 3 presents reflection and transmission plots from experiments and images reconstructed through our previously developed 2D DBIM-TwIST algorithm. It also includes simulation results which suggest that the PSMA's MTS loading increases the MWI system's sensitivity to the signal scattered by a blood-mimicking target. Finally, Section 4 includes a discussion, concluding remarks and future work. MTS-Enhanced Antennas Immersed in a Coupling Medium The triangular patch antenna and the spear patch antenna used in our experiments are shown in Figure 1a,b. These printed monopole antennas were modelled on an FR-4 substrate with a partial ground on the back side. To couple energy more efficiently into the head, both the antennas were designed to operate in a matching medium made of a 90% glycerol-water mixture [17,32,33]. The dimensions of the substrate, patch and transmission line of the antennas are shown in Figure 1a A 24.75 mm × 29.7 mm MTS superstrate loading was modelled to operate as an enhancer of the antennas described above and glued on the antennas' radiating elements. This MTS structure is based on the unit cell geometry shown in Figure 2a, which comprises a Jerusalem Cross-shaped copper lattice (thickness = 0.10 mm) embedded between two Rogers 3010 TM substrates (thickness = 1.27 mm, = 10.2 and tanδ = 0.0022). 
These two high dielectric substrates are bonded with with Rogers 3001 bonding film ( = 2.28, tanδ = 0.003). In previous work [18,34], we have performed several simulation studies to optimize this geometry for operating in contact with the human skin when immersed in our coupling medium. MTS-Enhanced Antennas: Experimental Validation in a Coupling Medium To test the MTS-enhanced antennas operating in our coupling medium, we carried out experiments using the custom-made MWT prototype shown in Figure 3a. This setup includes an elliptical array placed inside a 300 mm diameter cylindrical tank and connected to a multiport Keysight M9019A Vector Network Analyser (VNA). Measurements were performed by immersing eight transceivers inside the tank, in a 90% glycerol solution used as matching medium. The antennas were placed as close as possible to the phantom's external surface. To adjust the antennas' position, we used horizontal and vertical mounts. By sequentially transmitting from one antenna and receiving by the others, the eightantenna array produces a scattering matrix to be fed into our DBIM-TwIST algorithm for image reconstructions (see Appendix A). To examine the performance of the MTS-enhanced triangular patch antenna, two gelatin-oil phantoms mimicking the properties of average brain and blood tissue were fabricated, as described in [35]. A 3D printed elliptic mould made of ABS was used as a holder for our brain liquid phantom that, once solidified, can mimic the brain. We first measured the average brain phantom ("no target" scenario) using both the triangular patch antennas and MTS-enhanced antennas. Then, we inserted a 30 mm diameter cylindrical inclusion resembling blood tissue into the brain-mimicking mixture and performed the "with target" measurements. To test the MTS-enhanced spear antenna, we fabricated new gelatin-oil phantoms mimicking the properties of average brain, blood tissue and ischaemia. After carrying on the "no target" measurements, we inserted the 30 mm inclusion resembling the bleeding in the brain phantom and we performed the "with target" measurements. Then, we substituted the blood-mimicking inclusion with the ischaemia-mimicking target and we carried out a second round of "with target" measurements. All the inclusions were positioned at the same coordinates, at an angular position of about 320 degrees relative to the position of the first transmitting antenna, as shown in Figure 3b. The permittivity at 1 GHz of the coupling liquid and each phantom used in these experiments is shown in Table 1. Table 1. Permittivity of glycerol solution, brain phantom, blood-mimicking target and ischaemiamimicking target measured at 1 GHz. MTS-Enhanced Printed Square Monopole Antenna Operating in Air To investigate the possibility of applying MWT without employing any matching media in between the antennas and the human head, we modelled the printed square monopole antenna (PSMA) shown in Figure 4a using CST Microwave Studio. The proposed PSMA is operating in air and is specifically designed to work in close contact with our numerical head phantom. It is designed on an RT/duroid 5880 LZ low dielectric substrate (thickness = 1.026 mm, ' = 1.96, tanδ = 0.0019) and is based on a square patch with a 10 mm partial ground on the back side. 
After assessing the PSMA's performance, we designed the MTS-enhanced PSMA shown in Figure 3b PSMA and MTS-Enhanced PSMA: Comparison between Different MWT Scanners To test the performance of our new PSMA and MTS-enhanced PSMA operating in air, we compared our previous developed brain scanner operating in an infinite lossy matching medium to two tomographic systems: an 8-PSMA system and an 8-MTS-enhanced PSMA system. Our previous developed MWT scanner for brain imaging consists of 12 spear patch antennas immersed in a 90% glycerol-water mixture and placed around EN 50361 Specific Anthropomorphic Mannequin (SAM) head model [18,19]. A similar setup ("System 1"), including 8 antennas placed elliptically around our numerical head model, was modelled in CST Microwave Studio. The antenna array was immersed in an infinite mixture of 90% glycerol-water matching medium. Using the same head model, which is made of a nylon mould ( ' = 3.2, tanδ = 0.013) containing an average brain numerical phantom inside ( ' = 45.8 and σ = 0.76 S/m), we studied other two MWT scanners operating in air: "System 2", which comprises 8 PSMA, and "System 3", which includes 8 MTS-enhanced PSMA. In these MWT systems, the antennas were closely fixed to the head's surface. The MWT antenna arrays described above are shown in Figure 6. For each antenna array, the S-Parameters were measured over the 0.5-2.0 GHz frequency range for the "no target" configuration (head model including average brain only) and the "with target" configuration. This last configuration includes a cylindrical bloodmimicking target (30 mm diameter and 35 mm height) inserted inside the brain volume and placed in the back side of the head model, close to antenna 1. Then, the signal difference "with target-no target" (dB) at relevant frequencies was calculated. Finally, images were reconstructed at the single frequencies 0.9 GHz, 1.0 GHz and 1.1 GHz, by applying our 2D DBIM-TwIST algorithm [36] to the simulated data. Experimental Results for the MTS-Enhanced Antennas Operating in a Coupling Medium In our previous study [19], we have already shown the benefits of using MTS-enhanced antennas. In this section, we present experimental results for both the arrays described in Section 2.2. The reconstructed images and transmission and reflection parameter plots suggest that the MTS superstrate loading illustrated in Figure 2 has the potential to improve the performance of our custom-made tomographic system. The S-Parameter plots are shown for the "no target" and "with target" configuration, with and without MTS superstrate loading. Moreover, to provide more complete information on the enhancement of the system response in the presence of the MTS, we also present field amplitude distributions from accurate numerical simulations of our system interacting with the brain phantom in Section 3.3. Figure 7a shows the reflection coefficient for one of the triangular patch antennas (number 8 in Figure 6b), which is placed in front of the blood-mimicking target, while Figure 7b shows an example of transmission signal levels (in dB) across the brain phantom, along the direction containing the target. The antenna's reflection coefficient in the "no target" configuration is reduced by around 3 dB, suggesting that the antenna's matching is enhanced at the operating frequency when the MTS is present. Moreover, the plotted transmission parameter is improved of around 8 dB over all the considered frequency range. 
Two-dimensional (2D) multi-frequency (frequency hopping) reconstructions were performed through our DBIM-TwIST algorithm [36], which is discussed in detail in Appendix A. The reconstruction results of the dielectric contrast between the bloodmimicking target and the surrounding brain tissue are shown in Figure 8, where three frequencies (0.7 GHz, 1.1 GHz and 1.3 GHz) were used. An improvement in the quality of the reconstructed images is shown when the MTS layers are added to each antenna element. In particular, a reduction of artefacts and a better localization of the blood target are observed. The blood-mimicking target and the ischaemia-mimicking target were reconstructed via our 2D DBIM-TWiST algorithm. Multi-frequency reconstructions were performed using frequencies 0.7 GHz, 1.1 GHz and 1.3 GHz. The reconstructed blood phantom's permittivity and ischaemia-mimicking tissue's permittivity are shown in Figures 10 and 11, respectively. A better localization of the targets and a reduction of the artefacts are shown when the MTS films are integrated in the setup. PSMA and MTS-Enhanced PSMA: Simulation Results with Different Brain Scanners For each of the systems described in Section 2.4, the S-Parameters were calculated through CST Microwave Studio over the 0.5-2.0 GHz frequency range. Then, the signal difference "with target -no target" (dB) at relevant frequencies was plotted. Figure 12 shows the reflection parameter for each of the antennas of "System 1", "System 2" and "System 3". The plots show that presence of the MTS superstrate loading improves the reflection coefficient of the PSMA by 5 dB. Figure 13 shows the signal difference "with target -no target" (dB) as a function of receiver location at frequencies 1 GHz and 1.1 GHz for "System 1", "System 2" and "System 3". As shown in the graphs, when the PSMA is loaded with our MTS superstrate, it leads to an overall improvement of the signal difference (up to 25 dB) compared to "System 2". Also, the coupling liquid affects the "weak" signal scattered from the bloodmimicking inclusion, which falls below the noise level. Finally, to test the MTS's impact on tomographic reconstructions, we have applied our 2D DBIM-TwIST algorithm to the simulated data plotted in Figure 13. We carried out singlefrequency reconstructions, assuming approximate knowledge of the dielectric properties of plastic and average brain tissue as initial guess for our algorithm. The estimated dielectric properties at frequencies 0.9 GHz, 1 GHz and 1.1 GHz is shown in Figure 14. When employing our new PSMA operating in air ("System 2"), the blood-mimicking target is detected correctly. However, a higher contrast between the haemorrhage-mimicking target and the surrounding brain tissue is observed when the MTS is integrated in the scanner ("System 3"). Near-Field Analysis of the MTS-Enhanced Antennas In near-field MWI, spatial distributions of the electric field magnitude are important measures of antenna performance, which is affected by the impact of evanescent k-space contributions [37]. To evaluate performance in the near field, we have used our simulation setup to calculate examples of spatial distributions due to the transmitting antenna with and without the MTS loading. Examples of these distributions for the real part of the electric field calculated at the antennas' resonant frequencies are shown in Figures 15 and 16 for the triangular patch antenna and the spear patch antenna, respectively. 
These results confirm that the antenna-radiated probing fields are significantly stronger in the presence of the MTS superstrate loading, leading to an increased sensitivity of our MWI system. Discussion In this paper, we have presented experiments including brain tissue-mimicking phantoms and cylindrical targets mimicking strokes as well as numerical simulations with several brain scanners. Our study demonstrates that MTS structures have the potential of improving brain imaging when integrated in MWT scanners. The MTS unit cell shown in Figure 2a and the MTS superstrate loadings based on this design were studied experimentally using the custom-made prototype described in Section 2.2. The MTS loadings were shown to enhance transmission and improve the matching of our in-house fabricated antennas in the 0.5-2.0 GHz frequency range. In particular, the signal transmitted across the head phantom, along the direction containing the blood-mimicking target, is overall increased over all the frequency range. Moreover, the antenna reflection coefficient is reduced by several dB when the MTS films are incorporated into the system. This improvement in the near-field antennas performance translates into higher-quality reconstruction images. Furthermore, an innovative MTS-enhanced antenna design was proposed to test our 2D DBMI-TwIST algorithm in air. To this end, three MWT scanners were tested and compared using a numerical head model made of average brain tissue and including a blood-mimicking target. For each system, the S-Parameters were measured over the 0.5-2.0 GHz frequency range and the signal difference "with target-no target" (dB) at relevant frequencies was calculated. Single frequency reconstructions were also performed. Our results indicate that, when our PSMA is loaded with the MTS superstrate, the signal difference due to the presence of the target is increased up to 25 dB. This translates into a higher contrast between the target and the surrounding tissue. Another interesting point is to note that the signal scattered by the blood-mimicking inclusion is severely affected by the coupling medium. Thus, it is possible to detect a haemorrhage-mimicking target placed inside the brain volume of our numerical head model without using a bulky matching liquid. To the best of our knowledge, this is the first study presenting experimental and simulation results which demonstrate the possibility of enhancing the detection of strokemimicking targets by incorporating MTS antenna enhancers for different MWT arrays. While previous work by various research groups has argued that MWT is applicable for brain imaging (e.g., [38]), improving the sensitivity and hence the quality of the data is critical to achieve clinical accuracy. This feasibility study suggests that MTS technology might be an important step towards the goal of developing functional, portable, and ergonomic MWI scanners with the desired clinical accuracy. It is of course important to note that most clinical MWI applications involve a threedimensional (3D) non-linear inverse problem, which is complicated due to the complex structure of the brain and the high dielectric contrast between its different tissues [39]. Thus, in addition to optimising the data acquisition system, the ability of the DBIM-TwIST algorithm to tackle this highly non-linear problem within the distorted Born approximation must be demonstrated. 
Our previous work has shown that the algorithm can recover complex structures of high dielectric contrast in realistic numerical breast phantoms [36], and some initial simulation results with complex brain phantoms have also shown promise [40]. These studies have shown the importance of the initial guess and prior information as well as frequency hopping in getting the DBIM-TwIST algorithm to converge to meaningful reconstructions of these complex structures. Our vision of developing a portable microwave stroke detection scanner must of course tackle several additional challenges. These include the requirement to place antenna sensors conformally to the head surface, uncertainty and variability in the properties of the skin-hair-scalp interface, and inhomogeneities and variability in the dielectric properties of brain tissues (including the presence of blood vessels), which are lossy and result in compromising imaging resolution to ensure sufficient penetration. These result in a nonlinear, very challenging 3D EM inverse scattering problem, which will certainly require some prior information on the brain tissues distribution and properties. Regardless of the challenges in microwave stroke detection, we must emphasise, however, that our results suggest that MTS-enhanced arrays can be advantageous for various other medical MWI applications (e.g., breast cancer detection) and algorithms (e.g., radar-based methods). Based on these considerations, our future work will focus on conducting more experimental studies using more complex setups. Regarding the antenna arrays immersed in the coupling liquid, we aim to validate our MTS design using the 3D version of our tomographic algorithm, by employing 24-antenna arrays distributed in two or three rows around the head phantom. Then, we will work towards optimizing the MTS-enhanced PSMA and carrying out experimental studies to assess the feasibility of applying our algorithm without employing any matching media. Before evaluating our MTS-enhanced system with clinical data, we also plan to fabricate and test our system with more realistic inhomogeneous brain phantoms. In particular, we will use digital volumetric brain phantoms with complex structure and different tissues (e.g., grey matter, white matter, bone, muscle and skin) to test the capabilities and requirements of our DBIM-TwIST approach. Conflicts of Interest: The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; and in the decision to publish the results. Abbreviations The following abbreviations are used in this manuscript: Appendix A. The DBIM-TwIST Algorithm The distorted Born iterative method (DBIM) is an iterative algorithm for solving the electromagnetic (EM) inverse scattering problem, which is used for for medical imaging applications to estimate the spatial distribution of the tissue's dielectric properties inside a region V in the human body (see, for example [41]). The nonlinear inverse scattering problems for each transmitter-receiver (TR) pair is linearized and approximated by the following equation, where E t , E s , E b are the total, scattered and background electric fields respectively, r n and r m are the transmitting and receiving antenna locations, ω is the angular frequency, µ 0 is the permeability of free space, 0 is the permittivity of free space, and G b is the Green's function for the background medium. 
As we only consider point sources, the Green's functions can be calculated from the electric field as, G b (r n , r) = j ωµ 0 J E b (r, r n ). The difference δ between the relative complex permittivity of the reconstructed region, r (r), and the background medium, b (r), is defined as δ (r) = r (r) − b (r).We note that the scalar integral equation above assumes two-dimensional (2D) TM propagation, and is only an approximation of the three-dimensional (3D) inverse problem at hand. Despite this loss in information, the 2D approximation can produce images of acceptable quality in many microwave imaging problems arising in medical applications. At each DBIM iteration i, the integral equation can be discretized for each TR pair as, leading to an ill-posed linear system as, where A is an M × N matrix (M N), with M transmit-receive pairs and N voxels of the reconstruction region V, b is a M × 1 vector of the scattered fields. The A is calculated at each DBIM iteration by the forward solver which yields E b for a known background b . The background field is used to build the linear system above, which is then solved by an inverse solver. Finally, the background profile is updated by i+1 b (r) = i b (r) + δ (r) and the DBIM continues to next iteration i + 1. The forward solver of our DBIM-TwIST algorithm uses the finite difference time domain (FDTD) method. The FDTD method simulates the EM wave propagation of the direct, "forward" problem based on Maxwell's equations. As mentioned above, this work considers only 2D transverse magnetic (TM) waves to reconstruct 2D geometries. Furthermore, our FDTD implementation uses a single-pole Debye model to model frequency-dependent materials such as brain tissues as, ε ε 0 = ε ∞ + ∆ε 1+jωτ + σ s jωε 0 . We employ the two-step iterative shrinkage/thresholding (TwIST) algorithm as the solver of the ill-posed linear inverse problem at each DBIM iteration. Thresholding algorithms solve the ill-conditioned linear system Ax = b, by finding a solution x which minimizes the least squares error function as F(x) = 1 2 Ax − b 2 + λ x 1 with a regularization term λ x 1 to stabilize the solution by limiting its l 1 -norm. The general structure of the TwIST algorithm for solving this minimization problem is given by [42], The parameters for the TwIST algorithm are calculated as κ = ξ 1 ξ m , ρ = 1− √ κ 1+ √ κ , α = ρ 2 + 1, β = 2α ξ 1 +ξ m , where ξ 1 and ξ m are the smallest and largest eigenvalues of A T A respectively. The shrinkage/thresholding operation is a soft-thresholding function, calculated as Ψ λ (x) = sign(x) max{0, |x| − λ}. At each TwIST step, the new solution is updated based on two previous solutions and the soft-thresholding function. As the linear system can be extremely ill-conditioned for microwave imaging problems, the TwIST parameters above must be optimised specifically for the problem at hand [43]. The stopping criterion of the TwIST algorithm can be set based on a tolerance value, which is the normalised difference between the previous and current values of F(x) and is defined as tol = . The TwIST algorithm stops when tol is smaller than a preset value, usually in range between 10 −4 and 10 −1 . This early termination of the iterative algorithm serves an additional regulariser, with a similar effect to a Tikhonov approach [41]. 
Our l 1 -problem can be regularised further by employing the Pareto curve method, which which yields the optimal tradeoff between the residual error Ax − b 2 and the norm of the solution x 1 (similar to the L-curve method for l 2 -problems) [36]. To reduce the computational cost, a practical approach for selecting λ is based on the form λ = δ A T b ∞ , where δ is factor with 0 < δ < 1 [36].
2021-03-29T05:17:20.114Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "503ba9348197c954dc23f8183da8599cc569e1dc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4418/11/3/424/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "503ba9348197c954dc23f8183da8599cc569e1dc", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Medicine" ] }
55336499
pes2o/s2orc
v3-fos-license
The Determinants of the Use of Oral Health Care Services by Consumers in West Africa: The Case of Senegal The purpose of this study is to identify the determinants in the resort to oral health care by the Senegalese populations. To achieve this, we have carried out a transversal descriptive study. Our results show that the patients living less than 5 kilometers away or between 5 and 10 kilometers away from the nearest health care facilities go more to the public dental ones than to the other types of facilities, with respectively 15% and 26%. People who have no source of income or only one source of income tend to go to public dental facilities with respectively 17% and 34%. In all the study population, 38% of the people go to the public dental facilities and pay themselves for the care fees, whereas only 12.5% go to the public dental facilities and have a mutual health insurance. The distance between the living place and the heath care facility, the type of job, the level of education, the monthly income and the type of medical care, are the factors that influence Senegalese people’s use of oral health care services. Introduction In 2011, the consumption of care and medical goods in France was estimated at 178.9 billion euros, which places France in the third position behind the United States of America and the Netherlands, whereas Senegal and the other West African countries are lagging far behind [1]. Public health care spending per capita and per year in some West African countries is estimated at 3.5 dollars (USD) in Guinea, 8 in Côte d'Ivoire, 5 in Ghana, 9 in Mali whereas the standard figure of the World Health Organization is 13 USD [2]. The attention of the public health officials of these countries has been drawn by the low access to health care which derives from such factors as the cost related to the choice of the health care provider, the large discrepancy between rural and urban areas in terms of medical care expenses, work environment and welfare system. Senegal is a perfect illustration of the troubles faced by these countries: increasing urbanization accounts for the positive evolution of the health indicators; remote rural areas accumulate the handicaps and see the slow evolution of their situation; and, finally, social welfare is not well developed yet [3]. As a result, populations highly renounce the use of health care services. As a matter of fact, resorting to health care in general, and to oral health care in particular, has been a source of debate in Senegal since the last century. This is mainly due to the fact that the high rate of dental caries within the populations ranks fourth in the world's scourges [4]. In addition, renouncing health care is a complex phenomenon, which partly refers to "the non-resort to health care" and "the unmet health care needs". These two closely related notions are also discussed in the literature about health social inequalities. Renouncing health care is of course accounted for by people's resources; but it also turns out to be dependent on their representations and experiences regarding their potential recourse to dentistry [5]. Consequently, the profile of the consumer is defined as being the overall characteristics of an individual, on the basis of financial, socio-economic and geographical reasons. The profile of the Senegalese consumer of oral health care varies in accordance with many considerations, but what remains rather unknown is the identification of the different factors that determine the consumption of dental care. 
As a result, this study was carried out to implement a strategy for a more equitable consumption. The Study Framework The study was conducted in Senegal, a country located in the westernmost part of West Africa, and is the fourth economic power of the region with a GDP per capita of 1046.59 USD in 2013. Initially planned at a nationwide level, this study was finally restricted to the region of Dakar, the capital city, which concentrates about 2/3 of the public dental facilities, and 3/4 of the private ones. The Dakar region comprises four departments: Dakar, Pikine, Rufisque and Guediawaye. Targeted et Population Patients of the public as well as the private dental health facilities were targeted by the study. Inclusion Criteria All patients who come for a consultation or an appointment in the above targeted dental units, and who accept to participate in the study (a consent form was signed by all participants). Sample Size In the Dakar region, there are 74 public dental facilities and 140 private ones identified by the Ministry of Public Health and Welfare. We have chosen 12 facilities to conduct our survey, 6 public and 6 private ones; and they obey the following distribution: 3 public and 3 private facilities in the department of Dakar, which concentrates 75% of the dental facilities of all the Dakar region; 3 public and 3 private ones in the other departments: Guediawaye (1 public and 1 private), Pikine (1 public and 1 private) and Rufisque (1 public and 1 private). For the patients of the survey, we have fixed the number at 600, taking into account the low level of visitations in the private facilities. As a result, 80 patients out of 100 are chosen from the public facilities and 20 from the private ones, corresponding to 1/5 of the patients of the survey in the private sector. The other patients are chosen from the public sector. To sum up: 480 patients were questioned in the public dental facilities whereas only 120 patients were questioned in the private ones. Sampling Method For the recruitment of the statistical units, a two-stage random survey was carried out. First-Stage Sampling The sampling consisted in selecting at random 12 public and private dental facilities for the survey. The facilities were selected by means of a draw in two different boxes, the one containing the public dental facilities, and the other the private dental ones, for each department. The basis of the survey was constituted by the facilities that are in the list of Ministry of Public Health and Welfare with all its identifiers. Second-Stage Sampling This sampling was about the selection of the patients in the different above named facilities. It was based on a step of 2, right at the beginning of the consultations; which means that the first patient who arrived was chosen but the following patients were the 3rd and the 5th, and so on and so forth until the required number of patients was reached. The patients who met the selection criteria were questioned, and so on until the number of patients in the selected facilities was reached. Survey Conduct Before the beginning of the survey, the questionnaire was tested and validated on some patients at the Dentistry Department. Our survey lasted four weeks during which the patients were questioned according to a questionnaire comprising several indicators. We were in the public facilities from 8am to 4 pm and in the private ones from 8 am to 6 pm. 
We worked in close collaboration with the secretaries who managed the files of the patients who came for a consultation or for an appointment. Survey Parameters Geographical Accessibility Accessibility to dental facilities was appreciated on the basis of the distance separating the patients' living places from the nearest medical facility. Financial Accessibility It was appreciated on the basis of the source of income and of the monthly income. Social and Economic Status The social and economic level was estimated according to several indicators such as: level of education, occupation, monthly income and type of care. Data Analysis At the end of the survey with the collected data, Microsoft WORD was used for data entry, EXCEL for realization of tables and figures and SPSS software for data entry and data processing. Some variables were cross-tabulated the ones with the others. Resort to Care According to the Distance Separating Home and the Dental Service We notice that in the population, the most representative values are the patients who go to public dental facilities and who live less than 5 kilometers away or between 5 and 10 kilometers away from the nearest health facility with respectively 15% and 26% (Fig. 1) Fig. 1. Resort to care according to the distance between home and the dental service. Use of Health Care Services According to the Source of Income In our specific sample, we notice that the individuals who have no source of income, or only one source of income, tend to go to public dental facilities, with respectively 17% and 34% (table 1). Use of Care Services According to Monthly Income Patients whose monthly income is lower than 50,000 frs CFA (1frs CFA=655.55556) or is between 50,000 frs CFA and 100,000 frs CFA, go to public dental facilities, with respectively 21% and 21.5 % (Fig. 2). Social and Economic Status Use of care services according to occupation: Traders and others (students in most cases) are the ones who go more to public dental facilities, and they are the most representative categories, with respectively 31% and 11%, whereas civil servants go more to private facilities (Fig. 3). Use of care services according to education Patients who received high-school or university education go to public dental facilities and represent respectively 15% and 10% of the study population (Fig. 4). Use of care services according to type of welfare: 38% of the overall population goes to public dental facilities and themselves pay the care whereas 12.5% go to private ones and have a mutual health insurance (Fig. 5). Geographical Accessibility 26% of the survey population lives less than 5 kilometers away and 15% live between 5 and 10 kilometers away from the nearest dental service and go to public dental facilities. There is an obvious impact of distance on consumption and this suggests there are not enough dental facilities in the country. Several studies confirm our results, notably those of [Couffinal and coll] which show that the availability of care services has an impact on consumption as a low density of doctors causes a rise of care costs due to transport cost which is added, or due to the cost of the opportunity of the time related to the waiting period [6]. However, [Ould Taleb M. and coll.] show that a sick person living next to a health care facility prefers to go to a far remote one to receive care despite the transport cost and the travelling time, only because he or she will find there adequate care and quality health care services [7]. 
Limitations of the Study The survey was planned to take place at a nationwide level, but due to budget restrictions and lack of time, it was finally carried out in the Dakar region and in its departments. Despite our recommendation letter, the practitioners were not as cooperative as expected; especially in private practices, due to the important number of questionnaires we were supposed to give to the patients and which, according to the practitioners, contained confidential information about the patients themselves. Disclosing such information would be a violation of private life. This much impacted on the duration of the survey, which lasted longer than planned. Some patients objected to answering questions about money and about civil status. As a result, we had to give up a few questionnaires and to submit the same number of questionnaires to other patients who were willing to answer them. This impacted on both the duration of the survey and the selection of the patients which was planned according to a step of 2. The survey sample is of an indicative nature, considering the time given and the budget we had at our disposal. Carried out in the Dakar region only, our survey cannot be absolute. However, it much informs about the consumption trend at the level of the national territory considering that the Dakar region concentrates more than 3/4 of the country's dental facilities. Financial Accessibility The individuals who have no source of income or only one source of income tend to go to the public dental facilities, with a percentage of 17% and 34% respectively. Those who go to the private ones and who have only one source of income represent 9%; whereas those who have two sources of income represent 3.6%. The individuals who renounce care are those who have no source of income or only one source of income, with respectively 17% and 36.5%. [Renahy and coll.] stated in their 2011 report that the limited income of the population accounts for the latter's use of the least onerous care possible [8]. The patients who have a monthly income lower than 50,000 Fcfa or between 50,000 and 100,000 Fcfa go to public dental facilities and represent respectively 21% and 21.5%, whereas the patients whose income is between 100,001 and 150,000 Fcfa tend to go to private dental facilities and represent 3.6%. In addition, it is noticed that the patients who renounce care the most have a monthly income lower than 50,000 Fcfa or earn between 50,000 and 100,000 Fcfa, with respectively 22% and 19%. The works of [Desprès C. and coll.] on renunciation for financial reasons as well as [P. Dourgnon]'s study confirm that the population's purchasing power influences their spending on oral health care and, as a result, significantly increases renunciation of care. Cessation rate regularly increases as income per consumption unit decreases [9,10]. However, [Gobbers D.] has stated that between the 20% of the persons considered as poor and the 20% of the persons considered as rich, there is only a difference of 15% as far as the use of modern care services is concerned [11]. Economic considerations are not "the key" factor of exclusion of access to care except for people with low income or without income at all. Social and Economic Status As far as social and economic status is concerned, traders and others (students in most cases) go to public dental facilities, and they are the most representative categories with respectively 31% and 11%. Civil servants go more to private facilities, and represent 7%. 
The latter renounce care the most in addition to traders and others who represent 10%. [Azogui-Levyand coll.] confirm in their study on oral health care seeking behaviors that the households of unskilled workers renounce care 1.8 times more frequently than the households of both executives and traders [12]. A study carried out in Indonesia by [Chernichovsky D. and coll.] on the use of health care services [13] confirm these results. Furthermore, the patients who have a high-school or university level of education go to public dental facilities and represent respectively 15% and 10% of the study population. This very segment of the population has never renounced oral health care. Education level is one of the determinants of oral health care consumption, as renunciation tends to concern social groups with a low education level. Similar results were found by [Dourgnon P. and coll.] in France where renunciation among people who have reached a higher level is 12% whereas people who are uneducated and who only have a primary school level of education respectively have a cessation rate of 15.3% and 13.8% [14]. As far as costs are concerned, people who have no social welfare tend to go to public dental facilities, with 38% and 12.5%. It is noticed that people who themselves pay the care or who have a mutual health insurance tend more to renounce oral health care, with 37% and 15%. These results are comparable to those of [LO.C.M.MB. and coll.] who have shown that people who have no social welfare renounce 2.3 times more than those who have one [15]. A more recent study carried out by [Bayat F. and coll.] in 2006 has shown that having welfare makes access to care easier [16]. However, studies by Darmon J. on health care spending and care renunciation make it clear that if a good complementary health cover definitely limits renunciation to care for financial reasons, major differences exist between social groups at a given level of insurance [17]. Conclusion The distance between the living place and the heath care facility, the type of job, the level of education, the monthly income and the type of medical care, are the factors that influence Senegalese people's use of oral health care services. To improve this use and to overcome the obstacles related to the resort to oral health care, it is important to rely on our populations' innovative forms of organizations, such as women's groups (groups of 10 to 20 women), which receive funding to develop economic activities; dahiras, which are religion-based self-help and solidarity associations of men and women; sporting, cultural associations, and economic interest groups. critically for important intellectual content. He gave final approval of the version to be published; agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. d) AK contributed to the conception and design, the acquisition, analysis and interpretation of data; was involved in drafting the manuscript and revising it critically for important intellectual content. He gave final approval of the version to be published; agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. 
e) MD contributed to the conception and design, the acquisition, analysis and interpretation of data; was involved in drafting the manuscript and revising it critically for important intellectual content. He gave final approval of the version to be published; agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. f) DC contributed to the conception and design, the acquisition, analysis and interpretation of data; was involved in drafting the manuscript and revising it critically for important intellectual content. He gave final approval of the version to be published; agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
2019-03-16T13:08:02.427Z
2016-10-12T00:00:00.000
{ "year": 2016, "sha1": "568d63bcf8b79372e1b802a8687c599dc9737ee5", "oa_license": "CCBY", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.sjph.20160406.15.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "9dbfa0b5f24a790741d457f29533612d2779b859", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234930626
pes2o/s2orc
v3-fos-license
A Comprehensive Understanding of UA-ADRCs (Uncultured, Autologous, Fresh, Unmodified, Adipose Derived Regenerative Cells, Isolated at Point of Care) in Regenerative Medicine It has become practically impossible to survey the literature on cells derived from adipose tissue with the aim to apply them in regenerative medicine. The aim of this review is to provide a jump start to understanding the potential of UA-ADRCs (uncultured, unmodified, fresh, autologous adipose derived regenerative cells isolated at the point of care) in regenerative medicine. We show that serious and adequate clinical research demonstrates that tissue regeneration with UA-ADRCs is safe and effective. ADRCs are neither 'fat stem cells' nor could they exclusively be isolated from adipose tissue, as ADRCs contain the same adult (depending on the definition) pluripotent or multipotent stem cells that are ubiquitously present in the walls of small blood vessels. Of note, the specific isolation procedure used has significant impact on the number and viability of the cells and hence on safety and efficacy of UA-ADRCs. Furthermore, there is no need to further separate adipose-derived stem cells (ASCs) from ADRCs if the latter were adequately isolated from adipose tissue. Most importantly, UA-ADRCs have the physiological capacity to adequately regenerate tissue without need for manipulating, stimulating and/or (genetically) reprogramming the cells for this purpose. Tissue regeneration with UA-ADRCs fulfills the criteria of homologous use. Introduction: what are UA-ADRCs and how are they used in regenerative medicine? The literature on cells derived from adipose tissue with the aim to apply them in regenerative medicine has become practically impossible to survey, even for experts. A search in PubMed on "adipose derived stem cells" on 22 February 2020 yielded over 10,000 citations, among them approximately 1000 reviews. Furthermore, there is still disagreement about the term "pluripotent stem cell", "multipotent stem cell", stromal vascular fraction (SVF), adipose derived regenerative cells (ADRCs) and adipose derived stem cells (ASCs) in the literature. For example, a very recent study defined microvascular pericytes as true pluripotent adult stem cells with the ability to produce structures typical for the three primitive germ layers (ectoderm, mesoderm and endoderm) [1]. This is in contrast to, e.g., a definition of pluripotent stem cells provided by the U.S. Library of Congress as embryonic stem cells with the ability to become any type of cell in the body (nerve, muscle, blood, etc.), in contrast to multipotent stem cells that develop from pluripotent stem cells as the embryo grows, with the ability to develop specific types of cells (terminally differentiated cells) [2]. According to the latter definition microvascular pericytes could not be considered pluripotent as long as it would not have been demonstrated that these cells have the ability to become any type of cell in the body. Ultimately, this would require to develop a full body from a few microvascular pericytes (like the full body is developed from a few embyronic cells), which appears impossible. Of note, the latter would also apply to so-called induced pluripotent stem cells (iPSCs). According to a definition provided by the U.S. 
National Institutes of Health (NIH) iPSCs are adult cells that have been genetically reprogrammed to an embryonic stem cell-like state by being forced to express genes and factors important for maintaining the defining properties of embryonic stem cells [3]. In the view of the NIH mouse iPSCs demonstrate important characteristics of pluripotent stem cells, including the expression of stem cell markers, formation of tumors containing cells from all three germ layers, and the ability to contribute to many different tissues when injected into mouse embryos at a very early stage in development [3]. However, according to the aforementioned definition provided by the U.S. Library of Congress iPSCs should strictly speaking not be called pluripotent. Coming back to the pluripotent or multipotent (depending on the definition) cells in the walls of small blood vessels, the fact that it has been demonstrated that these cells are different from pericytes [4] (as stated in [1]), could make the confusion complete. In our opinion the only way out of this confusion is to comprehensively provide in any study describing the use of cells in regenerative medicine a detailed description of the following: (i) the nature of the tissue from which the cells were isolated (this also implies to describe whether the cells are respectively autologous, allogeneic or (in experimental studies) xenogeneic), (ii) the isolation procedure itself, including the specific technology that was used, (iii) every process including but not restricted to selecting, cultivating, stimulating, manipulating, (genetically) reprogramming, etc. to which the cells were exposed during the period between isolation and administration into a patient or a model organism, (iv) the exact route of administration, including the total volume of the administered final cell suspension, and (v) every additional therapy that was applied (this also comprises any administration of drugs or other biologics such as platelet rich plasma (PRP) before, during or after administration of cells; c.f., e.g., [5]). We have strictly followed this route in our recent reports about the application of uncultured, autologous, fresh, unmodified, adipose derived regenerative cells (UA-ADRCs), isolated at point of care (i.e., at the same location where harvesting of adipose tissue and injection of UA-ADRCs were carried out) with the Transpose RT /Matrase system (InGeneron, Houston, TX) [6][7][8]. In this regard we define treatments with UA-ADRCs as follows: Firstly, UA-ADRCs are isolated at the point of care from the patient's own adipose tissue, usually harvested by a mini-liposuction (in specific cases adipose tissue can also be harvested by surgical extraction). This clearly differentiates UA-ADRCs from cells that are isolated from respectively bone marrow, umbilical cord tissue, umbilical cord blood or specific organs (such as the isolation of stem cells from tendons, other connective tissue or amniotic or synovial fluid [9]). Secondly, UA-ADRCs are isolated from adipose tissue such that they are separated from both adipocytes and the connective tissues. 
In general, one has to differentiate between methods for generating so-called nanofat (described in the literature as mechanically emulsified fat tissue in a liquid form, ideally devoid of connective tissues but containing cells of the stromal vascular fraction [10]) and methods for isolating only the stromal vascular fraction (i.e., a cellular extract made from fat that is devoid of both adipocytes and connective tissues [11]). The latter can be achieved with or without the use of enzymes, with a much higher cell yield (number of nucleated cells per unit weight of adipose tissue or volume unit lipoaspirate) achieved with enzymatic methods that with non-enzymatic ones [11]. Cells that are isolated from adipose tissue in a way that they are devoid of adipocytes but not of connective tissues (e.g., [12]) should not be called stromal vascular fraction and/or ADRCs. Thirdly, UA-ADRCs are not cultivated, selected, stimulated, manipulated, (genetically) reprogrammed etc., but administered into the patient's tissue in need for regeneration (e.g., bone defects [6], heart tissue with impaired function as a consequence of previous myocardial infarction [7] or partial tendon ruptures [8], respectively) immediately after isolation of the cells (usually within less than two hours after harvesting of the adipose tissue). Cultivating UA-ADRCs in the laboratory can be applied for isolating adipose derived stem cells (ASCs), which comes along with all the potential, culture-related mechanic and oxidative stress that could affect their safety as a medicinal product [13]. Fourthly, we administer UA-ADRCs locally according to the individual patient's need. In case of bone defects UA-ADRCs can be surgically administered together with a scaffold [6]. For treating heart failure, we recently published a novel procedure for retrograde administration of UA-ADRCs through the heart's venous system, precisely to the area in need of regeneration, combined with a temporary blockage of the coronary vein at the level of a previous arterial occlusion [7]. In the case of partial tendon ruptures the cells can be directly injected into the damaged site of the tendon [8]. It is obvious that the latter applications require a final cell suspension of small volume, which is achieved with the technology we are using (usually 3 mL). Fifthly, we do not apply any other treatment together with UA-ADRCs, except for adequate rehabilitation (such as optional outpatient rehabilitation with physical therapy modalities in case of tendon regeneration [8]). In the following text we present and discuss nine statements about UA-ADRCs (as defined above) and their application in regenerative medicine, reflecting the current state of knowledge in the literature. They are summarized in Table 1. Table 1. Nine statements about UA-ADRCs and their application in regenerative medicine, reflecting the current state of knowledge in the literature. What are the rationale and advantages of using UA-ADRCs in regenerative medicine? 1. Serious and adequate clinical research demonstrates that tissue regeneration with UA-ADRCs is safe. 2. Serious and adequate clinical research demonstrates that tissue regeneration with UA-ADRCs is effective. Why and how shall regenerative cells be isolated from adipose tissue rather than from other tissues, and how shall these cells be characterized? 3. 
ADRCs are neither 'fat stem cells' nor could they exclusively be isolated from adipose tissue, as ADRCs contain the same adult pluripotent or multipotent (depending on the definition) stem cells that are ubiquitously present in the walls of small blood vessels. 4. The specific isolation procedure used has significant impact on the number and viability of the cells and hence on safety and efficacy of UA-ADRCs. 5. There is no need to further separate adipose-derived stem cells (ASCs) from ADRCs if the latter were adequately isolated from adipose tissue. 6. The minimal definitions of stromal cells as ADRCs established by the International Federation for Adipose Therapeutics and Science (IFATS) and the International Society for Cellular Therapy (ISCT) are inadequate and misleading, and therefore should be amended. How do UA-ADRCs exert their function in tissue regeneration? 7. UA-ADRCs have the physiological capacity to adequately regenerate tissue without need for manipulating, stimulating and/or (genetically) reprogramming the cells for this purpose. 8. Tissue regeneration with UA-ADRCs fulfills the criteria of homologous use. 9. A certain challenge in research with UA-ADRCs lays in the fact that labeling the cells would render them modified, and unmodified cells can only indirectly be identified after transplantation in a target tissue. 2. What are the rationale and advantages of using UA-ADRCs in regenerative medicine? Statement #1: Serious and adequate clinical research demonstrates that tissue regeneration with UA-ADRCs is safe. In a position statement recently published by representatives of the U.S. Food and Drug Administration (FDA) in The New England Journal of Medicine [14] safety of stem cell treatments was a primary focus. Marks et al. [14] specifically stated that adverse events are probably more common than is appreciated, because there is no reporting requirement when these therapies are administered outside clinical investigations [14]. In fact, a number of serious adverse events related to stem cell treatments were recently published in The New England Journal of Medicine [15][16][17]. These adverse events included development of a glioproliferative lesion of the spinal cord leading to progressive lower back pain, paraplegia and urinary incontinence after intrathecal infusions of putative mesenchymal, embryonic and fetal neural stem cells for the treatment of residual deficits from an ischemic stroke [15], vision loss after intravitreal injection of autologous ADRCs for the treatment of age-related macular degeneration [16], and lethal human herpesvirus 6-related meningoencephalitis, -myocarditis and -interstitial nephritis after allogeneic transplantation of stem cells for chronic lymphocytic leukemia [17]. These and other reports about serious adverse events related to stem cell treatments highlight the need to conduct controlled clinical studies in order to determine whether these cellular therapies are safe and effective for their intended uses. Marks et al. [14] concluded that without such studies, one would not be able to ascertain whether the clinical benefits of such therapies outweigh any potential harms. These authors also stated that although autologous stem cells may typically raise fewer safety concerns than allogeneic stem cells, their use may be associated with significant adverse events [14] (as demonstrated in [16]). 
The outcome of a recent systematic review of reported adverse events in clinical trials on adipose derived cell therapy [18] exemplifies the need to clearly differentiate between the different types of cells derived from adipose tissue, with the aim to apply them in regenerative medicine. The authors of this systematic review identified 70 studies on adipose derived cell therapy involving more than 1400 patients. Twenty out of the 70 studies were used to evaluate thromboembolic safety and mortality, immunological safety and oncological safety. From the nine studies based on which thromboembolic safety and mortality were evaluated, only four were performed with ADRCs, and all of these studies addressed treatment of myocardial infarction (the administration route was transendocardial (two studies), intramyocardial or intracoronary (one study each), respectively). Furthermore, all of the eleven studies based on which immunological safety was evaluated were performed with allogeneic ASCs. In contrast, all of the five studies based on which oncological safety was evaluated were performed with ADCRs and did not address musculoskeletal conditions or heart failure. The administration routes in these studies were subcutaneous (two studies), transurethral, periurethral or into the corpus cavernosum of the penis (one study each), respectively. In case of treatments with ADRCs in the analyses of thromboembolic safety and mortality as well as oncological safety no distinction was made between enzymatically and non-enzymatically isolated cells. The authors concluded that adipose-derived cell therapy has so far shown a favorable safety profile, but safety assessment description has, in general, been of poor quality [18]. Furthermore, they encouraged future studies to maintain a strong focus on the safety profile of cell therapy, so its safeness can be confirmed [18]. For the aforementioned examples of the application of UA-ADRCs (treatment of bone defects [6], of heart failure with retrograde administration of the cells through the heart's venous system to the area in need of regeneration [7] and of partial tendon ruptures [8]) this safety analysis [18] is almost irrelevant. Rather, the safety of any specific combination of the type of administered cells (enyzmatically or non-enzymatically isolated ADRCs, autologous or allogeneic ASCs, etc.), the target tissue and the exact administration route must be separately evaluated. In this regard we recently performed a prospective, randomized, controlled, first-in-human pilot study on the safety and efficacy of treating symptomatic, partial-thickness rotator cuff tear (sPTRCT) with UA-ADRCs [8]. Specifically, we treated n=11 subjects with symptomatic partial rupture of the supraspinatus tendon with a single injection of UA-ADRCs, and another n=5 subjects suffering from the same condition with a single subacromial corticosteroid injection (all patients had not responded to an initial phase of at least six weeks with physical therapy treatments). All injections were made by a qualified physician under ultrasound guidance. Because of its first-in-human character the entire study was carried out according to strict guidelines set forth by U.S. FDA [14,19]. 
Over a period of one year after treatment, any illness that made it necessary for a subject to see a physician had to be documented for all subjects, completely irrespective of whether the illness was related to the initial treatment or not (e.g., the breaking of a tooth in one subject 164 days post treatment). This rigorous procedure prevented a situation in which only adverse events that are specifically looked for are found [18], and resulted, for the first time, in a complete risk profile for the treatment of a musculoskeletal disease with UA-ADRCs [8]. Of note, the risks associated with treatment of sPTRCT with UA-ADRCs were no higher than those associated with corticosteroid treatment; there were no serious complications. However, one subject treated with corticosteroid injection developed a full rotator cuff tear during the course of this pilot study [8]. This pilot study suggested that the use of UA-ADRCs in subjects with sPTRCT is safe. To verify the results of this initial safety pilot study in a larger patient population, a randomized controlled trial on 246 patients suffering from sPTRCT is currently ongoing [20].
Statement #2: Serious and adequate clinical research demonstrates that tissue regeneration with UA-ADRCs is effective.
In the aforementioned position statement recently published by representatives of the U.S. FDA in The New England Journal of Medicine [14] it was stated that the literature is replete with instances of therapeutic interventions pursued on the basis of expert opinion and patient acceptance that ultimately proved ineffective or harmful when studied in well-controlled trials comparing them with the standard of care. In this regard, another recent systematic review focused on the efficacy of treatments using ADRCs [21]. The authors identified 73 related clinical studies, of which 12 (16.5%) were randomized controlled trials (RCTs) (defined as Evidence Based Medicine (EBM) Level II in [21]), 14 (19.2%) were cohort studies (EBM Level III in [21]) and 47 (64.4%) were case series (EBM Level IV in [21]). Case series and cohort studies are important for assessing whether a novel treatment appears effective and should be considered for further investigation. However, the only way to reduce certain sources of bias is to test the effectiveness of new treatments in RCTs against no treatment, a conventional treatment or a placebo. We therefore restrict our analysis to the RCTs identified in [21]. Two of the 12 RCTs listed in [21] should not be considered RCTs in a strict sense. In one of these studies [22], n=16 subjects with bilateral knee osteoarthritis were treated with UA-ADRCs on one side and with hyaluronic acid (HA) injection on the other side; allocation of either side to UA-ADRCs or HA was performed randomly. In another study [23], subjects suffering from Achilles tendinopathy were randomly allocated to treatment with UA-ADRCs (n=21) or with platelet-rich plasma (PRP) (n=23). However, for evaluating treatment success using diagnostic ultrasound and magnetic resonance imaging (MRI), all subjects were pooled into one group (n=44) and no comparisons between the different treatments were performed. Four other RCTs were excluded from further consideration. In one of them ([24]; focusing on treatment of recalcitrant chronic leg ulcers), centrifuged adipose tissue rather than ADRCs was applied. The other three RCTs that were excluded addressed myocardial infarction [25][26][27].
They were excluded because in the APOLLO trial [25] the mean left ventricular ejection fraction (LVEF) was 52% at baseline, which is considered an incorrect target population [28]; in the PRECISE trial [26] the LVEF was not investigated with cardiac MRI, which is considered the state of the art for accurate, comprehensive and reproducible measurements of cardiac chamber dimensions, volumes, function and infarct size [28]; and the ATHENA trial [27] was initially put on hold because of delivery-related cerebrovascular events [7] and afterwards terminated prematurely due to the resulting prolonged enrollment time [27]. The remaining six RCTs are summarized in Table 2. In only one of them (addressing Achilles tendinopathy [29]) were UA-ADRCs applied as the sole therapy. The same study was the only one in which a commercially available (non-enzymatic) method was used for isolating ADRCs (FastKit; Corios, San Giuliano Milanese, Italy). Only short-term benefits of injecting UA-ADRCs compared to injection of PRP were observed in this study (statistically significantly lower mean VAS pain scores at 15 and 30 days (D15 and D30) post treatment; a statistically significantly higher mean VISA-A score on D30 post treatment; and a statistically significantly higher mean AOFAS score on D15 post treatment), but no long-term benefits (i.e., no statistically significant differences between the groups on D60, D120 and D180 post treatment) [29]. In the other five RCTs listed in Table 2, ADRCs were isolated with experimental (enzymatic) methods that are not commercially available, and were not used as the sole therapy. Furthermore, in none of these six RCTs was a safety profile established that is comparable to the one obtained in our recent pilot study on treating sPTRCT with UA-ADRCs [8]. This overall very limited evidence base stands in contrast to the high number of enzymatic and non-enzymatic methods for isolating ADRCs that are commercially available (addressed in detail in Section 3.2). Furthermore, if the so-called 'stem cell' preparations that are recommended, prescribed or delivered in many clinical centers around the world (most probably more than 1,000 in the U.S. [35]) are ADRCs, these treatments are indeed performed without sufficient data to support their true efficacy [36], supporting the U.S. FDA's warnings about these stem cell therapies [37]. We consider our recent prospective, randomized, controlled, first-in-human pilot study on the safety and efficacy of treating sPTRCT with UA-ADRCs ([8]; described in detail in Section 2.1) a first, important step towards overcoming this unsatisfactory situation. Despite the small number of subjects in this pilot study, those in the UA-ADRCs group (n=11) showed statistically significantly higher mean American Shoulder and Elbow Surgeons Standardized Shoulder Assessment Form total scores (ASES total scores) both 24 and 52 weeks post treatment than those in the corticosteroid group (n=5). The ASES total score takes into account the patient's pain situation and the functionality of the shoulder [38,39]. As already mentioned above, a randomized controlled trial on 246 patients suffering from sPTRCT is currently ongoing to verify the results of this pilot study [20].
Statement #3: ADRCs are neither 'fat stem cells' nor can they be isolated exclusively from adipose tissue, as ADRCs contain the same adult (depending on the definition) pluripotent or multipotent stem cells that are ubiquitously present in the walls of small blood vessels.
One of the greatest misconceptions in stem cell-based regenerative medicine may be the designation of ADRCs as 'fat stem cells' and related descriptions in the recent literature (e.g., 'adipose-derived stem cells: fatty potentials for therapy' [40], 'using fat to fight disease' [41], 'adipose tissue stem cells for therapy' [42] and 'stem cells derived from fat' [43]). In fact, these cells are not 'fat stem cells' at all. Rather, they are adult (depending on the definition) pluripotent or multipotent stem cells located in the walls of small blood vessels (henceforth vascular associated mesenchymal stem cells (MSCs)) (reviewed in [1,4]). Because blood vessels are stimulated to grow, branch and invade developing tissues and organs very early during human embryonic development (starting on approximately Day 18 [44]), the presence of vascular associated MSCs in this vascular location results in an even distribution of these cells throughout the body. As a result, vascular associated MSCs can in principle also be isolated from small blood vessels in other organs (shown for heart and skeletal muscle in [4]). The reason why vascular associated MSCs are isolated from adipose tissue is that the latter contains a large number of small blood vessels and is relatively easy to harvest in most patients through liposuction. Furthermore, vascular associated MSCs can represent up to 12% of the total population of SVF cells [7], whereas only 0.001-0.1% of the total population of bone marrow nucleated cells represent MSCs [45,46] (see the brief calculation sketched below). Besides this, harvesting adipose tissue (by liposuction) is typically much less invasive than harvesting bone marrow [40,45,47]. Approximately 400,000 elective liposuction surgeries are performed in the U.S. per year [48], with a serious adverse event rate reported between 0.07% and 0.7% [49,50]. Another misconception is the belief that microvascular pericytes are the vascular associated MSCs in the walls of small blood vessels (e.g., [1,51,52]). This misconception is based on the fact that expression of the proteoglycan neural/glial antigen 2 (NG2) has long been associated with pericytes [51][52][53]. In the central nervous system (CNS), NG2-positive cells are responsible for the generation of oligodendrocytes [54]. Some authors presented results suggesting that even astrocytes and neurons may be generated from NG2-positive cells, which would make the latter similar to neural stem cells [54]. However, other authors could not reproduce these findings [55]. It is of note that different populations of pericytes and pericyte-like progenitor cells have been described in the literature. The best-studied pericytes are probably those forming part of typical capillary structures composed of pericytes and endothelial cells [56][57][58]. Another type of pericyte is located at the surface of small blood vessels, partly taking over regulation of vessel diameter and, thus, hemodynamic regulation in the CNS [59]. Pericyte-like progenitor cells have also been described in the adventitia of larger vessels [60]. However, all of these types of pericytes occupy positions in the wall of capillaries or larger vessels that are clearly distinct from the position of the NG2-positive cells in the wall of a small human arteriole shown in Figure 1a. This indicates that NG2 is expressed by more cells than just pericytes, and it is much more likely that vascular associated MSCs are also immunopositive for NG2 [4]. Figure 1b shows the current concept of the localization of vascular associated MSCs in the wall of small vessels.
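As referenced above, a brief back-of-the-envelope comparison of the quoted MSC frequencies follows. It is a minimal Python sketch using the literature values cited in the text (up to 12% of SVF cells versus 0.001-0.1% of bone marrow nucleated cells); the resulting fold-differences should be read as illustrative orders of magnitude rather than measured values, since the actual fractions vary between donors, tissues and isolation methods.

```python
# Back-of-the-envelope comparison of the MSC frequencies quoted above.
# Input fractions are the literature values cited in the text ([7,45,46]);
# the computed fold-differences are illustrative orders of magnitude only.
svf_msc_fraction = 0.12                 # up to 12% of SVF cells [7]
bm_msc_fractions = (0.00001, 0.001)     # 0.001% and 0.1% of bone marrow nucleated cells [45,46]

for bm_fraction in bm_msc_fractions:
    fold = svf_msc_fraction / bm_fraction
    print(f"SVF ({svf_msc_fraction:.0%}) vs bone marrow ({bm_fraction:.3%}): "
          f"~{fold:,.0f}-fold higher")
```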
Statement #4: The specific isolation procedure used has a significant impact on the number and viability of the cells and hence on the safety and efficacy of UA-ADRCs.
It is important for physicians and patients to understand that terms such as 'UA-ADRCs', 'fat stem cells', 'stromal vascular fraction', 'SVF', etc. are only generic terms for cell preparations that are isolated from the patient's own adipose tissue immediately before transplantation into the target tissue. In fact, an optimal technology for isolating ADRCs should be able to isolate the highest possible number of living ADRCs in the shortest possible time from the smallest possible amount of adipose tissue/lipoaspirate, and should result in the highest possible concentration of cells in the final cell suspension (i.e., the smallest possible volume of cell suspension) for immediate application. Various enzymatic and non-enzymatic methods were developed for this purpose, some of which are available on the market (reviewed in [11,62-65]). With enzymatic methods, the connective tissue of the adipose tissue and the walls of the small blood vessels are largely dissolved, resulting in a cell yield (i.e., number of isolated ADRCs) that is on average many times higher than the cell yield achieved with non-enzymatic methods, in which the cells are isolated purely mechanically [11] (Fig. 2a). In addition, for the vast majority of non-enzymatic methods described in the literature, no data are available on the relative number of living cells (or the extent of cell death due to the mechanical processing of the adipose tissue) in the final cell suspension [11]. However, these aspects are of immediate clinical relevance, because (i) the transplantation of an insufficient number of UA-ADRCs can lead to an unsatisfactory clinical result, and it does not appear medically justifiable to remove much more adipose tissue from the patient simply because (for whatever reason) a non-enzymatic method for isolating ADRCs is used; and (ii) injection of dying cells into tissue can lead to inflammatory reactions [66]. Considering the limited healing capacity of tendons in particular, undesirable side effects of any kind should be avoided as far as possible in tissue regeneration. We have recently demonstrated that isolating ADRCs from adipose tissue using the Transpose RT / Matrase system (InGeneron) results in a high cell yield (7.2 × 10^5 ± 0.90 × 10^5 ADRCs per mL lipoaspirate in [11]), high cell viability (85.9% ± 1.1% in [11]) and, thus, a high number of living cells per mL lipoaspirate (6.25 × 10^5 ± 0.79 × 10^5 ADRCs per mL lipoaspirate in [11]). To our knowledge the latter is the highest value ever reported in studies describing methods for isolating ADRCs [11] (Fig. 2b). Figure 3 provides a schematic representation of isolating ADRCs from lipoaspirate with the Transpose RT / Matrase system (InGeneron).
Figure 2 (legend; data from [11]): The panels show individual data (dots) as well as the mean and standard error of the mean (SEM) for the other enzymatic and non-enzymatic methods. One can immediately see that, on average, enzymatic methods result in a much higher cell yield than non-enzymatic methods. Furthermore, for most of the non-enzymatic methods, the number of living cells per mL lipoaspirate could not be calculated because the corresponding relative numbers of living cells were not reported.
Of all reported methods, the Transpose RT / Matrase system (InGeneron) did not result in the highest cell yield (arrow in a) but in the highest number of living cells per mL lipoaspirate (arrow in b), which appears to be the clinically most relevant parameter (as outlined in the main text).
Figure 3 (legend): (2) the filled processing tubes are subjected in an inverted position inside the Transpose RT system to repetitive acceleration and deceleration for 30 minutes at 39 °C; (3) the processed lipoaspirate solution is filtered through a 200 µm filter and transferred into a wash tube; (4) after filling the wash tube with saline (room temperature) up to the MAX FILL line, the cells are separated from the rest of the tissue by centrifugation at 600 g for 5 minutes at room temperature; (5) the ADRCs (approximately 2 mL) are extracted through a swabable luer vial adapter at the bottom of the wash tube, and the remaining substances (fat, debris and liquid) are discarded; (6) the cells are returned into the empty wash tube and (after adding fresh saline up to the MAX FILL line) centrifuged again for 5 minutes; (7,8) the previous washing step is repeated; and (9) finally the concentrated ADRCs (approximately 3 mL) are extracted and slowly pushed through a luer coupler into a new sterile syringe for further application to the subject.
Statement #5: There is no need to further separate adipose-derived stem cells (ASCs) from ADRCs if the latter were adequately isolated from adipose tissue.
Figure 4 provides a schematic overview of the relationship between the terms adult stem cells, vascular associated MSCs, ADRCs and ASCs. Vascular associated MSCs are a subgroup of adult stem cells and are contained in ADRCs. ASCs can be obtained by culturing ADRCs and, thus, selectively propagating the vascular associated MSCs contained in ADRCs. For example, it was shown that culturing ADRCs increased the mean relative number of cells immunopositive for the surface marker CD29 (a marker of ASCs [65,67]) (CD29+ cells) from 71% at passage 1 to 95% at passage 4, and the mean relative number of CD44+ cells (another marker of ASCs [65,67]) from 84% at passage 1 to 98% at passage 4 [68]. Considering the fact that ADRCs also contain other types of cells in addition to vascular associated MSCs (among them blood-derived cells, endothelial cells and pericytes [7]), one could be inclined to believe that ASCs may be a better choice for tissue regeneration than ADRCs. For the following reasons, however, this is not the case: Firstly, cultivating ADRCs inevitably results in exposure to potential, culture-related mechanical and oxidative stress that could affect the safety of ASCs as a medicinal product [13]. As a result, ASCs may not meet the criterion 'minimally manipulated' defined in 21 CFR 1271.10(a) [69] (Title 21 is the portion of the Code of Federal Regulations that governs food and drugs within the U.S. for the FDA [70]) and, thus, may not be regulated solely under Section 361 of the Public Health Service (PHS) Act and 21 CFR Part 1271 [71]. Rather, ASCs may be regulated as a drug, device and/or biologic product under the Federal Food, Drug and Cosmetic (FD&C) Act and/or Section 351 of the PHS Act (and applicable regulations) in the U.S. [71]. The European Medicines Agency may consider ASCs an Advanced Therapy Medicinal Product (ATMP) [72].
Secondly, a number of recent studies on culture systems and animal models have indicated non-inferiority or even superiority of UA-ADRCs over ASCs in rescuing heart function after acute myocardial infarction [73] as well as in tendon regeneration [74], bone regeneration [75] and erectile function recovery after cavernous nerve injury [76] (see also [77]). It is currently unknown whether this is due to alterations of the physiological functions of vascular associated MSCs when they are (as ASCs) exposed to culture-related mechanical and oxidative stress, or due to the fact that ADRCs comprise more cell types than just ASCs, which may act synergistically in regenerating tissue.
Statement #6: The minimal definitions of stromal cells as ADRCs established by the International Federation for Adipose Therapeutics and Science (IFATS) and the International Society for Cellular Therapy (ISCT) are inadequate and misleading, and therefore should be amended.
In a position statement published by ISCT in 2006, the following minimal criteria for defining multipotent MSCs were described: being adherent to plastic, expressing the surface markers CD73, CD90 and CD105, and having the ability to differentiate into osteoblasts, adipocytes and chondrocytes [78]. This definition has a number of shortcomings. Most importantly, fibroblasts are also adherent to plastic and express CD73, CD90 and CD105, but are not MSCs and cannot transdifferentiate into other lineages [79]. Furthermore, the true pluripotent stem cells do not yet express CD73, CD90 and CD105 [4]. Rather, expression of cell surface markers is a dynamic process. For example, when cultured in fetal bovine serum or platelet lysate culture media, MSCs can turn on new surface markers [4]. Alternatively, they can turn down surface markers in culture, such as the loss of the previously expressed progenitor marker CD34 or the endothelial progenitor marker CD31 [4]. Taken together, the vast majority of reported methods for isolating ADRCs were not characterized according to the position statements published by IFATS and ISCT [78,80]. Considering the large range of the few data that have been reported in this regard, it appears reasonable to hypothesize that determining surface markers of ADRCs is in principle not suitable for characterizing a method for isolating ADRCs. In our opinion, any method for isolating ADRCs should primarily be assessed by the safety and efficacy of clinical applications of the isolated cells, determined by adequate clinical trials (pilot studies followed by RCTs).
Statement #7: UA-ADRCs have the physiological capacity to adequately regenerate tissue without need for manipulating, stimulating and/or (genetically) reprogramming the cells for this purpose.
In the aforementioned position statement recently published by representatives of the U.S. FDA in The New England Journal of Medicine [14] it was stated that outside the setting of hematopoietic reconstitution and a few other well-established indications, the assertion that stem cells are intrinsically able to sense the environment into which they are introduced and address whatever functions require replacement or repair, whether injured knee cartilage or a neurologic deficit, is not based on scientific evidence [14]. The following two examples from our own published clinical research using UA-ADRCs may give a different picture. The first example (described in detail in [6]) is a male, 79-year-old patient who presented with a partly failing maxillary dentition (yellow arrows in Fig.
5a) and was treated with a bilateral external sinus lift procedure as well as a bilateral lateral alveolar ridge augmentation (called 'guided bone regeneration / maxillary sinus augmentation / lateral alveolar ridge augmentation'; henceforth: GBR-MSA/LRA). On the right side, GBR-MSA/LRA was performed with a combination of UA-ADRCs, fraction 2 of plasma rich in growth factors (PRGF-2) and an osteoinductive scaffold (OIS) (Treatment A). On the left side, GBR-MSA/LRA was performed with the same combination of PRGF-2 and OIS but without UA-ADRCs (Treatment B). Accordingly, the only difference between the treatments was the presence (Treatment A) or absence (Treatment B) of UA-ADRCs. Biopsies were collected at six weeks and 34 weeks post treatment. At the latter time point, implants were placed. Radiographs (6 weeks and 32 months post treatment) demonstrated excellent bone healing (yellow arrows in Fig. 5b; Fig. 5c). No radiological or histological signs of inflammation were observed. Detailed histologic, histomorphometric and immunohistochemical analysis of the biopsies evidenced that Treatment A resulted in better and faster bone regeneration than Treatment B. Specifically, Treatment A resulted in a faster build-up of higher relative amounts (area/area) of newly formed bone, connective tissue and arteries as well as in lower relative amounts of adipocytes and veins at 34 weeks after GBR-MSA/LRA than Treatment B (Fig. 5d-f). The second example (described in detail in [4]) is a male, 51-year-old patient who presented with recurring and increasing pain in both knee joints during walking and other activities. The patient's history included a tibial chondrocyte transplant that had been performed three years previously. Figure 6a shows an arthroscopic view of third-degree damage to the right tibial plateau, where the transplanted chondrocytes were gone and only the artificial matrix with small holes that had been implanted on the tibial plateau was still present (white asterisk in Fig. 6a). Furthermore, considerable osteoarthritic damage of the femoral cartilage was observed (black asterisk in Fig. 6a). Figure 6b shows the situation after arthroscopic removal of the failed chondrocyte transplant (white asterisk in Fig. 6b) as well as the 'mushy' and damaged cartilage structure on the femoral condyles before it was removed (black asterisk in Fig. 6b). Then, the right knee was treated with a single application of UA-ADRCs, whereas the left knee was treated with a standard therapy, i.e., arthroscopic removal of damaged cartilage and drilling of small holes into the bone. Control arthroscopies were performed one year later. On the right side (treated with UA-ADRCs), complete healing of the tibial defect (white asterisk in Fig. 6c) and of the femoral parts (black asterisk in Fig. 6c) was observed, with formation of new whitish cartilage that showed a sharp demarcation border to the original, more yellowish cartilage (arrows in Fig. 6c). In contrast, a somewhat uneven, overshooting fibroblastic scar formation was found on the left side (treated with a standard therapy) (asterisk in Fig. 6d), without a sharp demarcation border to the original cartilage (arrows in Fig. 6d). This indicated that there was some sort of healing, but not the regrowth of organized cartilage that we hypothesized for the right knee after application of UA-ADRCs. Small biopsies that were taken from the regenerated tissue during the follow-up arthroscopies showed the following.
After application of UA-ADRCs there was newly formed cartilage with a zonal organization and (as in a histology textbook) differently shaped chondrocytes in a superficial layer (SL in Fig. 6e), middle layer (ML in Fig. 6e) and deep layer (DL in Fig. 6e). Furthermore, the contact zone between the newly formed cartilage and bone showed (also as in a histology textbook) typical chondrocytes with a small nucleus and a surrounding hollow space (arrows in Fig. 6f). In contrast, after treatment with a standard therapy there was a more amorphous fibrocartilage with scattered cells (arrows in Fig. 6g) but without layered organization, and the contact zone between the newly formed cartilage and bone showed an infiltration with inflammatory cells, fibroblasts (arrows in Fig. 6h) and small blood vessels (arrowheads in Fig. 6h).
Figure 5. Example of regeneration of bone with UA-ADRCs (modified from [6]). Details are provided in the main text. Abbreviations: R, right; L, left; B, bone; Al, allograft; V, vein; Ad, adipocyte; F / CT, fibrin and connective tissue. In (f), the green bars represent data obtained on the right side (Treatment A with UA-ADRCs) six weeks (light green bars) and 34 weeks (dark green bars) post treatment, and the red bars represent data obtained on the left side (Treatment B without UA-ADRCs) six weeks (light red bars) and 34 weeks (dark red bars) post treatment. With cells, considerably more bone and connective tissue was formed already at six weeks than was achieved without cells even after six months. The scale bar in (f) represents 100 µm.
Figure 6. Example of regeneration of knee cartilage with UA-ADRCs (modified from [4]). Details are provided in the main text. The scale bar in (h) represents 100 µm.
Accordingly, in both examples the application of UA-ADRCs resulted in better and more adequate tissue regeneration than a standard therapy. Importantly, this was achieved in both examples without any manipulation, stimulation and/or (genetic) reprogramming of the UA-ADRCs prior to transplantation. As a result, these examples demonstrate that UA-ADRCs are indeed intrinsically able to sense the environment into which they are introduced and adequately regenerate tissue. Related examples from the fields of wound healing and tendon regeneration can be found in [87,88].
Statement #8: Tissue regeneration with UA-ADRCs fulfills the criteria of homologous use.
According to 21 CFR 1271.3(c), homologous use means the repair, reconstruction, replacement, or supplementation of a recipient's cells or tissues with a human cell, tissue, and cellular and tissue-based product (HCT/P) that performs the same basic function or functions in the recipient as in the donor [69]. As an example, FDA's regulatory considerations for HCT/Ps define transplantation of a heart valve to replace a dysfunctional heart valve as homologous use because the donor heart valve performs the same basic function in the donor as in the recipient (i.e., ensuring unidirectional blood flow within the heart) [71]. On the other hand, the same regulatory considerations by the FDA specify that HCT/Ps from adipose tissue used to treat musculoskeletal conditions such as arthritis or tendonitis (by regenerating or promoting the regeneration of articular cartilage or tendon) are generally not considered homologous use because regenerating or promoting the regeneration of cartilage or tendon is not a basic function of adipose tissue [71].
However, regeneration is not based on adult tissue such as adipose tissue, but on the presence of the ubiquitously distributed small universal stem cell. Based on the following evidence, we hypothesize that future research will demonstrate that regeneration of musculoskeletal tissue is indeed a basic function of the stromal vascular fraction (and, thus, that HCT/Ps from adipose tissue used to treat musculoskeletal conditions should be considered homologous use). Firstly, ADRCs can induce the formation of new blood vessels in adipose tissue [89] as well as in bone [6], ischemic myocardium [7] and other target tissues [90]. Accordingly, application of ADRCs with the aim of inducing the formation of new blood vessels fulfills the criterion of the same basic function or functions in the recipient as in the donor and, thus, should be considered homologous use. Secondly, it is well known that various pathological conditions result in mobilization of stem cells into the peripheral blood. For example, in the peripheral blood of patients suffering from Crohn's disease [91] or skin burn injury [92], higher mean numbers of cells expressing markers for MSCs, endothelial progenitor cells and very small embryonic-like stem cells (VSELSCs) were found than in the peripheral blood of age-matched controls. Other studies demonstrated mobilization of stem cells into the peripheral blood after acute myocardial infarction in both patients [93] and an animal model [94], and after Achilles tendon transection in an animal model [95]. However, a recent study found that one day after induction of acute myocardial infarction (AMI) in rats, the number of ASCs was significantly reduced in the stromal vascular fraction compared to healthy control animals, without alterations in the cell surface marker profile and the differentiation capacity of the ASCs [106]. The authors of this study hypothesized that the decreased number of ASCs after AMI could be the result of mobilization of vascular associated MSCs from adipose tissue into the peripheral blood. Collectively, on the basis of these data one can hypothesize that isolating ADRCs from a patient's adipose tissue and transplanting them as UA-ADRCs into the same patient's target tissue in need of regeneration may represent an augmentation of a physiological process that also runs, to a lesser extent, on its own. It will be the task of future research to test this hypothesis. If this hypothesis turns out to be correct, application of ADRCs should be fully considered homologous use, because regenerating or promoting the regeneration of musculoskeletal tissue would indeed be a basic function of a certain component of adipose tissue.
Statement #9: A certain challenge in research with UA-ADRCs lies in the fact that labeling the cells would render them modified, and unmodified cells can only be identified indirectly after transplantation into a target tissue.
With regard to the potential mechanisms of action of UA-ADRCs in tissue regeneration, it is crucial to bear in mind that, in contrast to ASCs, UA-ADRCs in principle cannot be labeled. Accordingly, it is not possible to experimentally (or even clinically) determine whether the following benefits of ASCs also apply to UA-ADRCs, although it is reasonable to hypothesize that this is indeed the case. Specifically, it has been demonstrated that ASCs can stay locally, survive and engraft in the new host tissue into which the cells were applied [107] (an example is shown in Fig.
7), differentiate under guidance of the new microenvironment into cells of all three germ layers [11], integrate into and communicate within the new host tissue by forming direct cell-cell contacts [4], exchange genetic and epigenetic information through release of exosomes [4], participate in building new vascular structures in the host tissue [4,6,7] (cf. Fig. 7), positively influence the new host tissue by release of cytokines (among them vascular endothelial growth factor and insulin-like growth factor 1) [108], protect cells at risk in the new host tissue from undergoing apoptosis [108][109][110] and exert immunomodulatory and anti-inflammatory effects [111,112]. Most probably, the combination of these mechanisms of action renders UA-ADRCs a powerful tool in tissue regeneration.
Figure 7. The panels show photomicrographs of paraffin-embedded, 5 µm thick tissue sections of a post mortem heart from a pig, taken from the left ventricular border zone of myocardial infarction ten weeks after experimental occlusion of the left anterior descending (LAD) artery for three hours, followed by delivery of eGFP-labeled autologous ASCs into the balloon-blocked LAD vein (matching the initial LAD occlusion site) at four weeks after occlusion of the LAD (the experiments are described in detail in [7]). (a-e) One tissue section was stained with DAPI (blue) (a) and processed for immunofluorescent detection of GFP (green) (b), von Willebrand factor (vWF) (red) (c) and Troponin (yellow) (d). The arrows indicate cell nuclei that were immunopositive for GFP and were found in the wall of small vessels (the positions of these cell nuclei are also labeled in the panel representing vWF). (f-j) Another tissue section was stained with DAPI (blue) (f) and processed for immunofluorescent detection of GFP (green) (g), Cx43 (red) (h) and Troponin (yellow) (i). The circles indicate regions where most of the cell nuclei were immunopositive for GFP, and the arrow indicates a GFP-positive cell nucleus inside (or directly adjacent to) a cardiomyocyte. (k-o) A third tissue section was stained with DAPI (blue) (k) and processed for immunofluorescent detection of GFP (green) (l), Ki-67 (red) (m) and Troponin (yellow) (n). The white arrows point to cell nuclei that were immunopositive for GFP but not for Ki-67, the yellow arrows to a cell nucleus that was immunopositive for Ki-67 but not for GFP, and the red arrows to a cell nucleus that was immunopositive for both GFP and Ki-67 (indicating that this cell had re-entered the cell cycle). The scale bar represents 25 µm in the merged panels and 50 µm in the individual panels. For labeling cells with eGFP, cells isolated from subcutaneous adipose tissue of pigs (described in detail in [7]) were expanded in cell culture for 5-7 days. At passage 3, the cells were simultaneously transfected (using FuGENE 6 Transfection Reagent; Promega Corporation, Madison, WI, USA) with plasmids encoding eGFP fused to the nuclear localization signal H2B and other plasmids containing PiggyBac Transposase (System Bioscience, Mountain View, CA, USA), which was transiently expressed in order to integrate the eGFP cargo into the genome. After transfection, eGFP-positive cells were selected for 14 days in complete growth media containing 400 ng/ml G418 (Life Technologies, Carlsbad, CA, USA). Then, cells were separated by fluorescence-activated cell sorting (FACS) using a BD FACSAria Fusion device (BD Bioscience, San Jose, CA, USA).
Sorted cells (with >95% of the cells expressing eGFP) were expanded for an additional 5-7 days in cell culture. On the day of delivery, eGFP+ cells were trypsinized for 5 min at 37 °C, washed twice with PBS, centrifuged at 600 g for 10 min, passed through a 70 µm cell strainer (Falcon, Corning, NY, USA) to avoid cell clumping, and suspended in 10 ml sterile saline solution for delivery (B. Braun Medical Inc., Bethlehem, PA, USA) (on average 10 × 10^6 cells per animal). For counting, cells were stained with a fluorescent nucleic acid stain (SYTO13; Life Technologies, Grand Island, NY, USA) following the manufacturer's instructions, and then counted using a hemocytometer under an Eclipse Ti-E inverted fluorescence microscope (Nikon Corporation, Tokyo, Japan) with a PlanFluor 10× objective (numerical aperture [NA] = 0.3) (Nikon). Expression of eGFP was confirmed by fluorescence microscopy during cell counting. After de-paraffinizing and rehydrating, tissue sections were washed with PBS containing 0.3% Triton X-100 (Sigma Aldrich, St. Louis, MO, USA) and blocked with 10% casein solution (Vector Laboratories, Burlingame, CA, USA) for 30 min at room temperature. Sections were incubated overnight with diluted primary antibodies and subsequently with diluted secondary antibodies for 1 h. The following antibodies were used: Goat anti-GFP, Mouse anti-cardiac troponin T, Rabbit anti-Cx43 and Rabbit anti-vWF (all from Abcam, Cambridge, MA, USA); Mouse anti-Ki67, Alexa Fluor 647 conjugated (BD Bioscience); Rabbit anti-goat-IgG secondary antibody, FITC conjugated, Goat anti-rabbit-IgG secondary antibody, Cy5 conjugated, Donkey anti-mouse-IgG secondary antibody, TRITC conjugated (all from Life Technologies), and Goat anti-chicken-IgG secondary antibody, Texas Red conjugated (Thermo Scientific, Waltham, MA, USA). Counterstaining of nuclei and mounting were performed with Vectashield Antifade Mounting Medium with DAPI (Vector Laboratories). The photomicrographs shown in this figure were produced by digital photography using a CoolSNAP HQ2 CCD monochrome camera (1392 x 1040 pixels; Photometrics, Tucson, AZ, USA) attached to an Eclipse Ti-E inverted microscope (Nikon) and NIS-Elements AR software (Nikon), using the following objectives (all from Nikon): PlanApo 20× (NA = 0.75) and 40× (NA = 0.95). Merged figures were constructed using ImageJ software (version 1.51j8; U.S. National Institutes of Health). The final figures were constructed using Corel Photo-Paint X7 and Corel Draw X7 (both versions 17.5.0.907; Corel, Ottawa, Canada). Only minor adjustments of contrast and brightness were made using Corel Photo-Paint, without altering the appearance of the original materials.
Summary
This article demonstrates that serious and adequate basic and clinical research is in progress to establish a comprehensive understanding of the potential of UA-ADRCs for regenerative medicine purposes. One of the biggest challenges for the near future is to eliminate the substantial discrepancy between the very high number of publications on "adipose derived stem cells" in PubMed (>10,000; among them approximately 1,000 reviews) on the one hand and the very small number of RCTs with UA-ADRCs that have been published so far on the other hand. Authorities worldwide, including the U.S. FDA, will base their judgement about safety and efficacy of tissue regeneration with UA-ADRCs primarily on the results of adequately designed and executed RCTs.
As shown here, it is indeed possible to demonstrate the safety and efficacy of treatments using UA-ADRCs at the highest possible level of evidence-based medicine, to the benefit of the countless patients worldwide who are in need of effective tissue regeneration. We will continue to work consistently to ensure that warnings from authorities about stem cell treatments may one day be a thing of the past. InGeneron, Inc. and InGeneron, GmbH had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Information and Participatory Research and Action: An Alternative to Avoid Domination and Develop Citizenship
Objective: The research presents the effectiveness of Participatory Research and Action (PRA) specifically in information science. The main statement that motivated this qualitative study was that there is no citizenship without critical awareness. This research has the objective of demonstrating that collaborative work combined with multidirectional and interactive communication in PRA can develop critical awareness and citizenship.
Methodology: The study is focused on citizens' critical awareness development. It is based on bibliographical research within the critical thinking theoretical perspective, one of the currents contemplated by subjectivist epistemology. The study draws mainly on two data sources. First, Borges' article, which presents comparative issues related to the ideologies of socialism and capitalism. Second, Tavares' PhD thesis, in which PRA techniques were tested. Additionally, findings from several studies related to social development, social rights, popular participation, as well as citizens' manipulation and domination were also collected. Data analysis was based on grounded theory principles, particularly the coding process and textual examination.
Findings: Data analysis revealed three main postulations. First, lack of critical awareness and the dissemination of fake news have both led people to be manipulated, oppressed and dominated. Second, it is essential for citizens to be educated critically to deal with information that affects their lives. Finally, it is possible to use PRA to develop critical awareness and citizenship. PRA promotes learning, broadens the notion of citizenship, and develops attitudes such as reading, analyzing and criticizing information, which constitutes an essential skill for developing critical awareness.
... responsible for solving social problems and managing resources to develop their communities (TAVARES, 2011). Demo (2002) observed that the full exercise of social rights and social duties promotes the development of a nation and is achieved through the restoration of social assets and the redistribution of income and power. Moreover, Calabrese and Burgelman (1999) add that when society privileges economic development without concern for guaranteeing equal social rights for all citizens, it ends up privileging a particular social layer, causing inequality, imbalance, misery and stagnation.
Popular Participation and Social Organization of Learning
The main statement that motivated this study was that there is no citizenship without critical awareness development. Critical awareness can be developed through both the embrace of popular participation and the creation of social organizations that facilitate the learning and empowerment of their members (CHOO, 2006). According to Cogo and Maia (2006), popular participation methods tend to observe the properties of the argumentative process, such as publicity, equal rights, and the absence of coercion and of attempts to deceive. Argyris and Schön (1996) assert that social organizations are composed of citizens and that their focus is citizens' learning, which culminates in community transformation and individual development. According to Henten (1999), globalization involves a smaller scale of state interventionism, since economic policy is more subject to international law.
Thus, the best way for nations to react to globalization is to strengthen themselves internally, in order not to become part of the exclusively consuming mass (80%) that revolves around the minority holding the wealth. Strengthening, however, demands improving the power of employment, that is, creating skills, knowledge and technology that generate innovation and wealth. Thus, Henten (1999) points out that the Welfare State in the information society is the State that regulates work and education to create the necessary conditions for citizens themselves to promote their well-being. "Societies will be more related to work and education issues, as well as how all this should happen" (p.89). One way to foster engagement and commitment is through collaborative work, which can be created in two steps. The first one consists of building physical and technological structures where citizens can come together, learning and making decisions about their community's development. The second one consists of providing technical and financial resources for citizens to implement their social solutions. These attitudes could contribute to achieving effective solutions and social development, since citizens will be committed to the entire process and the consequent results (TAVARES, 2011). In other words, community places should be created based on the learning organization concept: places where social work can be done and community members can learn. In these rooms, people will use information to create meaning, build knowledge and make decisions (CHOO, 2006).
Participatory Processes, Critical Education and Political Being
According to Demo (2002, p.130), "human beings thus need to know how to manage inequality, so that societies are at least bearable". Developing the "political being" is fundamental for conquering social rights. It represents the biological capacity to make one's own individual and collective history. The "political being" manages to interfere in one's destiny through learning and knowledge, which will promote a more egalitarian and less polarized coexistence, advancing toward institutional processes that improve living conditions. Critically conscious and knowledgeable citizens present a more reconstructive attitude towards reality, an essential characteristic today in the knowledge society. Although it is impractical to imagine complete autonomy, the citizen can, through knowledge, become more independent and contribute to the development of society in more egalitarian terms (Ibid, 2002). In a post on BBC News Brazil, Bara (2021) questions why so many young people complete their studies without developing a true critical spirit. He asserts that the critical spirit frees us from ignorance. It means that critical awareness frees us from any person, entity or thing that wants to think for us. In this sense, the social environment is full of people and entities that attack citizens to impose excessive duties on them and usurp their fundamental rights. They do this for the sole purpose of empowering and enriching themselves. In order to succeed in this task, these people and entities need to dominate and weaken the citizen, and they do so by attacking their ability to think, reflect and act. They eliminate critical awareness development from education so that they can control citizens and thus acquire power from the ignorance of their equals.
Perhaps this is the answer to Bara's (2021) question: young people complete their studies without developing a true critical spirit because these contents were removed from educational institutions in order to train weak and dependent citizens. In summary, social rights can be conquered by the citizens themselves, through their transformation into a "political being" with a critical capacity to intervene in their destiny through learning and knowledge. The establishment of a strong relationship between man and information causes this transformation. However, this transformation would probably happen if, and only if, communities are transformed into a type of social learning organization, with a physical, financial and technological framework. Information for citizenship can reduce both inequality of opportunity and social exclusion. Authors such as Calabrese and Burgelman (1999) and Demo (2002) argue that restoring citizens' social rights through access to information and knowledge is necessary. Information has the power to instruct citizens, providing conditions to argue and discuss critically. Additionally, Apple (2008) asserts that critical education plays a key role in citizens' insertion in society. Critical education enables citizens to access and use information, knowledge, and innovation. It is also important to develop moral and ethical skills to form reflective judgments and to balance personal convictions with impersonal principles of justice (DEMO, 2002).
The Cost of Popular Participation
Anderson, Warburton and Wilson (2005) pointed to the costs of public participation. Their research findings are summarized next:
- "Researchers and practitioners for continuing and enhancing public participation. Understanding of the benefits is growing in general terms, although there is significant unwillingness to quantify these benefits, and particular reluctance to 'monetarize' the benefits (assign a monetary value to them).
- There is a serious lack of data on the practical costs and benefits of participation, for a range of practical and ethical reasons.
- The lack of understanding of potential costs and benefits makes it difficult to develop a coherent hypothesis about participation overall.
- New analytical frameworks are needed. Participation is a new and cross-cutting approach that is only partly captured by existing academic and professional disciplines. A new theoretical model is needed that goes beyond the disciplines and fields within which participation began.
- Participants' perspectives are critical to defining the costs and benefits of participation. Only by including this perspective alongside that of institutional interests, and considering the wider impacts on local communities and society as a whole, can the true costs and benefits of participation be understood.
- Greater investment in assessing participation processes is required, to build a robust evidence base.
- A simple framework for capturing the actual practical costs and benefits of participation is needed, to complement the wider thinking needed around broad new analytical frameworks. In this way, simple data can begin to be captured and provide benchmarks against which future activity can be tested" (ibid, 2005, p.9).
The authors assert that the absence of robust evidence about costs and benefits does not prevent the rhetoric on public participation from continuing to grow.
They also point to the danger of poor participation and to the potentially negative implications for conventional political leadership.
Capitalism, Socialism, State and Democracy
This section also begins with a statement: both capitalism and socialism adopt actions that weaken citizens, depriving them of their social rights. Both intend to control people through the dependent relationship of workers to the State or of workers to the market. There is no true democracy in either socialism or capitalism. This statement was examined based on two studies by Borges. The first brought a comparative and complementary view of socialism and capitalism in accordance with Henry's and Kurz's studies (BORGES, 2020-I). The second brought a criticism of liberal democracy by Schmitt, compared with Kurz's studies of socialism (BORGES, 2020-II). According to Borges (2020-I), Henry exposes socialism as an ideology that can be modified, in the sense of being decomposed and then reorganized, and finally transformed into a source of prosperity and well-being. As a cause of socialism's failure, Henry pointed to its effort to organize human beings' activities and lives in a rational and regimented way, overlooking their potential for rationality and freedom. It is a mistake of conception. The countries in which socialism has been deployed present the same disastrous consequences: economic failure, authoritarian policies, elimination of dissidents, and hunger. It excludes the only true force from social dynamics, which is the worker's strength, as a source for producing wealth. At the political level, this undervaluing results in the renunciation of human rights, arbitrariness, deportation and death. Capitalism, in turn, had its deception via appropriation of community assets by large property-owners. Besides, farmers were expropriated from their smallholdings and moved to cities to be "free" workers in a system guided by the market. After being stripped of all means of production, workers started to be measured exclusively as workforce. The genius of the capitalist was to perceive that this force could be bought. That is why every "economy guided by the market" is flawed from the outset. It is flawed mostly because economic transactions do not occur between two exchange values. They actually occur between an exchange value and a usage value. The exchange value is the worker's EARNINGS, and the usage value is the worker's PRODUCTION. The latter (production) is not equivalent to the former (earnings), because the salary is not tied to production. In both capitalist and socialist productive processes, there is exploitation of the labor force. It means that it does not matter what the production value of each worker is. On the contrary, what matters is which worker would be cheaper for producing that particular product. The value of the worker has been defined by the value that is usually necessary for his subsistence, which is always lower than the production value generated by the same worker (BORGES, 2020). Kurz (BORGES, 2020-I) reached conclusions strikingly similar to Michel Henry's. According to Kurz, the end of the Cold War was interpreted as the definitive victory of capitalism, with the illusion that a golden age would arise. The market had opened, in a globalized system. However, the result was a great disappointment. Countries have had wealth losses, besides new social cuts, economic crises, civil wars, and the growth of predatory competitiveness. Kurz pointed to two post-war society systems.
The first one adopted a model that is more regulated by the state, and the second one adopted a model that is more regulated by the market. The author added that these societies did not differ in essence but rather shared common points, since both are linked to the production of goods and services. Therefore, it does not matter which model each society adopted; they have a common point: to trade goods and services and exploit the workforce. In pre-modern societies, goods production was only a marginal issue to ensure the development of societies and citizens. Money played a role as a mediator between different "objects of need". The desire or need was related to having money for acquiring goods. In turn, modern societies treat money as a good, so money is no longer a mediator to acquire goods. It becomes an asset by itself. The relationship is reversed. In pre-modern societies, acquiring objects of need was the final step in the production system, and money was only a means for acquiring these objects. Nowadays, acquiring objects of need has become a means for accumulating capital and money. The consequence of this reversal is "having" for the sake of having, and not "having" because something is needed. In addition, workers have been reduced to a mere production factor and mere consumers. As a consequence, they receive less for the product they produce and pay more for the same product when they buy it. Thereby, the modern production system has generated an excess of capital for owners. According to Borges (2020-I), Kurz classifies as naïve the typical conception of Marxist movements that the origin of all social injustices would reside primarily in the appropriation of the work of others. In opposition, he states that coercion in the capitalist system is not personal, as in a relationship between lord and servant; it is an anonymous system of coercion, which results in domination without a subject (the dominating entity might be socialism, capitalism, the State or the market, among others). Marxists insist on the themes of added value and domination. However, they devote little attention to the systemic aspects of the economy unrelated to human needs. This lack of attention led to a progressive loss of the mobilization power of left-wing movements. It was also due to the growing mismatch between their intellectual leaders' discourse and the masses' empirical perceptions of their exploitation processes. It means that leaders promise a better life, but, in citizens' perception, life becomes more difficult. They promise a fairer society, but, in citizens' perception, society is increasingly centralized and most citizens are poorer and more exploited. They promise freedom, but, in citizens' perception, oppression has increased. Criticism of liberal democracy in Carl Schmitt and Robert Kurz (BORGES, 2020-II) is a study that aims at comparing the similarities contained in the criticism of liberal democracy present in selected works of Carl Schmitt and Robert Kurz (1943-2012). Borges pointed to numerous similarities between the two authors, which are verified when they attempt to analyze the characteristics of parliamentarian liberalism in twentieth-century democracies. Schmitt accuses liberalism of hypocrisy: it has used universal principles, but only as an excuse to defend particular interests and selfish economic aims. Liberalism is not a theory of the State, even though it does not deny the State. At least one conclusion can be drawn from these findings.
It is neither about personal domination (employer versus employee) nor about added value (exploitation of work by the employer). Rather, it is an economic system detached from human needs, which appropriates domination, added value and other means of production to accumulate wealth in a globalized, centralized system detached from necessity. This system can be perpetuated under the aegis of any political ideology or economic regime and certainly generates social injustice and citizen mutilation. Therefore, such systems cause harmful effects to society and to citizenship development.
Brazil and Disinformation
Brazil is currently experiencing a war between right-wing and left-wing ideologies. What is perceived in this war is the clear interest of leaders to remain in power and exercise dominance over citizens. There is no intention to strengthen and protect citizens; on the contrary, there is a clear intention to deceive citizens with false information in order to use them as manipulated receivers. In this specific case, both socialism and capitalism have significant advantages and disadvantages regarding social development. The capitalist discourse focuses on attacking the disadvantages of socialism and reinforces its own advantages. Conversely, the socialist discourse attacks capitalism through its disadvantages and reinforces its own advantages. Thus, both have gained adherents, due to citizens' fear of the disadvantages of the other system. The war between socialism and capitalism maintains dominance over citizens through fake news or half-truths, taking the citizen out of focus, dividing Brazilians into party groups and weakening the nation as a unit of work, strength and development. There is another dichotomy in the productive systems of these two ideologies. On one hand, there are exploited citizens who need to be dominated in order to work more and earn less. To keep them dominated, employees are deprived of an education that qualifies them for work and are therefore left uninformed about their rights and duties. On the other hand, there are employers who are losing productivity and competitiveness because their workers are unmotivated and poorly educated. An education that develops technological abilities and critical awareness makes staff much stronger, smarter and harder to control. They stop being mere productive resources and become partners. They would probably start debating, claiming, and fighting for the same cause: social rights. They would probably be free to interfere in their own destiny with the assertiveness needed to really achieve autonomy, emancipation and dignity (FREIRE, 2007 and DEMO, 2002). Nonetheless, everybody gains in this process. Employers gain productivity and competitiveness, and thus increased income. Employees gain empowerment through education to manage their lives and achieve autonomy, emancipation and dignity. Communities will have local problems solved by their own members. Government and State will be more effective by focusing on national development. It is not easy to manage strong, smart and conscious citizens. They are independent, learn by themselves, and are able to share knowledge and responsibility. Owners, in this hypothetical situation, would probably have to share part of their gain, that is, part of the added value. However, owners' wealth would probably grow through increased productivity and the quality of products for sale.
Besides, owners would probably have staff far more committed to and engaged with their work, as well as more productive, creative and efficient. These results would surely increase wealth, though perhaps without an exorbitant added value for the owners. Disinformation allows leaders to practice false discourses. They may present projects that they have no intention of executing, or that bring no benefit to citizens. Leaders only do this because citizens do not know how to charge, claim or demand positive results from them. There are, for example, no serious employment policies in Brazil that provide proper work and education for citizens. On the contrary, leaders have promised to end misery not through work, but through the distribution of family grants. As a result, the government is mutilating citizens, taking away their capacity to be autonomous, and thereby creating a relationship of dependency between citizens and their leaders. Rinaldi (2019) adds that the media is one of the forces behind this problem. The circulation of fake news has become profitable for companies that use this artifice. The capitalist system instills consumption needs linked to pseudo-happiness, but it actually wants only to sell products to increase profits. In turn, the socialist system instills the idea of equality, but it actually divides citizens into groups (by gender, class, origin and sexual condition, for example), stimulating discord between them. Therefore, citizens are alienated and easily influenced by the lies that circulate more and more in the media. Rinaldi (2019) points to two measures to solve this problem. First, fake news has to be treated as a crime, liable to fines and, in more serious cases, imprisonment. Second, citizens should be instructed to access information and analyze it critically before using and disseminating it. Again, citizens need to be educated to access and use information consciously and critically. A Model of Learning Organization for Citizenship This research started from the premise that there is no citizenship without critical awareness. It has presented a series of studies evidencing the importance of inserting community members into the information and knowledge society. Finally, the study intends to present a model of learning organization for citizens. This model uses participatory research techniques, which focus on both information literacy and collaborative work. The objective is to demonstrate that participatory research contributes to citizenship development while enabling citizens to manage information and solve problems within their community in a collaborative way. Although the individual development of these skills is possible, according to the social constructivist perspective of learning (MACKERACHER, 2004), informational literacy that leads to community development is different and probably much more effective. Lloyd (2007), for example, reshapes the nature of informational literacy, moving it from the individual sphere to the community sphere as an approach embedded in the socio-cultural context. Additionally, participatory approaches to problem solving have been identified as effective forms of engaging community members. Participation leads people to develop sustainable solutions that genuinely express the needs of community members (rather than the unsustainable solutions that outside experts impose). By using participatory methods, research also acts as a mechanism that helps develop new insights and capabilities in people.
This is one of the characteristics of PRA (CHAMBERS, 2005). This study suggests a solution for the imbalance between citizens and the State or the market. By using PRA, it is possible to develop critical awareness and citizenship. PRA is based on the theoretical perspective of critical thinking. It is more concerned with the development of critical awareness than with the solution of the problem itself. Tavares (2011) defined a conceptual model of communication of information and citizenship. The model, represented in Figure 1, highlights not only these concepts but also their relationship. Figure 1. Conceptual Model of Participatory Process The conceptual model points to a process of effective communication between citizens in order to develop information literacy skills and collaborative work. It is premised that participatory research techniques combined with informational literacy are essential for communicating in an interactive and multidirectional way. Allied to this, a channel of communication between community members contributes to the critical and technical development of citizens, enabling them to achieve autonomy, emancipation and dignity. Therefore, by being encouraged to discuss and propose solutions to social problems, people experience, in the community where they live, the contribution of the participatory process to the development of the community through the critical awareness, commitment and social responsibility of its members (BROOKFIELD, 1987). Thus, an ideal model of communication of information for citizens should promote citizens' autonomy as communicators, spreaders and receivers simultaneously, not as mere receivers, so that citizens solve their own social problems. This process will develop the community through the growth, engagement and commitment of its citizens. Finally, a model of communication of information for citizens must be transactional. This means that communication is multidirectional, from many to many. Thus, the environment where citizens communicate, access and use information, and share experience and knowledge should constitute a learning environment. Skills such as teamwork, talking and listening, respecting differences, analyzing problems, and making decisions are examples of topics to explore in these learning communities. The production system in any society has shown an imbalance between dialogue and practice. Political leaders, both socialist and capitalist, have presented themselves as the citizens' protector. Both have committed to supplying citizens' needs as well as to promoting workers' dignity. The theoretical perspective of critical thinking shares many characteristics with PRA, which provided the methodological structure and influenced the interventions. PRA has been widely applied in developing countries, initially under the name Participatory Rural Appraisal. Chambers (2005) asserts that PRA is "a set of approaches, behaviors and methods to enable people to do their own planning, analysis and evaluations, implement their own actions and act themselves to inspect and control these actions" (translated by the author, ibid, p. 3). Using a participatory approach, many topics can be discussed in groups, but the discussion is strongly structured by participatory rules. Thus, the key aspect of PRA is the development of capabilities, i.e., skills and attitudes. Cohen and Uphoff (1980) advocate that governments need to adopt popular participation as a guideline and basis for development.
They argue that participation is a basic necessity, and give some examples: the United Nations Economic and Social Council has advised countries to include proposals for popular participation in their government programs. Also, the World Employment Conference has an action aimed at "people's participation in the decision-making processes that affect them, discussing their choices and planning" (translated by the author, ibid, p. 18). Popular participation in the institutions that govern their lives is a basic human right, and it is essential for realigning political power in favor of citizens as well as for economic and social development (translated by the author, ibid, p. 18). Cornwall (1996) notes that PRA can mobilize the population to support change. In addition, popular participation, as a government policy, promotes efficiency and effectiveness. Popular participation encourages voluntary work, which improves the quality and quantity of public services available to the population without burdening the government's budget. Through popular participation, citizens become knowledgeable partners of the government. As a result, the performance of public administration will be much better and more effective. Additionally, government projects would probably succeed when community members are involved, because there is engagement and commitment, which is vital to the success of any enterprise (KURFISS, 1997). Many organizations agree that there is no sustainable development without popular participation. Participation is a centerpiece of development, according to Kumar (2008). PRA carries the idea that participation promotes greater democracy, equality and social justice, in addition to promoting citizens' autonomy, emancipation and dignity. Findings from PRA Tavares's doctoral thesis (2011) was developed from the use of PRA. It aimed to show that PRA is a proper methodology for developing citizenship. Two concepts were used: informational literacy and collaborative work. Data were gathered in a small community in Candangolândia/Brasília/Brazil. The work was conducted in periodic meetings so that all participants could define a common social problem and then analyze, discuss, access and use information, negotiate and seek consensus in order finally to point out solutions. All meetings began and ended with an evaluation of the participatory process itself. Regarding the evaluation, it was evidenced that the work improved the participants' critical sense. In the beginning, they had the impression that evaluating was synonymous with criticizing pejoratively, but throughout the process they understood the importance of evaluating in order to improve. According to them, "criticism can build much more than destroy". The most important thing, however, was that the participants began to evaluate themselves, in a very conscious way, when they criticized their own abilities to handle technological resources of information. They found that younger people who needed to work prematurely interrupted their studies before completing them, and so missed the opportunity to enter the information society. In turn, a small number of illiterate participants had difficulty in reading and understanding information. They all acknowledged that they do not take time to engage in the community, although they understood that this was important. Regarding the analysis of the social problem itself, the researcher showed that the solutions proposed by the participants indicate the development of an active sense of citizenship, even if an incipient one.
Participants were able to identify a social problem in their community, as well as to access and use information to analyze this problem. They could also perceive what their new roles would be, being guided either to implement projects or to request the implementation of projects to solve their local problems. The importance of this research was to create a procedure that enables people to handle information critically in an integrated and collaborative manner. By using these participatory techniques, one of the aspects of citizenship development could be verified, namely citizens' involvement with the community, their commitment to solving the problem, and the recognition of their rights and duties. Steps to Make PRA Real Theoretical studies on participatory informational literacy point to six participatory processes, which are carried out in workgroups (TAVARES, 2011; CHAMBERS, 2005). First, citizens select one of the public needs of their community, such as a social problem that needs to be solved or a lack of knowledge that needs to be filled. Second, they identify their information needs related to the selected need. Third, they seek information from information repositories. Fourth, the information is analyzed and criticized in order to understand its content. Then, citizens use it to make decisions and solve social problems. Finally, citizens feed information repositories with new information produced during their PRA practice. Making PRA real is not trivial. In dominated societies, it is essential to make a series of environmental changes in order to transform the places where people think, reflect and act. This is a large structural change that would probably have to be planned and implemented slowly. As a starting point, some steps are described in the sequence. First, public authorities need to define public places inside each local community where small groups of citizens can meet; classrooms, churches or libraries could serve this purpose, for example. Second, it is necessary to build a physical and logistical infrastructure where citizens can have pleasant and comfortable meetings, that is, a place with chairs, tables, a TV, internet access, magazines, books, computers, a blackboard, paper, pens and pencils, and everything else necessary for meetings. Later on, a technical facilitator needs to be provided. This person acts only to help people work together, promoting interaction, stimulating discussion and doing everything else necessary to create an environment of exchange, interaction, consensus and decision-making. The facilitator needs to be an organizer rather than an advisor, without any authority or influence over the participants' discussions or decisions. Their role is restricted to organizing meetings and motivating people to work together, discussing problems and situations inside their community. The solution-making process has to work inside communities and with the full participation of citizens. The facilitator then needs to invite people to the meetings and let citizens get together for as long as they want, discussing and reaching consensus about actions and solutions. At the end of this process, a formal document has to be produced and sent to local public authorities. This document should have two schedules: the first converts the decisions into projects to be carried out by community members; the second directs those projects to the public and private sectors in search of financial resources.
Last, but not least, authorities who decide to adopt these participatory techniques inside local communities need to begin with a large, general project. It should be concerned with structuring critical awareness education for citizens, a training platform for facilitators and government publicity for all stakeholders involved, so that everyone is brought together in the same direction, pursuing the same aim and implementing the same project. People, black or white, poor or rich, men or women, young or elderly, are all citizens of their societies. Thus, citizenship development would probably need to be claimed by all of them, together, regardless of their conditions or singularities. Above other issues, citizens need to defend this common cause, which is emancipation, autonomy and dignity for all. This happens only when citizens start to be conscious of how their society works and of what they can do together to solve social problems and make social decisions about issues that affect their lives. The current cycle of information literacy is defined by several authors in four steps, described as the survey of needs, search, access and use of information (DEMO, 2002; CHOO, 2006; BELKIN and VICKERY, 1989; BATES, 2002; HEPWORTH and WALTON, 2009; CHAMBERS, 2005, among several others). This article, however, proposes a new cycle in which the accessed information goes through filters of understanding, analysis and criticism of its content. Only after that is the information used in order to produce effective and real knowledge for citizens. That is to say, the new cycle is described in five steps. First, identify and survey information needs. Second, search for information in information repositories. Third, access the information. Fourth, inside a social organization of learning, read, analyze and criticize the information together, in order to understand its content. Only after that, the fifth step is to use the information in order to solve problems and make decisions. Conclusion This research had the objective of demonstrating that collaborative work combined with multidirectional and interactive communication in PRA can develop critical awareness and citizenship. The findings point to citizens working together in a social organization of learning, where citizens can learn how to search for information in reliable information repositories. Additionally, they can access and use the information to understand subjects, share knowledge, discuss situations and make decisions about issues that affect their lives. It was evidenced that participatory investigation using multidirectional and interactive communication can contribute to developing skills related to information literacy and collaborative work, which together promote citizenship development. Participatory intervention research has been a revolutionary methodology in social research, exploring reality in a unique way and giving both the researcher and the researched the conditions to understand reality together, creating synergy and opening new perspectives. This methodology allows participants to work with the researcher, design the project, collect and analyze data and use the results for their own benefit. They stay together, in small groups, discussing and sharing experiences, suggesting solutions to social situations and problems, and showing decision-makers the best course of action to follow.
Conscious citizens, regardless of ideology, would probably analyze information and fake news assertively and speak out against this kind of deceptive attitude, suggesting and demanding severe punishment for public figures who act in this wrong way. They would also act against those who manipulate information to confuse people, instigate conflicts and benefit from it. Finally, critical citizens are smarter and more productive. They make better decisions about their work and their quality of life. They learn better, are efficient and generate more wealth. Conscientious citizens demand more from institutions and the State, but they also make institutions and the State more efficient and effective. In short, critically conscious citizens promote the development of the society in which they live. Participatory intervention research is a learning tool. It enables people to learn through information literacy and collaborative work, which contributes to everyone's involvement in the implementation of solutions. It can surely promote changes that will bring benefits for all. This research did not receive specific funding from any public, commercial or non-profit agency.
Intelligent Image Synthesis for Accurate Retinal Diagnosis: Ophthalmology is a core medical field that is of interest to many. Retinal examination is a commonly performed diagnostic procedure that can be used to inspect the interior of the eye and screen for any pathological symptoms. Although various types of eye examinations exist, there are many cases where it is difficult to identify the retinal condition of the patient accurately because the test image resolution is very low owing to the utilization of simple methods. In this paper, we propose an image synthesis approach that reconstructs the vessel image based on past retinal image data using the multilayer perceptron concept with artificial neural networks. The approach proposed in this study can convert vessel images to vessel-centered images with clearer identification, even for low-resolution retinal images. To verify the proposed approach, we determined whether high-resolution vessel images could be extracted from low-resolution images through a statistical analysis using high- and low-resolution images extracted from the same patient. HRF has higher image quality than DRIVE in general. Although HRF does not provide a blood vessel mask, it contains both high- and low-quality retinal images, making it useful for the evaluation of the proposed approach. HRF consists of 18 high- and low-quality retinal image pairs and, because HRF does not provide any masks, it is more reasonable for it to be used as a testing set. As the performance of the proposed method might change with the accuracy of the similar image representing a patient's retina image, it is better to have more images in the dataset. In this experiment, we assume that the retinal images of all people with normal retinas are similar. Introduction As the population ages, ophthalmology has become a core medical field. Ophthalmology not only treats severe diseases such as glaucoma but also non-disease issues such as vision correction. Minor diseases such as conjunctivitis can be diagnosed visually or through simple examinations; however, severe diseases that may lead to vision loss cannot be diagnosed accurately without a detailed examination performed by a physician. An example of an ophthalmology examination is the retinal or fundus examination, in which a physician checks the interior of the eye through the pupil, including the vitreous, retina, retinal blood vessels, optic disc, and macula. The physician can then make a diagnosis, such as glaucoma or diabetic retinopathy, based on the examination results and their own expert knowledge. In addition to eye diseases, Alzheimer's disease can also be diagnosed through a retinal examination [1][2][3][4]. Several techniques have been developed in the field of ophthalmology for performing retinal examinations. Ophthalmoscopy is a retinal examination method that is widely used today. Typical ophthalmoscopy techniques include direct ophthalmoscopy, indirect ophthalmoscopy, and slit lamp retinal examination. Direct ophthalmoscopy is an examination method that requires the use of a direct ophthalmoscope, which is portable, low-cost, and relatively easy to perform. It is capable of 15 times the magnification of the naked eye [5]. When using the indirect ophthalmoscopy method, the physician uses a hand-held lens and headband with a light attached. Indirect ophthalmoscopy requires more expensive equipment and has a lower magnification.
However, it provides a wider viewing angle than that of direct ophthalmoscopy and offers a better view of the eye's interior when the image is blurry • This technique is different from those of previous studies because a mask function is used that selects only the good pixels from the original and similar images. • A further difference is that gray level co-occurrence matrix (GLCM) Haralick textures are used to retrieve the images that are most similar to the input retinal images. • A statistical analysis is used to clearly demonstrate how different the original images are from the images created in this study. • By synthesizing high-quality retinal images from low-quality ones, the accuracy of retinal disease diagnoses is improved, and the cost of obtaining high-quality retinal images using existing methods is reduced. The rest of this study is organized as follows. Section 2 discusses related work. Section 3 describes background knowledge. Section 4 describes the proposed method in detail. Section 5 evaluates the proposed method. Section 6 presents a discussion regarding this study. Finally, Section 7 presents this study's conclusions and future research. Challenges for Synthesized Retinal Images Retinal images are used to monitor abnormal symptoms or diseases associated with eyes and are widely utilized for a diagnostic purpose, as they often contain much disease-related information [10]. Since diagnosing the symptoms associated with retina is not an easy task for ophthalmologists, those images are highly significant as base data for making an accurate diagnosis [11,12]. Moreover, a number of automatic retina examination models such as 'retinal artery and vein classification' [13] or 'glaucoma detection' [14] using the retinal images have been developed. Recently, DeepMind (Google) introduced a new model that can diagnose 50-plus eye diseases including three typical ones like glaucoma, diabetic retinopathy, and macular degeneration by using a deep learning architecture [15]. These models are utilizing a retinal-image database containing many annotations and the accuracy increases when more data is accumulated. Retinal images are usually captured with an optical coherence tomography (OCT) scanner but since their quality is often unsatisfactory due to environmental conditions such as an uneven illumination, refraction/reflection or incorrect focus/blurring resulting from corneal clouding, or cataract or vitreous hemorrhage showing low contrast, it is quite difficult to obtain a large enough number of diagnostically valid images [16]. Such low-quality images make it hard for the ophthalmologists to make a clear diagnosis or reduce the performance of automatic retina examination models. Thus, improvement of the visibility of the anatomical structure through image synthesis along with the acquisition of a variety of retinal image patterns have been required [16][17][18][19][20][21][22]. Further, there was a case of conducting a Kaggle competition in 2015 to comprehensively and automatically identify diabetic retinopathy in a retinal image [23], and as a result, all the top-ranking winners had adopted a learning-based method relying on a large training set. This served as a momentum to reconfirm that a large amount of clear synthesized image data and diversely patterned retinal image data are essential for medical examinations. Research on the improvement and/or synthesis of retinal images are continuing even today. 
Some of the image-processing methods are being used to improve the contrast or luminosity for the former, whereas a deep-learning method such as a generative adversarial network (GAN) is being adopted for the latter. Xiong et al. [24] proposed an enhancement model based on an image formation model of scattering which consisted of a Mahalanobis distance discrimination method [17] and a gray-scale global spatial entropy-based contrast improvement technique. The authors claimed that it was the first technique that could solve the problems associated with illumination, contrast, and color preservation in a retinal image simultaneously. Mitra et al. [18] pointed out that the cause of a low-quality retinal image was the non-uniform illumination resulting from a cataract and proposed a retinal image improvement method for its diagnosis, which was to reduce blurring and increase the intensity by applying the histogram intensity equalization to a modified hue saturation intensity (HSI) color space; at the same time, the colors were compensated with Min/Max color correction. Zhou et al. [19] used a luminance gain matrix obtained by the gamma correction performed for the individual channels in an HSV color space for the control of the luminosity of retinal images. In addition, it was possible to improve contrast without damaging the naturalness of the image by proposing a contrast-limited adaptive histogram equalization (CLAHE) technique. The proposed method showed an improvement in quantitative numerical values in an experiment using a 961 low-quality retinal image data set. Gupta et al. [20,21] improved the luminosity and contrast of the images by proposing an adaptive gamma correction (AGC) method [20] along with a quantile-based histogram equalization method [21], which were tested for the Messidor database [25]. The result showed that they were useful in the diagnosis by the ophthalmologists or achieved a sufficient level of improvement to be used as a preprocessing step for the automated retinal analysis systems. Meanwhile, Cao improved the contrast in the retinal structure by using a low-pass filter (LPF) and the α-rooting in an attempt to make the images clearer, and at the same time, the gray scale [22] was used to restore colors. Additionally, the performance of the proposed method was compared with the four aforementioned methods [17][18][19]21] and the result statistically proved that the method was relatively superior in terms of visual and quantitative evaluations (i.e., contrast enhancement measurement, color difference, and overall quality). Zhao et al. [26] performed a research on synthesizing the retinal images after being inspired by the development of GAN which has come into the spotlight recently and proposed the original GAN-based synthesis model Tub-sGAN. The retinal images created from Tub-sGAN were quite similar to a visual shape of a training image so that they performed well for a small-scale training set. The authors also mentioned that the synthesized images can be used as additional data. Tub-sGAN inspired many research works associated with image synthesis. Niu et al. [27] pointed out that even though much medical evidence was needed to support the reliability of the prediction made through machine learning, they were not sufficient in reality, and thereby, a retinal image synthesis model based on Koch's postulates and a convolutional neural network was proposed. 
This model received excellent scores in the three performance evaluations (i.e., the realness of fundus/lesion images and severity of diabetic retinopathy) conducted by five certified ophthalmic professionals. Zhou et al. [16] also emphasized the difficulty in collecting training data for the optimization of an f-level diabetic retinopathy (DR) grading model and proposed a DR-GAN to synthesize high-resolution fundus images using the EyePACS dataset of Kaggle [23]. The proposed DR-GAN model exhibits a superior performance compared to the Tub-sGan [26] model in the independent evaluations (i.e., qualitative and quantitative evaluations) conducted for the synthesized images by three ophthalmologists. An additional test was conducted to determine whether the increased dataset due to synthesized images had mitigated the distribution at each level and gave a positive effect on the training model. As a result, it was possible to confirm that the grading accuracy had been increased a little, as much as 1.75%. Deep Learning for Image Processing Most of the newest retina image synthesis studies are based on artificial neural networks [28][29][30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46]. The generative adversarial networks (GANs) model is a learning model that improves learning accuracy by using two models-a discriminative model with supervised learning and a generative model with unsupervised learning-and having them compete with each other. In studies that use GANs, mapping of new retina images is learned from binary images that depict vessel trees by using two vessel segmentation methods to couple actual eye images with each vessel tree. After this, the GANs model is used to perform a synthesis. In a quantitative quality analysis of synthetic retina images that were obtained using this technique, it was found that the generated images maintained a high percentage in the quality of the actual image data set. Another synthesis model called an auto-encoder aims to improve learning accuracy by reducing the dimensions of the data. The reduction of the data's dimensions is called encoding. The auto encoder is a model that finds the most efficient encoding for the input data. A study that used an auto-encoder resolved retina color image synthesis problems, and it suggested that a new data point between two retina images can be interpolated smoothly. The visual and quantitative results showed that the synthesized images were considerably different from the training set images, but they were anatomically consistent and had reasonable visual quality. However, because there is merely a concept of data generation in the auto-encoder, it is difficult for the auto-encoder to generate better quality data than the GANs model. The GANs model is not trained because it is difficult for generators to create significant data at the beginning of training [28]. A convolutional neural network (CNN) is an artificial neural network model that imitates the structure of the human optic nerve. Feature maps are extracted from multiple convolutional layers, and their dimensions are reduced by subsampling to simplify the image. Then, the processing results are connected to the final layer via the fully connected layer to classify images. Studies that used CNN [42][43][44][45][46] addressed vessel segmentation as a boundary detection problem and used CNNs to generate vessel maps. 
Vessel maps separate vessels from the background in areas with insufficient contrast and are useful for pathological areas in fundus images. Methods that used CNN achieved performances comparable to the recent studies in the DRIVE and STARE data sets. In a study [44] that employed a CNN and the Random Forest technique together, the proposed method proved that features can be automatically learned and patterns can be predicted in raw images by combining the advantages of feature learning and traditional classifiers using the aforementioned DRIVE and STARE data sets. There was also a study [46] that aimed to increase the efficiency of CNN medical image segmentation, as it is the deep learning method that is most compatible with image processing. Studies that do not use artificial neural networks [47][48][49][50] use computer vision techniques. Marrugo et al. [47] proposed an approach based on multi-channel blind deconvolution. This approach performs pre-processing and estimates deconvolution to detect structural changes. In the results of this study, images that were degraded by blurriness and non-uniform illumination were significantly restored to the original retina images. Nguyen et al. [48] proposed an effective method that automatically extracts vessels from color retina images. The proposed method is based on the fact that line detectors can be created at various scales by changing the length of the basic line detector. To maintain robustness and remove the shortcomings of each individual line detector, line responses at various scales were combined linearly, and a final segmented image was generated for each retina image. The proposed method achieved high accuracy (measurements for evaluating accuracy in areas around the line) compared to other methods, and it maintained comparable accuracy. Visual observations of the segmented results show that the proposed method produced accurate segmentation in the central reflex vessel, and close vessels were separated well. Vessel width measurements that were obtained using the divided images calculated by the proposed method from the dataset are very accurate and close to the measurements produced by experts. Dias et al. [49] introduced a retina image quality assessment method that is based on hue, focus, contrast, and illumination. The proposed method produced effective image quality assessments by quantifying image noise and resolution sensitivity. Studies that do not employ artificial neural networks show good performance for their experiment environments, but they can only be used with robust data, and they are less able to handle a variety of situations compared to methods that use artificial neural networks. Generally, existing studies have focused on restoring image resolution. This study generates clear images by synthesizing parts of high-resolution images. Gray Level Co-Occurrence Matrix A gray level co-occurrence matrix (GLCM) [51][52][53][54], also known as a gray level spatial dependence matrix, is the best-known technique for analyzing image texture. GLCM is a matrix that counts how often different combinations of pixel brightness values (gray levels) occur in images, and it extracts a second-order statistical texture. Figure 1 shows an example of a GLCM. Assuming that a 4 × 4 image is the gray level information of the original image. If there are 4 known stages as stages 0-3, the GLCM is created as a 4 × 4 matrix. The GLCM in the Figure 1 was created by grouping the values of the original image horizontally in twos. 
For example, the original image's (4 × 4 image in Figure 1) (2(row), 2(column)) and (2(row), 3(column)) values are 2 and 1, respectively, and the combination of (2, 1) is just one case in the 4 × 4 image. Thus, this is converted to GLCM as 1 value in (2,1). The value of the GLCM (0, 0) is 2, and this is because there are 2 pairs of (0, 0) in the original image. One characteristic of the GLCM is that the sum of the GLCM found by the same method is always the same. The sum of the values of the GLCM in the figure is 12, as there is a total of 12 pairs in the original image. Therefore, even if the original image's values change, the sum of the GLCM's values is fixed as the GLCM is created by grouping pairs. The figure's normalized GLCM is normalized by dividing each GLCM value by 12, which is the sum. Haralick texture [55][56][57][58] is a representative value that is expressed as a single real number like an average or a determinant obtained based on the GLCM. The GLCM Haralick texture was created from the need to use flat images for extracting features from three-dimensional elements that cannot be touched or directly extracted in part. As such, it is effective for obtaining features from retinas that cannot be directly touched or partially extracted. Retinal Image Retina images are digital images of the interior of the eye, specifically the rear portion. Retina images are required to show the retina, optic disk, and blood vessels, as shown in Figure 2. Figure 2 shows a retina image from the digital retinal images for vessel extraction (DRIVE) [59] dataset, which is often used in retina-related studies. Many studies have endeavored to automate the extraction of vessels from retina images and to improve the efficiency and accuracy of retinal diagnoses [43]. If a retinal image's quality is low, the state of the patient's retina cannot be sufficiently reflected. For example, in the case of diabetic retinopathy, which is the world's most common diabetic eye disease and a major cause of blindness, one of the criteria for diagnosing the progress of the disease is angiogenesis, in which tiny new vessels grow due to vessel occlusion. If the image quality is low, very fine angiogenesis cannot be seen.
Figure 3 shows angiogenesis that has occurred due to diabetic retinopathy. When the resolution of a retinal examination's output images is poor, considerable money may be spent on readily requesting images for reconfirmation. When a patient with expert knowledge of the eyes has doubts about the physician's diagnosis regarding their retina images, clear retina images can act as a basis for a re-diagnosis. They may also be helpful in future studies for automating retina image-based diagnostics. However, if the patient does not readily receive high-quality images, the value of the retinal examination may be negligible, and the examination results may be questionable. The techniques that are often used for automating retinal vessel segmentation are based on machine learning [28][29][30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46]. This is because replacing the lost vessel portion of an original retina image with another person's vessels lacks ethical credibility. On the other hand, machine learning is credible because it learns patterns in which people's vessels are spread. The focus of most recent studies is on deep learning-based supervised learning, and this is the same for retinal vessel segmentation automation. To learn vessel images, masks must be prepared in advance for the vessel portion.
Figure 4 shows the manually prepared vessel portion of Figure 2. Moreover, the areas of the vessels must be specified when the dataset's retina images are in a different environment (minor changes in position that occur when the image is captured). Figure 5 shows the position mask for Figure 2. Method To perform an accurate retinal diagnosis, it is necessary to clearly see the retinal vessels in the images obtained by a retinal examination. Figure 6 illustrates the overall approach. Let us assume that low-quality retinal images (A) have been obtained from a simple retinal examination, such as direct ophthalmoscopy. Low-quality images usually contain blurry portions with clear portions in several parts. Therefore, vessel segmentation is performed only in the clear portions. The low-quality image (A) is used to retrieve the most similar image (B) in the dataset that includes high-quality (clear) vessels, and vessel segmentation is also performed on the retrieved clear image (B). Note that vessel segmentation should only be performed on the clear portions of B (the blurry portions of A). Finally, the identified vessel images are combined to generate a high-resolution image for an accurate retinal diagnosis.
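As a rough illustration of this three-stage flow (and not the authors' implementation), the Python sketch below shows how the stages could be chained; find_most_similar, segment_vessels and synthesize are hypothetical placeholder names for the stages detailed in the following subsections.

```python
# Hypothetical skeleton of the three-stage flow described above.
# The three helpers are placeholders; concrete sketches of each stage
# are given in the following subsections.

def enhance_retinal_image(low_quality_img, high_quality_dataset,
                          find_most_similar, segment_vessels, synthesize):
    """Produce one synthesized vessel image from a low-quality input."""
    # Stage 1: retrieve the most similar high-quality image from the dataset.
    similar_img = find_most_similar(low_quality_img, high_quality_dataset)
    # Stage 2: segment vessels in both the input and the retrieved image.
    input_vessels = segment_vessels(low_quality_img)
    similar_vessels = segment_vessels(similar_img)
    # Stage 3: combine the two vessel images into a single clearer result.
    return synthesize(input_vessels, similar_vessels)
```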
Algorithm 1 shows the procedure of the proposed technique. As inputs of the algorithm, R p is a set of low-quality images in which the user expects to increase quality, and R c is a set of high-quality retinal image data collected to improve the quality of R p. Lines 4-10 show the process of finding an image similar to a low-quality retinal image in a data collection. Haralick similarity is used to find the image with the highest similarity to the input low-quality retinal image. The searched similar image is a source of good pixels to replace bad pixels (pixels that cause deterioration in quality) of the original image. This process is covered in detail in Section 4.1. Lines 11-12 show the process of dividing the blood vessel from the retinal image. The blood vessel is divided based on the learned MLP using the DRIVE dataset. This process is covered in detail in Section 4.2. Lines 13-26 show the process of creating a synthesized image by applying a mask. A threshold is used to create a mask with the criteria of noise to be properly removed. In our approach, the Otsu method [60][61][62][63][64][65][66][67][68][69] is applied to dynamically select the threshold. After the mask generation is completed, a pixel value is determined for each pixel of the mask. If the pixel value is 0, the pixel of the original image is fetched, and if 1, the pixel of the similar image is fetched. The final synthesized results are returned as a set. This process is covered in detail in Section 4.3. The proposed approach consists of three stages: searching for similar images, segmenting retinal vessels, and combining images. Each stage is described in detail in the following subsections. Figure 6. Overall approach. Searching for Similar Images For the proposed technique, the degraded (low-quality) retinal images are inputted, and images are found that are the most similar to the input images in the pre-constructed retinal image data set, which only contains high-quality retinal images. The criteria for finding similar images are based on a logistic regression analysis [70][71][72][73] of Haralick textures, which are the GLCM matrix calculation results. These are used because the appearances of most retinas are similar, and therefore, finding features is difficult, as different patient retinal images cannot be distinguished without expert ophthalmology knowledge. GLCM-based Haralick textures are useful for determining similarities between retinal images because they are based on pixel changes. The proposed method uses a Haralick texture logistic regression process to find the similarity between the bad and dataset images. It then selects the image with the maximum similarity with the next input image.
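A minimal sketch of this retrieval step is given below. It computes a horizontal-pair GLCM and a few Haralick-style statistics with NumPy, then picks the dataset image whose features are closest to those of the input. The paper's logistic-regression scoring of similarity is replaced here by a simple feature-distance criterion, so this is an approximation of the described procedure rather than the authors' exact method.

```python
import numpy as np

def glcm(gray, levels=8):
    """Normalized GLCM over horizontal neighbor pairs (as in the Figure 1 example)."""
    scale = max(float(gray.max()), 1.0)
    q = (gray.astype(float) / scale * (levels - 1)).astype(int)   # quantize gray levels
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):         # count horizontal pairs
        m[a, b] += 1
    return m / m.sum()

def haralick_features(p):
    """A few classic GLCM (Haralick-style) texture statistics."""
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    entropy = -(p[p > 0] * np.log(p[p > 0])).sum()
    return np.array([contrast, energy, homogeneity, entropy])

def most_similar(low_quality, dataset):
    """Return the dataset image whose texture features are closest to the input's."""
    target = haralick_features(glcm(low_quality))
    distances = [np.linalg.norm(target - haralick_features(glcm(img))) for img in dataset]
    return dataset[int(np.argmin(distances))]
```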
Segmenting Retinal Vessels During this stage, vessel segmentation is performed on the low-quality (low-resolution) retina and similar images. MLP, which uses an artificial neural network algorithm based on supervised learning, is employed for the vessel segmentation. Since MLP is a universal approximator, all functions can be represented if the MLP model consists of a large enough number of neurons and layers. It is also better to find an accurate weight value for every pixel instead of extracting partial features, as the retina images of people having an average retina are not that different. MLP receives the image pixels as the input. Each input is multiplied by the weight of the edges and added. Next, the activation function is used on the results to determine how much they affect the next node, and this is transferred to the next layer. This is summarized by Equation (1), y = f(Σ w·x), where x is the input image pixels, w is the weight assigned to each edge, f is the activation function, and y is the node output. Figure 7 shows a simple example of the MLP method used in this study. The high-resolution fundus (HRF) [74] image dataset contains higher-quality retinal images compared to those of DRIVE, and provides both low- and high-quality retinal images. Figure 8 shows a low-quality retinal image.
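To make the per-node computation of Equation (1) concrete, the short sketch below applies it layer by layer to a flattened pixel neighborhood. The sigmoid activation, the patch size and the layer sizes are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(pixels, layer_weights):
    """Apply Equation (1) node by node: each layer computes y = f(sum_i w_i * x_i).
    Bias terms are omitted for brevity."""
    x = np.asarray(pixels, dtype=float)
    for W in layer_weights:      # one weight matrix per layer
        x = sigmoid(W @ x)       # weight the inputs, sum them, apply the activation
    return x

# Illustrative use on a flattened 5x5 pixel neighborhood (sizes are arbitrary):
rng = np.random.default_rng(0)
patch = rng.random(25)
weights = [rng.normal(size=(16, 25)), rng.normal(size=(1, 16))]
vessel_score = mlp_forward(patch, weights)   # value in (0, 1), e.g., a vessel probability
```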
The bottom part of the image is unclear owing to white noise caused by environmental factors. Figure 9 shows a retinal vessel image created by performing vessel segmentation on Figure 8. The white noise seen in Figure 8 also affects vessel segmentation. Figure 10 is a high-quality retinal image and the white noise observed in Figure 8 has disappeared. The bottom part of the vessel stem is clearly visible, which did not appear in the original image. Figure 11 shows a retinal vessel image created by performing vessel segmentation on Figure 10. Unlike Figure 9, the bottom part of the vessel stem is shown clearly. Synthesizing Images The vessel images obtained from the low-quality (low-resolution) retinal images do not sufficiently show the patient status. Thus, the proposed method can be used to synthesize the original low-quality retinal image with the image that is the most similar to it in the high-resolution vessel image dataset. The mask concept generally used in computer vision techniques is applied to synthesize the low-quality input and similar images. The proposed method must satisfy three constraints to obtain a safe synthesis. The constraints are as follows. 1. Removal of noise mixed with the bad image. 2. Express damaged vessels owing to low quality. 3. Does not damage remaining vessels in the bad image.
To synthesize the two images, a mask is created by setting a threshold value for the gray levels of both the original low-quality and similar images. Areas with gray levels below and above the threshold are set to 0 and 1, respectively, to create a binary image in advance. The binary image is used as the input for the mask equation, and the pixels of the grayscale images are used to create the new synthetic image. The mask pixel (i, j) equation is as follows: a pixel of the original (low-quality) image is inserted into the synthesized image when the mask is set to 0, whereas a pixel of the similar image is inserted into the synthesized image when the mask is set to 1.

Constraint (1) assumes that a pixel (a, b) of the low-quality image is noise. If this is so, then the mask is set to 1, and the proposed method uses the similar image pixel. As the original image pixel (a, b) is noise and the similar image pixel (a, b) is clean, the proposed method satisfies constraint (1). Constraint (2) assumes that the original image pixel (a, b) is damaged and is part of a vessel that is not shown. If the dataset sufficiently guarantees that the similar image is similar to the low-quality image (i.e., the low-quality image is in a favorable state), the similar image pixel (a, b) is set to 1, and therefore, the mask is set to 1. As such, the proposed method satisfies constraint (2). Constraint (3) assumes that pixel (a, b) is part of a properly depicted vessel, and that the similar image pixel (a, b) is also part of the vessel; the similar pixel is therefore safe to use, and the proposed method satisfies constraint (3). Finally, if pixel (a, b) is part of a blank area without noise in the low-quality image, then pixel (a, b) of the similar image is also part of a blank area without noise, and the mask is set to 0.

Once the mask-generating process for the pair consisting of an image and a similar image has been completed, the proposed technique is employed to create a restored (synthesized) image based on the mask. The pixel value at (i, j) of the restored image is imported from either the bad or the similar image: if the mask value at (i, j) is 1, the restored image imports the similar (good) image value at (i, j), whereas if the mask value at (i, j) is 0, the restored image imports the bad (original) image value at (i, j). That is, the restored image imports the similar image's pixels for the main branches and the noise, and imports the original image's pixels for the sub-branches, as the sub-branches are excluded by the threshold function. The final restored (synthesized) image is shown in Figure 12.
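The exact mask equation did not survive extraction, but the rule described above (pixels above the gray-level threshold map to 1; the synthesized image takes the similar image's pixel where the mask is 1 and the original pixel where it is 0) can be sketched as follows. Treating the mask as the union of the two thresholded binary images is an assumption made here for illustration; the threshold, array names, and the OR rule are not taken verbatim from the paper.

```python
import numpy as np

def binarize(img, threshold):
    # Gray levels above the threshold become 1, the rest 0,
    # as described for the mask construction.
    return (img > threshold).astype(np.uint8)

def synthesize(original, similar, threshold):
    """Combine a low-quality vessel image with its retrieved similar image.

    Assumption for this sketch: the mask is 1 wherever either binary image
    is 1 (main vessel branches in the similar image, or noise in the
    original); elsewhere the original pixel is kept.
    """
    mask = binarize(original, threshold) | binarize(similar, threshold)
    restored = np.where(mask == 1, similar, original)
    return restored, mask

# Toy 4x4 grayscale arrays standing in for segmented vessel maps.
original = np.array([[  0,  10, 200,   0],
                     [  0, 180, 190,   0],
                     [ 90,   0,   0,   0],   # 90: noise in the bad image
                     [  0,   0,   0,   0]], dtype=np.uint8)
similar  = np.array([[  0,   0, 210,   0],
                     [  0, 185, 195,   0],
                     [  0,   0, 170,   0],   # 170: vessel missing in the bad image
                     [  0,   0,   0,   0]], dtype=np.uint8)

restored, mask = synthesize(original, similar, threshold=64)
print(mask)
print(restored)
```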
Evaluation

To evaluate the proposed method, we collected retinal images from the DRIVE and HRF datasets (these data sets are available as presented in the Supplementary Materials). DRIVE is a retinal image dataset that has been used in many retinal image classification and vessel segmentation studies. DRIVE consists of 140 retinal images. Since DRIVE provides two manual vessel masks per retina image, the user does not need to make blood vessel masks separately and can choose, for each image, the mask that produces the more accurate result of the two. Thus, it is more reasonable to use DRIVE for the training set, and using DRIVE makes this study comparable to other studies. HRF has a higher image quality than DRIVE in general. Although HRF does not provide a blood vessel mask, it contains both high- and low-quality retinal images, making it useful for the evaluation of the proposed approach. HRF consists of 18 high- and low-quality retinal image pairs, and because HRF does not provide any masks, it is more reasonable to use it for the testing set. As the performance of the proposed method may change with how accurately the similar image represents a patient's retina image, it is better to have more images in the dataset.
In this experiment, we assume that the retinal images of all people with normal retinas are similar. It is known that there are no specific values for the number of nodes or layers that are appropriate for all situations or datasets. Thus, we needed to find appropriate numbers of both nodes and layers, and we found them by manually constructing MLP models with different numbers of nodes and layers. As a result, we found that 3 hidden layers and 1.8-2 times the number of nodes of the input layer were sufficient. As the noise removal performance of the proposed technique can be affected by the threshold values used when defining a mask, we evaluated the noise removal performance at each threshold value to achieve the best possible noise removal. Each pixel in an input image converted to gray scale has 256 levels, with the color becoming darker as the value approaches 0 and brighter as it approaches 255. For the threshold function, all pixels with a gray-scale level exceeding the parameter value were set to 255, whereas the others were set to 0. Figure 13 shows the masks generated from the six threshold types for the image in Figure 9. Since the area indicated as 1 (the yellow section) in a mask imports the pixels of the similar image, a mask that represents the noise area (i.e., the yellow section) more clearly is deemed a good mask. Figure 13 shows that the areas distinguished by the mask carry no significant meaning when the threshold value is 8, as most of the pixel values are 1. Although this phenomenon decreased when the threshold value was increased from 16 to 32, areas that were excessively identified as noise remained at the top of the image. The best performance was seen when the threshold value was 64, but it became harder to distinguish the noise areas as the value exceeded this.

For this study, the Otsu method [60][61][62][63][64][65][66][67][68][69] was applied to reliably select the threshold value, which has a great influence on the synthesized results of the proposed approach. The Otsu method provides the criteria for setting the most natural threshold value using a statistical method. This method defines two variances for a threshold value T, which can range from 0 to 255, where α is the ratio of pixels darker than the T value, and β is the ratio of pixels brighter than the T value; σ₁² is the variance of pixels darker than the T value, and σ₂² is the variance of pixels brighter than the T value; µ₁ is the average brightness of the pixels darker than the T value, and µ₂ is the average brightness of the pixels brighter than the T value. The Otsu method selects the T with the largest inter-class variance when pixels are divided into two classes based on T, while increasing the threshold value T from the minimum value to the maximum value (from 0 to 255). That is, the Otsu method minimizes (3) or maximizes (4) to dynamically find the optimal threshold T (the method tries to keep the variance inside each group small and the variance between the groups divided by T large).
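The display formulas (3) and (4) did not survive extraction, but the selection rule described here — sweep T from 0 to 255 and keep the T with the largest between-class (inter-class) variance — can be sketched directly. The histogram-based implementation below is a generic Otsu search written for this sketch, not the authors' code, and the synthetic test image is a placeholder.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold T (0-255) maximizing the between-class variance.

    alpha, beta : fractions of pixels darker / brighter than T
    mu1, mu2    : mean brightness of the darker / brighter class
    Maximizing alpha*beta*(mu1 - mu2)**2 is equivalent to minimizing the
    within-class variance, as described in the text.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        alpha, beta = prob[:t].sum(), prob[t:].sum()
        if alpha == 0 or beta == 0:
            continue
        mu1 = (np.arange(t) * prob[:t]).sum() / alpha
        mu2 = (np.arange(t, 256) * prob[t:]).sum() / beta
        between = alpha * beta * (mu1 - mu2) ** 2
        if between > best_var:
            best_t, best_var = t, between
    return best_t

# Example on a synthetic bimodal image (dark background, bright vessels).
rng = np.random.default_rng(1)
img = np.clip(np.concatenate([rng.normal(40, 10, 5000),
                              rng.normal(180, 15, 1000)]), 0, 255).astype(np.uint8)
print(otsu_threshold(img))   # typically lands between the two modes
```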
Figure 14 shows a comparison of a vessel image extracted from the original low-quality image with one synthesized by the proposed method. The white noise area at the bottom of the original low-quality image is clearly not present in the synthesized vessel image. As for the bottom part of the vessel stem that was damaged and could not be distinguished, this was created independently with noise removal.

Figure 15 shows a comparison of the vessel image extracted from the original high-quality image with one synthesized by the proposed method. The vessel portion that was newly created in the synthesized vessel image is similar to the vessels in the original high-quality image. However, compared to the original high-quality image, the right part of the vessel is not clean, and there is faint noise in the vessel image that was synthesized by the proposed method. The noise is below the threshold value that was set when the mask was created. Even though the area below the threshold value is noise, the pixels from the original low-quality image are used because it was recognized as an empty area and was set to 0 in the binary image. For this reason, merely lowering the threshold value increases the influence of the similar image; this makes it possible to ignore the empty parts of the low-quality image, but vessels may then be used that are not related to the owner of the original image.
In our evaluation, we focused on two types of comparisons: (1) a comparison of the low-quality vessel image with the high-quality vessel image, and (2) a comparison of the low-quality vessel image with the image restored by the proposed method, using the HRF dataset. A statistical analysis was performed on the experimental results to objectively assess the proposed method. An independent sample t-test is often used to compare the population means of two groups, mainly to observe the similarities or differences between two different test groups [75][76][77][78]. In our experiment, an independent sample t-test was used to determine whether there was a significant difference between the high-quality and synthesized images. For the evaluation, we conducted three experiments using the independent sample t-test for three separate techniques: (1) analysis of feature matching, (2) analysis of image similarity based on the Haralick algorithm, and (3) analysis of mean-square error (MSE). We checked whether the two groups (low-quality image vs. high-quality image, and low-quality image vs. the restored image) had equal variance prior to performing the t-test. Thus, we conducted the F-test first and then checked whether the p-value was greater than the significance level (0.05).
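The two-stage test described here (an F-test for equality of variances, followed by an independent two-sample t-test at the 0.05 level) can be sketched with SciPy as below. The per-image score arrays are placeholders for illustration only and are not values from Tables 1-3; the two-sided F-test convention used here is one common choice, not necessarily the authors'.

```python
import numpy as np
from scipy import stats

def compare_groups(group_a, group_b, alpha=0.05):
    """F-test for equal variances, then an independent two-sample t-test."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    va, vb = a.var(ddof=1), b.var(ddof=1)
    # F statistic: ratio of sample variances, larger variance on top.
    if va >= vb:
        f_stat, dfn, dfd = va / vb, len(a) - 1, len(b) - 1
    else:
        f_stat, dfn, dfd = vb / va, len(b) - 1, len(a) - 1
    f_p = min(1.0, 2 * stats.f.sf(f_stat, dfn, dfd))   # two-sided p-value
    equal_var = f_p > alpha
    t_stat, t_p = stats.ttest_ind(a, b, equal_var=equal_var)
    return f_p, t_p, t_p > alpha

# Placeholder similarity scores for 18 HRF pairs (not the paper's data).
bad_vs_good = [0.81, 0.79, 0.84, 0.80, 0.82, 0.78, 0.83, 0.80, 0.81,
               0.79, 0.85, 0.77, 0.82, 0.80, 0.83, 0.81, 0.79, 0.80]
bad_vs_synth = [0.83, 0.82, 0.86, 0.81, 0.84, 0.80, 0.85, 0.82, 0.83,
                0.81, 0.86, 0.79, 0.83, 0.82, 0.85, 0.83, 0.81, 0.82]
f_p, t_p, no_difference = compare_groups(bad_vs_good, bad_vs_synth)
print(f_p, t_p, no_difference)   # no_difference=True: groups are comparable
```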
Table 1 shows the number of high-quality and restored (synthesized) image features that match those of the low-quality image using feature matching. In Table 1, the "# of features matched in bad-good images" column gives the number of matching features obtained when the feature-matching algorithm was used on the low- and high-quality images, and the "# of features matched in bad-synthesized image" column gives the number of matching features obtained when the feature-matching algorithm was used on the low-quality and synthesized images. Table 2 shows how similar the retrieved high-quality and restored (synthesized) images are to the low-quality images. In Table 2, "Bad-Good Image Similarity" is the similarity when the Haralick algorithm was used on the low-quality and good images, and "Bad-Synthesized Image Similarity" is the similarity when the Haralick algorithm was used on the low-quality and synthesized images. The p-value exceeded the significance level of 0.05 for the independent sample t-test results; thus, there were no statistically significant differences between the high-quality and synthesized images. In entries 1, 12, and 18 of Table 2, the similarity of the low- and high-quality images was greater than that of the low-quality and synthesized images, unlike in the majority of cases. This is because the similar images used in the image synthesis did not adequately represent the original images; this appears to be a problem that can be resolved by improving the dataset or the algorithm used to find similar images. Table 3 shows how much the high-quality and restored (synthesized) images differ from the low-quality images in terms of mean-square error (MSE). In Table 3, the "MSE Between Bad-Good Image" column gives the MSE value computed between the low-quality images and the high-quality images, and the "MSE Between Bad-Synthesized Image" column gives the MSE value computed between the low-quality images and the synthesized images.

Figure 16 shows the six p-values obtained from the experiments (i.e., 2 types of tests and 3 experiments). Here, the Y axis represents the p-values of the tests, whereas the black bar and slash-patterned bar represent the F-test and t-test, respectively. The results from the three experiments showed that there was no difference between the good image and the synthesized image. In all experiments, the two groups (low-quality image vs. high-quality image, and low-quality image vs. the synthesized image) had equal variance, as the F-test p-values were greater than the significance level (0.05). Moreover, since all the t-test p-values were greater than the significance level (0.05) in all experiments, we can assume that there was no significant difference between the two groups, confirming that the synthesized retina image was sufficiently similar to the good-quality retina image. In addition, for the three types of experiments, we also evaluated how the threshold values affected the p-values. Table 4 shows how the p-values changed according to changes in the threshold type: the Otsu method and global values (8 to 128). The results showed that, for the global threshold, the p-values increased as the threshold values increased, but dropped drastically when the latter reached 128. As previously mentioned, this is probably because the noise was not clearly removed when the threshold value became excessively high.
In particular, in the similarity test with the Haralick algorithm, none of the p-values exceeded the significance level of 0.05 except when the threshold value was 64. As the figure shows, the p-value in one extreme situation, with a threshold value of 8, for example, decreases less than in the other extreme situation, where the threshold value is 128. This means that it is better to use the similar image as it is than to use the low-quality image. Since the synthesized image becomes closer to the similar image as the number of areas with a pixel value of 1 increases (i.e., at a low threshold value), it would represent the actual retinal image of a patient more clearly when the dataset is organized in a better way, which means that lower threshold values would reduce damage to the synthesized image.

In the case of the threshold derived by the Otsu method, the p-value obtained from feature matching was similar to that of the manually selected global threshold. However, the p-values generated by the Haralick algorithm and MSE were higher than those of the manually selected global threshold. In our experiments, it was confirmed that the Otsu method is a very appropriate threshold setting method for the Haralick algorithm and MSE.

Discussion

An image synthesis approach was proposed in this study to improve the quality of retinal images, particularly their vessel areas, to help physicians perform diagnoses after retinal examinations. As this is a medical study, the proposed method will be very useful for emergency medical situations requiring high-quality images in real time. Table 5 shows the time taken to perform 10 repeated rounds of training and testing for the experiments described in Section 4.
For the 10 rounds of data, the smallest value is marked in bold, and the largest value is underlined. The average training time was 634 s, and in all cases it took approximately 10 min. The average testing time was approximately 33 s, and in all cases it took approximately 30 s. The training time is not a significant issue, as this is the processing time required by the server. The problem is the testing time, as this is the time taken to produce results after a physician has inputted a newly created retinal image into the trained model, and it has the greatest effect on real-time service. Nevertheless, an average of 30 s is sufficient for situations with soft real-time conditions. However, it is difficult for our approach to satisfy hard real-time conditions such as life-threatening emergency medical situations.

Table 5. Time for generating synthetic vessel images (the minimum value in each column is presented in bold, and the maximum value in each column is underlined).

Conclusions

This study has proposed an image synthesis method that allows for accurate diagnoses after retinal examinations. The proposed method ensures that it is always possible to clearly distinguish the vessel portions of images that are essential for examinations, even when the images obtained in the retinal examination have a low resolution. Specifically, it collects the most similar images from a dataset that stores existing high-resolution images, segments the high-resolution vessel portions, and combines them with the low-quality (low-resolution) retinal image to obtain the clearest vessel image. Through this process, optimal high-resolution vessel images, which can aid in making accurate diagnoses, are extracted. In addition, this type of study contributes toward future research by formulating a complex retina vessel structure model from an anatomic and ophthalmological perspective. When a patient with expert knowledge of the eyes has doubts about the physician's diagnosis regarding their retinal images, clear retinal images can act as a basis for a re-diagnosis. Our results also contribute toward future studies on automating retinal image-based diagnoses. Future directions for this study are as follows. As discussed in Section 6, the current method cannot provide the critical real-time services that are needed in emergency medical situations. We believe that it is very important to study this first. In addition, this study's dataset follows a fixed form, and the method cannot dynamically handle a variety of datasets. This does not reflect reality, as retina data formats can vary according to the country or type of hospital. To deal with this, we will study data reduction techniques that are based on the characteristics of retina images. By doing so, it will be possible to dynamically handle different retinal data formats. Moreover, studies on efficiently generating retinal data are expected to contribute toward improving the quality of the retinal images themselves rather than only the efficiency of retinal data structures.
2020-05-21T00:05:14.430Z
2020-05-07T00:00:00.000
{ "year": 2020, "sha1": "5ea9b5593b9f2a38dbe8f21456a60237827aa89f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-9292/9/5/767/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "a469978eb30b019c25ee9c43396a282a2753e989", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Computer Science" ] }
13915266
pes2o/s2orc
v3-fos-license
The relationship between wave and geometrical optics models of coded aperture type x-ray phase contrast imaging systems X-ray phase contrast imaging is a very promising technique which may lead to significant advancements in medical imaging. One of the impediments to the clinical implementation of the technique is the general requirement to have an x-ray source of high coherence. The radiation physics group at UCL is currently developing an x-ray phase contrast imaging technique which works with laboratory x-ray sources. Validation of the system requires extensive modelling of relatively large samples of tissue. To aid this, we have undertaken a study of when geometrical optics may be employed to model the system in order to avoid the need to perform a computationally expensive wave optics calculation. In this paper, we derive the relationship between the geometrical and wave optics model for our system imaging an infinite cylinder. From this model we are able to draw conclusions regarding the general applicability of the geometrical optics approximation. Introduction It is hoped that x-ray Phase Contrast Imaging (XPCi) will provide a generational improvement in the effectiveness of mammography [1]. To our knowledge, the only in vivo mammography program is in progress in Trieste, Italy, using the SYRMEP beam line [2]. This program has provided mammograms of improved spatial resolution and detail visibility compared with conventional mammography. It cannot, however, be considered a viable tool for clinical screening due to its reliance on a synchrotron source. An alternative XPCi technique employing laboratory sources was suggested by Olivo et. al [3,4] in 2007. This technique is known as coded aperture XPCi and has since been under continuous development within the radiation physics group at UCL (see references [5,6,3] for example). This technique has been demonstrated experimentally and validated theoretically in the aforementioned references. We are now building a pre-prototype coded aperture XPCi system in order to demonstrate the efficacy of the technique using in vitro human breast tissue samples. In order to design the system and verify the experiments, it is necessary to model the entire imaging system, including the interaction of x-rays with tissue. The small refractive index contrast of tissue combined with the unpolarised x-ray source mean that a full electromagnetic calculation for the scattered x-rays can be avoided. Furthermore, the short wavelength of x-rays relative to typical cell structure dimensions means that a geometrical optics model is often sufficient. This is important as a rigorous scalar calculation of the scattered field would require prohibitively large computational resources. In this paper, we thus attempt to establish conditions under which a geometrical optics approximation can be employed to model a coded aperture XPCi system. As with models of other, related imaging systems [7,8,9,10,11], we consider phase objects whose optical thickness is at least piece-wise smooth. Work was done by Peterzol et. al [12] to determine the limits of validity of the geometrical optics approximation for free space propagation type XPCi systems. Such an analysis has not been performed for coded aperture XPCi systems. The link between geometrical and wave optics is by no means a new area of research. Keller [13] was the first to show that geometrical optics need not be limited to modelling objects with smoothly varying refractive index. 
He showed that geometrical optics is an approximation to wave optics which can be made more accurate by the inclusion of higher order terms. A good account of this technique is given by James [14]. In this paper we calculate higher order terms to show how the geometrical optics and wave optics solutions vary in predicting coded aperture XPCi images. The paper is arranged as follows. We first present the wave optics model and show how it can be implemented efficiently. We then derive the geometric optics model before showing how a source of finite width can be introduced into the system. We then apply the developed theory to the particular example of an infinite cylinder. By employing the stationary phase approximation to the diffraction integrals which result from the wave optics model, we derive the geometrical optics model, thus showing how the two models are related. Finally we show some numerical examples and show conditions under which the geometrical optics model may be accurately employed. Wave optics model We consider first the wave optics model of the imaging system depicted in Fig. 1. Normally a sample would be placed on the detector side of the sample apertures however we initially consider the sample free case. Following the method employed by Olivo and Speller [16] we use Fresnel-Kirchhoff diffraction theory to calculate the field incident upon the detector apertures. We consider initially a single point source at position (x s ,0, −z so ) emitting a spherical wave at wavelength λ. Previous experiments have shown [16] that modelling the system at the source's average energy gives a good prediction of the image. The assumption of a point source will be relaxed later. Assuming the exp(−iωt) sign convention, the field at position P = (x,y,z od ) may be given by [16]: (1) where (2) and represents the transmitting regions of the sample apertures. In addition, x, ξ and z are defined in Fig. 1 and (ξ,ψ,z) and (x,y,z) form right handed coordinate systems. The integration over ψ can be performed by noting that the apertures have no dependence upon ψ. We must thus evaluate: evaluating this integral with limits at +∞ contradicts the Fresnel approximation used to obtain Eq. (1) and should be solved using the theory of distributions [15]. This problem can however be avoided by noting that the kernel of the integral in Eq. (3) is a rapidly oscillating function which lends itself to asymptotic evaluation by the method of stationary phase. According to the method of stationary phase [14,, an integral of the form: (4) where g(x) has a single first order stationary point, x 0 , such that g′ (x 0 ) = 0, g′′ (x 0 ) ≠ 0, can be approximated as: (5) in the limit of large k. Applying this approximation to Eq. (3) we find that (6) the role of this term is to ensure energy conservation and give the incident field the correct phase relationship with y. This result is also obtainable using Fourier theory applied to distributions [15] which reveals that Eq. (6) is in fact the solution to Eq. (3) [17]. This enables us to write Eq. (1) as: (7) where we now introduce the periodic function T(ξ) to represent the transmission function of the sample aperture. It is now easy to include the effect of a phase object with phase function ϕ (ξ) by following an approach similar to that of Arfelli et. al [18]. The total field at the detector apertures may be found according to: (8) where is the extent of the object. Efficient evaluation of wave optics field We now turn our attention to how the expression in Eq. 
(7) may be efficiently evaluated. As T(ξ) is a periodic function with period L, it can be represented as a complex Fourier series written in general as: (9) which upon substitution into Eq. (7) yields (10) As also suggested by Engelhardt et. al [10], the Fast Fourier Transform (FFT) can be used to efficiently evaluate this expression. In particular, starting with the definition of the discrete Fourier transform [19]: (11) by allowing x′z so /(z so + z od ) to take on values κL/(2N) the summation in Eq. (10) may be evaluated, for a finite number of terms, by constructing a vector of the form: (12) where (13) and finally taking the Fourier transform of the vector in Eq. (12). Noting also that the coefficients C n may also be evaluated using the FFT, Eq. (10) may be evaluated very efficiently. The second term in Eq. (8) must, in general, be evaluated numerically unless the object has a phase function permitting analytic evaluation. It was found that Gaussian Quadrature integration [19] provided accurate results. Geometrical optics model Olivo and Speller [6,20] have previously used geometrical optics to model the coded aperture XPCi system. Their approach used a "forward" technique where photons emitted by the source were traced through the system. Photons could be blocked by an aperture, refracted by a sample or both. The number of photons reaching a particular pixel represent the signal detected by that pixel. We now consider the ray optics approach in a more formal manner in order to relate it to the wave optics approach. For the remainder of this section we consider only non-trivial rays which are transmitted by the sample aperture. We consider here a first order geometrical optics. It has been shown by Keller [13] and later by James [14] that geometrical optics may be extended to include higher order terms which represent what is usually termed diffraction. Here we consider only the first order terms of the geometrical optics approximation. The trajectory of a light ray is described by the expression [21]: (14) where r is the position vector of a point on the ray, s the length of the ray, n the refractive index of the medium and defines a wave front of constant phase, ie, . It is evident from this that we assume rays are deflected in the ξ direction only. Consider a phase object as depicted in Fig. 2. We define the phase function, ϕ (ξ), as (15) Europe PMC Funders Author Manuscripts where n(ξ,z) is the refractive index at position (ξ,z) and we have assumed that rays make only small angles, θ i , with the z-axis. The angle by which the ray is deflected in then given by: (16) With reference to Fig. 1, we can say that a ray emitted at angle θ i to the z-axis will intercept the ξ-axis at position ξ = z so tan(θ i ) and, if deflected by an object, will intercept the x-axis at position (17) The phase of the ray at the detector apertures is calculated by taking into account the phase introduced by the object and the distance travelled in free space according to the Fresnel approximation. The amplitude of the ray must be such that energy is conserved. In particular, the time average power propagating in a small pencil of rays emanating from the source must remain constant. The ratio between ray amplitudes at z = z od and z = 0 is thus given by: (18) Modelling a finite size source Secs. (2)-(4) show how to calculate the field incident upon the detector apertures. 
The detected signal is found by integrating the intensity of x-rays transmitted by the detector apertures and incident upon a particular pixel. In general, the pth transmitting region of the detector apertures is given by [pLM − LM/4+dL, pLM +LM/4+dL] where dL is the displacement of the detector apertures relative to the projection of the sample apertures as shown in Fig. 1 and M = (z od + z so )/z so is the system magnification. We assume that the pixels are aligned as shown in Fig. 1 such that a single pixel entirely covers a single transmitting region of the detector apertures. Before calculating the signal detected by each pixel, we introduce a source of finite size in the x̄ direction. The brightness is described by P(x) which we will take to have a Gaussian profile. We can then take the signal of the pth pixel to be given by: (19) where, for mathematical convenience we have assumed that the source brightness profile limits the effective source size rather than the limits of integration. By making the substitution P(x) = exp(−(x/σ) 2 ), Eq. (19) may be expressed as: (20) where (21) and erf(z) is the error function (22) Equation 20 shows that K(x) may effectively be considered as a pixel sensitivity function. Figure 3 shows plots of K(x) for a variety of source Full Widths at Half Maximum (FWHM). This shows how a broad source leads to a broad K(x) thus diminishing the sensitivity of the system to fine variations in the intensity caused by phase variations in the object. Wave optics model We now apply the results of previous sections to model the XPCi image of a cylindrical fibre. This problem has been considered previously by Olivo and Speller [6] in order to verify experimental results. We consider a non-absorbing cylinder of radius R, refractive index n = 1 − δ, parallel to the ψ-axis centered upon (ξ,z) = (ξ 0 ,0). Absorbing materials can be modelled by writing n = 1 − δ + iβ thus introducing an attenuation term in Eq. (8). We have opted to set β to 0 to simplify the following analysis. Note that δ is of the order of 10 −6 to 10 −7 for the range of x-ray energies and materials which we consider here. The phase function, ϕ (ξ), may thus be calculated as: (23) We consider first the wave optics calculation. We must evaluate the second term of Eq. (8) after substituting Eq. (23) into it. The bounds of integration are found by taking the intersection of the transmitting part of the sample aperture and the cylinder. Without loss of generality we consider three cases depicted in Fig. 4 where the transmitting part of the sample aperture is assumed to be centered upon ξ = 0. Note that the cases depicted in Fig. 4 do not limit the cylinder radius, all that is important is where the cylinder boundaries lie relative to the transmitting regions of the apertures. A cylinder covering more than one sample aperture could be modelled using a combination of the cases depicted in Fig. 4. In practice our system employs a series of apertures to simultaneously image a wide field of view. For clarity, we consider here a single sample/detector aperture pair and scan the object to obtain its image. Images obtained in this way will be equivalent to those obtained in practice only when photons are not scattered between differing pre-sample/detector aperture pairs. Only a simple extension is required to model the practical system as is shown at the end of Sec. (6.2). An analysis of when this approximation is valid is given in Sec. (6.3). 
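Before turning to the evaluation of this integral, it may help to recall the phase function it involves. The display equation for the cylinder phase, Eq. (23), did not survive extraction; from the stated geometry (a non-absorbing cylinder of radius R and refractive index n = 1 − δ centred on ξ = ξ₀), the standard projected-thickness form would be expected. The expression below is offered only as that standard form, a plausible reconstruction rather than a verbatim copy of the paper's equation, and the sign depends on the chosen time convention.

```latex
\phi(\xi) \;=\; -k\,\delta\, t(\xi)
\;=\; -2 k \delta \sqrt{R^{2} - (\xi - \xi_{0})^{2}},
\qquad |\xi - \xi_{0}| \le R ,
```

with φ(ξ) = 0 outside the cylinder, where t(ξ) is the chord length traversed by a ray at transverse position ξ.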
The integration may be evaluated numerically however it is instructive to analytically evaluate by approximation. We start by writing Eq. (8) as (24) where (25) where Ω = [ξ 0 − R,ξ 0 + R] ∩ [−Lη/2,Lη/2] and η is the fill factor of the sample apertures. We now attempt to find asymptotic solutions, for large k, to the integrals in Eq. (25) by (5). We now turn our attention to evaluating leading terms in the asymptotic expansions of the integrals in Eq. (25). We start by defining the functions g 1 (ξ) and g 2 (ξ) and finding their derivatives as: (27) it is easy to verify that when M, δ and z od are limited to those values experienced in practice, has a unique solution for every value of x′. This solution must in general be calculated numerically. This may be done efficiently by evaluating where ξ i = [ξ i ] is a discretisation of the domain [ξ 0 − R +ε,ξ 0 + R − ε] for some small ε. This corresponds to case (2) in Fig. 4 where the entire cylinder is illuminated and thus rays are refracted to all values of x′. In cases (1) and (3), the bounds of integration are affected by the sample apertures which in turn affects the values of x′ to which rays are refracted. The stationary point of g 1 for a particular x′ may be found by interpolation with γ as the abscissa. This enables the leading term in the expansion to be calculated as (28) examination of shows that g 2 has a single stationary point at ξ 2,0 = x′/M. In the case that x′/M is within the bounds of integration of U 2 , the following term is contributed by the stationary point ξ 2,0 : (29) supposing then that Ω = [a,b] and that neither a or b are stationary points of g 2 , the next term in the asymptotic expansion may be found as (30) which, in the special case where b = −a, becomes Relationship between wave and geometrical optics models We show here how the wave and geometrical optics solutions are related. As θ i and α in Eq. (17) are small we can write Eq. (17) as (32) where Eq. (23) has been substituted into Eq. (16) to find α. This expression is identical to in Eq. (27), the stationary phase condition for integral U 1 . It is then easy to verify that assuming identical incident field conditions, substituting Eqs. (23) and (16) into Eq. (18) results in the same magnitude as CΓ(y)I 1,0 . Furthermore, substitution of the phase contributions from the phase object and the Fresnel approximation for free space propagation result in the same phase as in CΓ(y)I 1,0 . This shows that the leading term in the asymptotic expansion of U 1 gives the same field as the geometrical optics approximation to the refracted field. Examination of CΓ(y)I 2,0 = 1/(z so + z od )exp(ik((y 2 + (x − x s ) 2 )/(2(z so + z od )) + z so + z od )) shows that this is the geometrical optics field of the light which reaches the detector apertures without being refracted by the cylinder or blocked by the sample apertures. Closer examination of CΓ(y)I 2,1 shows that this is the field due to diffraction at the edges of Ω. Note that this quantity becomes infinite at the edges of the geometrical projection of Ω onto the detector apertures. This non-physical result can be remedied by modifying the stationary phase solution [23] however this is beyond the scope of this work. Table 6.2 shows how the intensity and complex amplitude, for the geometrical and wave optics models respectively, are calculated in each region defined in Fig. 5. 
Note that the geometrical optics solution provides the intensity of the field whilst the wave optics solution provides the complex amplitude of the field. Examples and analysis The validity of the presented model and technique depend on the tendency for photons to be scattered between adjacent sample/detector aperture pairs. It is however possible to develop a minimum bound upon the separation of apertures required to maintain validity. If a cylinder of radius R is placed with its centre at ξ = 0 in the imaging system of Fig. 1, its edge will be projected onto the position x′ = MR in the space of the detector. We are interested in knowing how quickly the field scattered by the cylinder decays away from x′ = MR. Assuming that the edge of the cylinder is illuminated, photons are refracted to values of x′ approaching ∞ and are described by the term I 1,0 defined in Eq. (28). Photons reaching a position x′ ≫ MR must be incident upon the cylinder for a value of ξ very close to, but not exceeding R. By writing ξ = R − ε, ε > 0, in Eqs. (27) it is easy to find a simple analytic expression giving I 1,0 for x ⪢ MR as ε tends to 0. It is then simple to show that will reduce by two orders of magnitude at a position x′ = RM + Δx′ where: Figure 6 shows contours of Δx′ for values of R and δ encountered in practice. Δx′ may be considered the minimum separation of adjacent sample/detector aperture pairs to ensure detector apertures principally detect photons originating from their associated sample aperture. The above analysis considers only a point source. A source of finite width may be considered by noting the definition of x′ in Eqs. (2) and thus adding (W/2)z od /z so to Δx′, where W is the detector FWHM. Previous studies [6] have shown that coded aperture XPCi contrast is increased by reducing the fraction of detector pixel exposed to directly incident radiation. This however leads to an increase in the exposure time as fewer photons reach the pixel. In this work we have thus chosen a displacement, dL, equal to half of the transmitting width of the detector apertures, thus exposing half of the pixel to directly incident radiation. We used a sample aperture periodicity of L = 40μm along with z so = 1.6m and z od = .4m to match the dimensions of an experimental system currently under construction. The simulations were performed for a photon energy of 100keV. Figure 8 shows the intensity incident upon the detector apertures as calculated by the wave optics and geometrical optics solutions for a point source illuminating a cylinder. The cylinder has a value of δ = 10 −7 , a radius of 5μm and was situated with its axis at ξ = −5μm. As is expected, the wave optics intensity exhibits oscillations resulting from interference between different field components. The geometrical optics solution is physically impossible as the sharp edge occurring at x = 0 would require the field to contain infinite spatial frequencies. Consideration of the angular spectrum of a propagating aperiodic field shows that such a field would require evanescent waves which, in our case, would have negligible magnitude such a distance from the sample apertures. Figure 9 compares the directly calculated wave optics intensity to that calculated using the stationary phase approximation. As explained in Sec. (6.2), singularity anomalies arise in this solution which have been neglected. This plot shows that apart from these anomalies, the approximate solution agrees well with the directly calculated intensity. 
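For reference, since the display equations were lost in extraction, the leading-order stationary-phase result invoked throughout this section has the standard form below. This is a generic statement of the method for a single interior stationary point ξ₀ with g′(ξ₀) = 0 and g″(ξ₀) ≠ 0, not a verbatim copy of the paper's Eq. (5).

```latex
\int f(\xi)\, e^{\,i k g(\xi)}\, d\xi
\;\sim\;
f(\xi_{0})\,\sqrt{\frac{2\pi}{k\,\lvert g''(\xi_{0})\rvert}}\;
e^{\,i k g(\xi_{0}) \,+\, i\frac{\pi}{4}\operatorname{sgn} g''(\xi_{0})},
\qquad k \to \infty .
```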
One can use the components which comprise the approximate solution to determine when the geometrical optics and wave optics solutions converge. This is however made difficult by the singularity anomalies present in the approximate solution and so we have opted to use a more pragmatic approach as outlined below. One can envisage that when a source of finite width is employed, the XPCi signals predicted by the geometrical and wave optics models should converge. There are two explanations for why this should be the case. The first explanation observes that the point source intensity incident upon the detector apertures is convolved with the magnified source profile which in our case is Gaussian. This is equivalent to applying a low pass filter to the intensity distribution causing the oscillations in the wave optics intensity and the sharp transition in the geometrical optics intensity to be smoothed. The second explanation considers that, as shown in Sec. (5), a source of finite width may be modelled by a system employing a point source and equivalent detector apertures which cause the pixels to have a spatially dependent sensitivity as described by Eq. (21). Figure 3 shows that as the source broadens, so does the width of the equivalent detector aperture sensitivity function. Because of energy conservation, one would expect the geometrical and wave optics XPCi signals to converge as the source broadens. In particular, consider the plot shown in Fig. 7. This shows the difference between the intensities, incident upon the detector apertures, predicted by wave and geometrical optics. This signal has a zero mean value as required by conservation of energy. The coded aperture XPCi signal thus depends upon the domain over which the field intensity is integrated by the detector pixel. As the sensitive part of each detector pixel increases, or equivalently, as the source broadens, the geometrical and wave optics signals thus tend to converge. Previous studies [6] have shown that the maximum XPCi signal for a cylindrical object using the system described in this section occurs when the cylinder is positioned at approximately ξ 0 = −R. This is demonstrated in Fig. 10 where wave and geometrical optics signal traces have been plotted for a cylinder of radius 5μm and δ = 10 −6 . The signals have been normalised by the signal for the object free case. These plots demonstrate how the signal traces converge as the FWHM of the source increases. It also shows how the peak of each trace is in the vicinity of ξ 0 = −R, as expected. Simulations run over a range of radii, values of δ and source FWHM show that the peak of the signal trace does indeed occur in the region of ξ 0 = R. This is suggests a good way of assessing the difference between the wave and geometrical optics XPCi signals as the two signals are likely to vary most at the peak. We thus calculate an error term, ε(−R), where , and I WO (ξ 0 ) and I GO (ξ 0 ) are the XPCi signals for the geometrical and wave optics (full expression evaluated numerically) cases respectively, for a cylinder at position ξ 0 . and are the object free XPCi signals for the geometrical and wave optics cases respectively. Before proceeding to calculate ε it is useful to note that some approximations can provide further insight into the problem. In the case of ξ 0 = −R, g 1 in Eq. (27) can be well approximated by for x′ > −MR, but not too close to −MR. This approximate form leads to a solution of for the stationary point of g 1 . 
Substitution of ξ 1,0 back into the approximate forms of g 1 and show that both of these functions have a dependence upon δ 2 R rather than each of these independently. This suggests that it is reasonable to expect ε for a particular source FWHM to be constant for constant values of δ 2 R. This is indeed the case as was verified by a large number of simulations, a small selection of which are shown in Fig. 11. This significantly simplifies the task of determining the source size for which the geometrical and wave optics signals converge. Figure 12 is a contour plot of ε as a function of source FWHM and δ 2 R. The important conclusion which we can draw from this is that for our particular choice of z od and z so , as we expect a source to have a FWHM of around 50μm, the geometrical optics model will provide results consistent with those of the wave optics model. This result will make it feasible to model much larger objects. Conclusions In this paper we have outlined the two most widely used techniques for modelling XPCi systems: wave and geometrical optics. We have used the theory developed to model the image of an infinite cylinder in a coded aperture XPCi system. This problem has practical significance as it can be tested experimentally. For this particular problem, we show how the geometric and wave optics models are related. we then show how this theory can be used to develop a guide for when the two techniques can be trusted to give consistent results. Schematic diagram of imaging system including reference frames used in the paper. Note that (x,ȳ,z), (ξ,ψ,z) and (x,y,z) all form right handed coordinate systems. The imaging system is assumed to have no y dependence. Note that dL is defined by the displacement between the detector apertures and the projection of the sample apertures onto the detector apertures. Diagram illustrating the three regions which must be considered when analysing the field incident upon the detector apertures. Plot of the difference between intensities calculated using the wave optics (full expression evaluated numerically) and geometrical optics approximations. The sensitive region of the pixel is shaded. Simulation parameters were the same as in Fig. 8. Plot of the intensity of the field incident upon the detector apertures for the geometrical and wave optics (full expression evaluated numerically) solutions. Simulation parameters used were R = 5μm and δ = 10 −7 , all other parameters were as described in Sec. (6.3). Plot of the intensity of the field incident upon the detector apertures as calculated using the exact and approximate wave optics formulations. Simulation parameters were the same as in Fig. 8. Contour plot of the error between the normalised XPCi signals as calculated by geometrical and wave optics models. Source FWHM is on the vertical axis and δ 2 R is on the horizontal axis.
2018-04-03T02:01:03.762Z
2010-03-01T00:00:00.000
{ "year": 2010, "sha1": "942852f1658dd2e767aee0c899e0711c1489d768", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.18.004103", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "eeba7327b10295536d47d08a8338df4645af94af", "s2fieldsofstudy": [ "Engineering", "Medicine", "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
25674070
pes2o/s2orc
v3-fos-license
EXPERIMENTAL MODELS OF HEPATECTOMY AND LIVER REGENERATION USING NEWBORN AND WEANING RATS

OBJECTIVES: Liver regeneration is a complex process that has not been completely elucidated. The model most frequently used to study this phenomenon is 70% hepatectomy in adult rats; however, no papers have examined this effect in developing animals. The aims of the present study were: 1) to standardize two models of partial hepatectomy and liver regeneration in newborn suckling and weaning rats, and 2) to study the evolution of remnant liver weight and histological changes of hepatic parenchyma on the days that follow partial hepatectomy.

METHODS: Fifty newborn and forty-four weaning rats underwent 70% hepatectomy. After a midline incision, compression on both sides of the upper abdomen was performed to exteriorize the right medial, left medial and left lateral hepatic lobes, which were tied inferiorly and resected en bloc. The animals were sacrificed on days 0 (just after hepatectomy), 1, 2, 3, 4 and 7 after the operation. Body and liver weight were determined, and hepatic parenchyma was submitted to histological analysis.

RESULTS: Mortality rates of the newborn and weaning groups were 30% and 0%, respectively. There was a significant decrease in liver mass soon after partial hepatectomy, which completely recovered on the seventh day in both groups. Newborn rat regenerating liver showed marked steatosis on the second day. In the weaning rat liver, mitotic figures were observed earlier, and their amount was greater than in the newborn.

CONCLUSIONS: Suckling and weaning rat models of partial hepatectomy are feasible and can be used for studies of liver regeneration. Although similar, the process of hepatic regeneration in developing animals is different from adults.

INTRODUCTION

The liver has significant regenerative capacity after injury. Even after major insult, such as extensive surgical resection, its function usually recovers within a couple of weeks. This is accomplished through complex mechanisms that have not been fully elucidated.1 Studies of liver regeneration in humans are difficult because of the heterogeneous etiology of liver lesions that precede regeneration.2 For these reasons, investigation of liver regeneration in standardized experimental models seems to be more useful than clinical studies. Regeneration models may be in vitro or in vivo. Cultured hepatocytes (in vitro model) have very different physiological responses relative to in vivo models, and it has been increasingly recognized that non-parenchymal cells may play an important role in in vivo regeneration because their interaction with hepatocytes is implicated in all physiological responses of the liver.3

In 1931, Higgins and Anderson published a model of 70% hepatectomy in adult rats,4 which has been employed heavily in investigations of hepatic regeneration.5,6 Small animals, such as mice and rats, are useful because they are easy to manage and represent minimal logistical, financial or ethical problems.2 However, physiological differences, such as having a faster metabolism relative to humans, must be considered. Under normal circumstances, the human liver initiates regeneration within 3 days and reaches its original size by 3-6 months.1 In rats, the interval between partial hepatectomy and initiation of DNA synthesis in hepatocytes is 10 to 12 hours and peaks at about 24 hours.7
Liver weight is completely recovered by the seventh day. Histological examination reveals slight hypertrophy of both cytoplasmic bodies and nuclei. Mitosis begins by the end of the first day, and cell division is completed on the second and third days.4

In normal individuals, liver regeneration can be affected by a number of factors that jeopardize the quality and end result of the process. Aging is one of these factors. Biondo-Simões et al. observed that hepatocyte replication is delayed in the livers of older animals.8 Therefore, the phenomena involved in the liver regeneration of developing animals are characterized by different intensity and quality, as compared to the adult animals.10,11 Although the Higgins and Anderson model has been used extensively, there are no studies of hepatectomy and liver regeneration in growing animal models.

The aims of the present experimental study were: 1) to standardize two animal models of partial hepatectomy (PH) and liver regeneration using newborn suckling and weaning rats, and 2) to study the evolution of remnant liver weight and histological changes of regenerating hepatic parenchyma on the days following partial hepatectomy.

Animals

Fifty newborn suckling rats (age 5-7 days, weight 6-10 g) and forty-four weaning rats (age 21-23 days, weight 30-50 g) were operated upon. All animals received care according to the criteria outlined in the "Guide for Care and Use of Laboratory Animals" prepared by the National Academy of Sciences; this study protocol and the anesthetic procedures were approved by the Animal Ethics Committee of the University of São Paulo Medical School.

The suckling rats were maintained with their mothers in stainless steel cages. The weaning rats were kept on a standard laboratory diet and tap water ad libitum throughout the experiment.

Creation of experimental models

All the animals were operated on by the same two surgeons (UT and ACAT) wearing surgical telescopes (magnification 3.5X) and using microsurgical instruments. The surgical procedures were performed under sterile conditions between 9:00 AM and 10:00 AM, due to the circadian rhythm of liver regeneration. Ether-soaked gauze was kept near the animals' noses to induce and maintain anesthesia. This type of anesthesia is considered safe for small animals subjected to short surgical procedures. Following a 1 cm midline incision, the upper abdomen and the lateral lower portions of both hemi-thoraces were compressed to exteriorize the liver. Consequently, adequate mobilization and exposure of the liver could be attained without dividing the ligaments of the right and left lobes (Fig. 1). The liver parenchyma was not touched, because of the danger of injuring the viscera and of bleeding. A 2-0 cotton thread knot, surrounding the hilum and also the hepatic vein, was tied, and the right medial, left medial and left lateral lobes were resected en bloc (Fig.
2 and 3).Because the rat liver is lobulated, the hilum of these lobes could be safely ligated without involving the vasculature of the remnant lobes.The abdomen was closed with a single-layer running suture using 6-0 prolene.Following surgery, the suckling animals were returned to their mothers, and the weaning animals were fed regular diets and water ad libitum.The animals were sacrificed 0 (just after the hepatectomy), 1, 2, 3, 4 or 7 days after the operation, under ether anesthesia, and the body weights were determined.A midline single abdominal and thoracic incision was performed to harvest the remnant liver lobes, which were then weighed and fixed in 10% neutral buffered formalin for routine histology.Groups of normal non- Experimental models of hepatectomy and liver regeneration using newborn and weaning rats Tannuri ACA et al. operated weaning and suckling animals served as controls. Histological analysis Qualitative histological examination was performed in 4-µm thick sections of all liver samples (x 300, 600 and 1500).Lobular architecture, as well as presence of mitotic figures, apoptotic bodies and steatosis, was evaluated. Statistical analysis The mortality rates of the groups were expressed as percentages and were compared using the Fisher test.The other results were expressed as means ± SD.For statistical purposes, ANOVA and Bonferroni tests were employed (liver weights/body weight ratios presented a parametric distribution).P < 0.05 was considered significant. Mortality rates Twelve deaths occurred in the newborn group (24%).The causes were anesthetic complications, cut surface bleeding and maternal cannibalism.There were no deaths in the weaning group.The comparison of mortality rates of groups demonstrated a significant difference (P = 0.003). Liver weight/body weight ratios following partial hepatectomy To evaluate growth of the remnant liver following PH, the liver weight/body weight ratio was calculated on 0, 1, 2, 3, 4 and 7 days after hepatectomy (Fig. 4). There was a significant decrease in liver mass just after PH (P < 0.0001 for both newborn and weaning groups).On the seventh day, the liver weight was completely recovered in both groups (P > 0.05). Pathological analysis Liver parenchyma of control newborn rats showed a typical sinusoidal architecture and several extramedullary hematopoietic foci (Fig. 5-1).Mitotic figures and apoptotic bodies were absent. On the first day after PH, hepatocyte nuclei of suckling animals were increased in size, with vesicular bodies and prominent nucleoli, but no mitosis was detected.On the second day, entire lobules were characterized by macrovesicular steatosis, which markedly decreased on the following day, when a few mitotic figures could be observed.Apoptotic bodies were rarely seen.On the fourth day, hepatocyte mitosis was still observed, and fatty infiltration was further diminished.Finally, on the seventh day after PH, the histological aspect of the liver was similar to controls (Fig. 5-2 to 6). 
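For readers who wish to reproduce the style of analysis described above, a minimal Python sketch follows. The mortality counts are taken from the text (12 deaths among 50 newborns versus 0 among 44 weaning rats; note the abstract quotes 30% rather than 24% newborn mortality, so the exact table behind the reported P = 0.003 may differ), while the liver-weight/body-weight ratios are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Fisher exact test on mortality: [deaths, survivors] per group (counts from the text)
table = [[12, 38],   # newborn group (n = 50)
         [0, 44]]    # weaning group (n = 44)
_, p_fisher = stats.fisher_exact(table)
print(f"Fisher exact test, two-sided P = {p_fisher:.4f}")

# One-way ANOVA across post-hepatectomy days, followed by Bonferroni-corrected
# pairwise t-tests against controls (placeholder ratios, % of body weight).
ratios = {
    "control": np.array([4.1, 4.3, 4.0, 4.2, 4.4, 4.1]),
    "day0":    np.array([1.3, 1.4, 1.2, 1.3, 1.5, 1.4]),
    "day7":    np.array([4.0, 4.2, 4.1, 4.3, 4.0, 4.2]),
}
f_stat, p_anova = stats.f_oneway(*ratios.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

comparisons = [k for k in ratios if k != "control"]
alpha_corrected = 0.05 / len(comparisons)            # Bonferroni correction
for k in comparisons:
    _, p = stats.ttest_ind(ratios["control"], ratios[k])
    print(f"control vs {k}: P = {p:.4f} (significant if < {alpha_corrected:.4f})")
```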
Liver parenchyma of weaning control animals displayed similar architecture to the newborn animals, but no hematopoetic foci were evident (Figure 6-1).On the first day after PH, their nuclei became vesicular with prominent nucleoli, and different phases of hepatocyte mitosis were observed throughout the lobule.On the second day, an increased number of hepatocyte mitoses were observed, although no steatosis was detected.Apoptotic bodies were rarely observed.From the third day, the number of hepatocyte mitoses decreased and could not be detected on the seventh day, when parenchymal architecture was completely recovered (Fig. 6-2 to 6). Figures 7 and 8 highlight hepatocyte mitosis and an apoptotic body, respectively. DISCUSSION Despite the widespread use of in vivo models for biological phenomena studies, research on growing animals is rare.Models of pancreatic beta-cells regeneration in neonatal streptozotocin-treated rats 12 and wound-healing studies in genetically modified newborn rats are rare examples of studies in growing animals. 13nesthesia, respiratory depression, and frail and small sized organs, together with maternal cannibalism, represent severe difficulties in experiments with growing animals, especially newborns.Indeed, there was a significantly Experimental models of hepatectomy and liver regeneration using newborn and weaning rats Tannuri ACA et al. higher mortality rate in newborn animals relative to weaning animals (P = 0.003).Despite this fact, the creation of a newborn experimental model of hepatectomy and liver regeneration in rats (age 5-7 days) is important because these animals correspond to children weighing less than 5 kg.With the development and refinement of surgical techniques and microsurgical anastomoses, a series of liver transplants in such babies have been described and performed in centers throughout the world. 14,15As a result, we conclude that learning about hepatic regeneration and remodeling mechanisms in newborns is of significant importance. The weaning model resembles infants who have been submitted to partial liver transplantation at age 1-year, secondary to biliary atresia, which is the most common indication for hepatic transplantation in the pediatric population. 11Patients without biliary drainage after Kasai's procedure and non-operated children develop rapidly progressive cirrhosis, which necessitates liver transplantation within 6 to 18 months. 16ecause it is technically difficult to create experimental models of liver transplantation using small growing animals, we developed the present experimental models to study molecular histomorphological and immunohistochemical mechanisms of liver regeneration.Although these models do not include liver transplantation, data obtained can be transposed to all conditions of liver parenchyma regeneration or liver size remodeling. The daily assessment of remnant liver weight showed a gradual increase in hepatic mass from the first post-operative day until complete recovery by the seventh day.These results are similar to descriptions in adult rat models. 4,5,7However, newborn animals exhibited a sharp increase in liver weight from the first to the second day after hepatectomy.Histological examination revealed intense fat accumulation in the liver parenchyma, resulting in weight gain.In addition, the increased number of hepatocyte mitoses observed after the third day reflects the high proliferative activity of liver cells during this phase. During the early period of regeneration, the liver accumulates fat. 
17 Neither the mechanisms responsible for nor the functional significance of transient steatosis have been determined. In the current investigation, we observed that steatosis was more prominent in the newborn rat livers as compared to the weaning rat livers. Interestingly, there are no descriptions of such fat accumulation in adult rat models of hepatectomy and liver regeneration. It is likely that the immaturity of the enzymatic systems of newborn hepatocytes results in insufficient fat metabolism in the face of the increased metabolic demand placed on the remnant liver parenchyma.

Serial pathological analyses revealed that hepatocyte mitoses were more evident and appeared earlier in the weaning animals than in the newborn animals. Therefore, the initial liver weight gain in weaning animals was due to cellular proliferation, not steatosis; likewise, the proliferative activity of newborn hepatocytes, although slower, resulted in complete recovery of liver mass by the seventh day.

CONCLUSIONS

The present investigation demonstrates that suckling and weaning rat models of partial hepatectomy are feasible and can be used to study liver regeneration. Serial weight and histological analyses revealed that, although similar, the process of hepatic regeneration in growing animals is different from that in adult animals, which highlights the need for a model to study this process in young, growing organisms. The models created and standardized in the present research will enable further elucidation of the mechanisms involved in liver regeneration, as well as the development of therapeutic interventions in this complex phenomenon.

Figure 1 - Partial hepatectomy in newborn rat: liver, stomach and guts exteriorized by compression of the upper abdomen and the lateral inferior portions of the hemithorax bilaterally.

Figure 2 - A cotton ligature was passed between the liver and stomach.

Figure 3 - Schematic illustration of the parenchymal resection; detailed lateral view. The dark area indicates the parenchyma to be resected.

Figure 4 - Changes in the ratio of the remnant liver wet weight relative to body weight at varying timepoints after hepatectomy (for each group, n = 6-8 animals). Values are means ± SEM. (* significant; ** nonsignificant in comparison to C - control group).

Figure 7 - Photomicrograph of a weaning rat liver on the second day after PH, with hepatocyte mitoses (arrow) (original magnification X 1500, under oil immersion).

Figure 8 - Photomicrograph of a weaning rat liver on the first day after PH, with an apoptotic body. Note the eosinophilic cytoplasm and dense nuclear fragments (arrow) (original magnification X 1500, under oil immersion).
2017-06-17T04:55:53.962Z
2007-01-01T00:00:00.000
{ "year": 2007, "sha1": "e9f55f076f8134118996654c3e6826a9a5199518", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1590/s1807-59322007000600016", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "e9f55f076f8134118996654c3e6826a9a5199518", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
267034927
pes2o/s2orc
v3-fos-license
Giant enhancement of vacuum friction in spinning YIG nanospheres Experimental observations of vacuum radiation and vacuum frictional torque are challenging due to their vanishingly small effects in practical systems. For example, a nanosphere rotating at 1GHz in free space slows down due to friction from vacuum fluctuations with a stopping time around the age of the Universe. Here, we show that a spinning yttrium iron garnet (YIG) nanosphere near aluminum or YIG slabs generates vacuum radiation with radiation power eight orders of magnitude larger than other metallic or dielectric spinning nanospheres. We achieve this giant enhancement by exploiting the large near-field magnetic local density of states in YIG systems, which occurs in the low-frequency GHz regime comparable to the rotation frequency. Furthermore, we propose a realistic experimental setup for observing the effects of this large vacuum radiation and frictional torque under experimentally accessible conditions. I. INTRODUCTION The physics of rotating nanoparticles is gaining more attention as recent technological advancements provide experimental platforms for rotating levitated nanoparticles at GHz speeds [1][2][3][4][5][6][7][8].Besides having implications in the fields of quantum gravity [9], dark energy detection [10], and superradiance [11], rotating nanoparticles are crucial for studying the effects of quantum vacuum fluctuations [12][13][14][15][16][17].Rotating nanoparticles can emit real photons and experience frictional torques from the fluctuating quantum vacuum even at zero temperature [18,19].Although Casimir forces between static objects have been measured extensively [20][21][22], the experimental sensitivity is only starting to reach the limit needed to measure the frictional torque exerted on rotating nanoparticles from the vacuum [23].Meanwhile, direct observation of vacuum radiation from rotating nanoparticles remains challenging due to the extremely low number of radiated photons. In the specific case of moving media or rotating particles, a unique regime of light-matter interaction occurs when the material resonance frequency becomes comparable to the mechanical motion frequency [24][25][26].In particular, a giant enhancement or even a singularity is possible in vacuum fluctuation effects [24][25][26].Recently, world record rotation frequencies were achieved for optically levitated nanospheres [2,3,6].This immediately opens the question of whether unique material resonances comparable to this rotation frequency can help enter a new regime of light-matter interaction.Here, we show that gyromagnetic yttrium iron garnet (YIG) exhibits the magnon polariton resonance at GHz frequen- * zjacob@purdue.educies [27,28] comparable to the levitated nanoparticle's rotation frequency, providing a unique opportunity for enhancing vacuum fluctuation effects on rotating nanoparticles. In this article, we put forth an approach to enhance and observe the vacuum radiation and frictional torques by leveraging a YIG nanosphere spinning at Ω = 1 GHz in the vicinity of a metallic or YIG interface.Our proposal exploits an asymmetry between the electric and magnetic local density of states (LDOS) which was previously reported in Ref. 
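Reading the expression above as P_rad = ∫₀^∞ dω ℏω [Γ_H(ω) − Γ_H(−ω)], the short Python sketch below integrates the net photon flux numerically. The spectral-density function gamma_H, the toy Lorentzian used to exercise it, and the frequency grid are placeholders we supply for illustration, not quantities from the paper.

```python
import numpy as np

hbar = 1.054571817e-34  # J*s

def radiated_power(gamma_H, omega_max, n_points=200_000):
    """P_rad = integral_0^inf d(omega) hbar*omega*[Gamma_H(omega) - Gamma_H(-omega)].

    gamma_H   : callable, spectral density per unit angular frequency
    omega_max : upper cutoff (rad/s); choose it well above the magnon resonance
                and the rotation frequency so the integrand has decayed.
    """
    omega = np.linspace(0.0, omega_max, n_points)[1:]     # skip omega = 0
    integrand = hbar * omega * (gamma_H(omega) - gamma_H(-omega))
    return np.trapz(integrand, omega)

# Toy Lorentzian spectral density peaked near the 1 GHz rotation frequency (illustrative only)
Omega = 2 * np.pi * 1e9
toy_gamma = lambda w: 1e-12 / (1.0 + ((w - Omega) / (0.1 * Omega)) ** 2)
print(radiated_power(toy_gamma, omega_max=20 * Omega))
```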
[29].In particular, near conventional metals, the electric LDOS is enhanced at optical frequencies, whereas the magnetic LDOS becomes dominant at GHz frequencies.Therefore, our proposal exploits magnetic materials with magnon polaritons to enhance the magnetic local density of states beyond those of conventional plasmonic metals.Due to the large magnetic LDOS and YIG magnetic resonance at GHz frequencies, the fluctuating magnetic dipoles of the YIG nanosphere can strongly couple to a large density of evanescent waves in the near-field of metallic and magnetic interfaces, leading to colossal vacuum radiation. We demonstrate that a spinning YIG nanosphere generates vacuum radiation eight orders of magnitude larger than other metallic or dielectric nanospheres in the vicinity of a metallic or magnetic slab.We show that, near magnetic materials, most of this radiated energy can be transferred to surface magnon polaritons.Furthermore, we reveal that the large vacuum radiation and vacuum friction have experimentally observable effects on the balance rotation speed, stopping time, and balance temperature of the spinning YIG nanospheres under experimentally accessible rotation speeds, particle sizes, temperatures, and vacuum pressures.Therefore, the setup proposed in this article based on spinning YIG nanospheres represents a unique tool for detecting and analyzing vacuum radiation and frictional torques. II. GIANT VACUUM RADIATION FROM SPINNING YIG NANOSPHERES We first consider the vacuum radiation from a spinning YIG nanosphere with a radius of 200 nm, as illustrated in Fig. 1(a, b).A stationary nanosphere at the equilibrium temperature exhibits zero net radiation since the number of photons emitted by the fluctuating dipoles of the nanosphere is equal to the number of photons absorbed by the nanosphere from the fluctuating electromagnetic fields in the vacuum.However, for rotating nanospheres, the balance between the emitted and absorbed photons is broken.A net radiated power from the nanosphere arises even at zero temperature due to the extra boost of mechanical rotational energy [30].The source of this vacuum radiation energy is the non-inertial motion of the nanosphere, which is transferred to generate real photons from vacuum fluctuations [19].Based on fluctuational electrodynamics (see derivations in Appendix A), we find the total radiated power from a spinning YIG nanosphere P rad = ∞ 0 dωℏω Γ H (ω) − Γ H (−ω) can be determined from Γ H (ω), which is the spectral density of the radiation power arising from magnetic dipole fluctuations. In the absence of any interface, vacuum radiation from a spinning YIG nanosphere does not exhibit any substantial enhancement.However, metallic or magnetic interfaces can drastically change this observation.Metallic nanospheres are known to possess higher radiation rates compared to dielectric nanospheres near material interfaces [30,31].Here, we observe that magnetic nanospheres exhibit even larger radiation rates, which are about eight orders of magnitude compared to metallic nanospheres near metallic or magnetic interfaces, as shown in Fig. 1(c, d).We demonstrate that radiated photons per second per frequency expressed through Γ H (ω) − Γ H (−ω) from spinning YIG nanospheres (blue curves) are much more than those from the aluminum nanospheres (orange curves) near Al interfaces (Fig. 1(c)) and YIG interfaces (Fig. 
1(d)).Furthermore, we find that a spinning YIG nanosphere radiates about 6 femtowatts of power, in stark contrast to the Al sphere, which radiates about 6 × 10 −7 femtowatts near Al interfaces (Fig. 1(c)).In the vicinity of YIG interfaces (Fig. 1(d)), we find about 61.3 femtowatts and 4.63 × 10 −7 femtowatts of radiated power from YIG and Al nanospheres, respectively.The radiated energy mostly goes into the lossy surface waves in both metallic and magnetic materials [32].However, if the magnetic material is properly biased, as is the case studied here with a bias magnetic field of 812 Oe for the YIG slab, the magnetic resonance in the magnetic slab can become resonant with the magnetic resonance in the magnetic sphere.As a result, most of the radiated energy is transferred to surface magnon polaritons.These results clearly show the advantage of YIG over Al nanospheres for probing vacuum radiation. The above results are explained by the YIG magnon polariton resonance at GHz frequencies and differences in the low-frequency electric and magnetic LDOS near metallic and magnetic interfaces.Vacuum fluctuation effects on rotating nanoparticles can be significantly enhanced when the rotation frequency is comparable to resonance frequencies.In addition, as shown by Joulain et al. [29], LDOS near metals is dominated by the magnetic LDOS at wavelengths above a few microns.Here, we extend this observation to magnetic materials and take into account the effects of non-local electromagnetic response in Al [32] (also see Appendix F).Higher magnetic LDOS than electric LDOS at low frequencies originates from differences in the reflection of the sand p-polarized evanescent waves.The near-field electric LDOS is mainly influenced by p-polarized evanescent waves since their contributions to the electric LDOS are strongly momentumdependent and dominate the high momentum contributions crucial for near-field LDOS.In contrast, the opposite is true for the near-field magnetic LDOS, and the contributions from the s-polarized evanescent waves dominate.At GHz frequencies, the imaginary part of the reflection coefficient for evanescent s-polarized waves is much larger than that for evanescent p-polarized waves.Thus, the s polarization contributes more to the LDOS than the p polarization, leading to a more dominant magnetic LDOS near metallic and magnetic interfaces.These near-field LDOS can be further enhanced by material resonances [24-26, 33, 34]. To this end, we discuss the spectral density Γ H (ω) that determines the vacuum radiation.Through a similar approach as the methods used by Abajo and Manjavacas [18], our result for the radiation spectral density Γ H (ω) of a spinning gyromagnetic nanosphere due to magnetic dipole fluctuations is (see derivations in Appendix A): where , g H ⊥,2 are the two components of the magnetic Green's function in the plane of the interface (the xx and zz components for the setup shown in Fig. 1(b)), g H ∥ is the component normal to the interface (the yy component here), and g H g,2 is the off-diagonal component between the in-plane and normal directions (the xy component here), all normalized by πωρ 0 /8.α m,⊥ (ω), α m,g (ω), and α m,∥ (ω) are the xx (or yy), xy, and zz components of the YIG nanosphere magnetic polarizability tensor in the rotating sphere frame (see Appendix D for derivations).Ω is rotating frequency of the nanosphere and ω − = ω − Ω. 
n 1 (ω) and n 0 (ω) are the Bose-Einstein distribution functions pertinent to the sphere temperature T 1 and the environment temperature T 0 , respectively.Detailed derivations for all these quantities and discussions of various YIG interface orientations and bias magnetic field directions are provided in Appendix B. When the sphere is stationary ω − = ω, and the sphere temperature is equal to the temperature of the environment T 1 = T 0 , the terms n 1 (ω − )−n 0 (ω) and n 1 (ω)−n 0 (ω) become zero; thus, the radiation becomes zero as expected. Here, we emphasize one important aspect of Γ H (ω) regarding the rotation-induced magnetization of the YIG nanosphere, which can occur without any external magnetic field.This is known as the Barnett effect and originates from the conservation of angular momentum, where the mechanical angular momentum of the sphere is transferred to the spin of the unpaired electrons in the magnetic material [35].Assuming the magnetic field is parallel to the rotation axis, the Larmor precession frequency ω 0 of the electrons inside the sphere is [36] (also see Appendix E): for the electron gyromagnetic ratio γ, vacuum permeability µ 0 , and applied external magnetic field H 0 .We incorporate this effect on ω 0 to find the magnetic response of the spinning YIG nanosphere. III. ENHANCEMENT OF VACUUM FRICTIONAL TORQUE We now discuss the vacuum frictional torque exerted on the rotating YIG nanosphere in the vicinity of YIG and Al interfaces.We use a similar approach to find the vacuum torque exerted on the spinning gyromagnetic YIG sphere due to magnetic dipole and magnetic field fluctuations (detailed derivations are provided in Appendix G).The torque along the axis of rotation is given by , where the expression for Γ H M (ω) is similar to the expression for Γ H (ω) in Eq. ( 1), with the difference being that the last term on the second line is not present in Γ H M (ω) (see Appendix G).Additionally, we find that other components of the torque (M x and M y components) are not necessarily zero in the vicinity of the YIG interface, in contrast to the Al slab.Due to the anisotropy of the YIG slab, M x and M y do not vanish for some directions of the bias magnetic field.We provide further discussions of these cases in the supplementary material. In Fig. 2, we compare vacuum torques exerted on spinning YIG nanospheres (Fig. 2(a, c)) and spinning Al nanospheres (Fig. 2(b, d)), on nanospheres spinning in the vicinity of YIG slabs (Fig. 2(a, b)) and Al slabs (Fig. 2(c, d)), as well as on nanospheres spinning in the vicinity of slabs (solid colored curves) and spinning in vacuum (dashed black curves).We demonstrate that vacuum torques exhibit more than 10 orders of magnitude enhancement in the vicinity of YIG and Al slabs compared to the vacuum, and about 4 orders of magnitude enhancement due to employing YIG nanospheres instead of Al nanospheres.These results unravel the advantage of utilizing YIG nanospheres for probing vacuum frictional torques at GHz frequencies.In Fig. 
2, we consider nonlocal electromagnetic response [32] for Al interfaces and incorporate effects from the magnetic and electric dipole and field fluctuations on vacuum torques.We notice that the vacuum torque is dominated by magnetic rather than electric fluctuations in all cases (see Appendix G).In addition, we have taken into account the effect of recoil torque [37] -the torque exerted on the sphere due to the scattering of vacuum field fluctuations off the particle.As discussed in Appendix G, we find that effects from this second-order torque are negligible compared with the effects of magnetic fluctuations in the studied cases. IV. OBSERVABLE OUTCOMES OF GIANT VACUUM FRICTION IN SPINNING YIG NANOSPHERES The observable effects of the colossal vacuum radiation and frictional torques come down to changes in experimentally measurable parameters when the spinning nanosphere is brought closer to the vicinity of Al/YIG interfaces.In Fig. 3(a), we show the proposed experimental setup for this observation where a YIG nanosphere is trapped inside an Al or YIG ring.We note that the size of the ring is much larger than that of the nanosphere, and it does not lead to any resonant behavior.However, for smaller ring sizes, LDOS can be further enhanced compared to the slab interface case due to the presence of interfaces on all sides. We evaluated some observable experimental outcomes due to large vacuum radiation and friction.This analysis is based on the experimentally accessible parameters from Refs.[3,38,39].In Fig. 3(b), we show the balanced rotation speed Ω b of the spinning nanosphere normalized by the rotation speed Ω 0 in the absence of any interface as a function of distance d from the interface.The balance rotation speed is defined as the sphere's stable, perpetual rotation speed and occurs when the driving force due to the laser is equal to the drag force due to the vacuum chamber.In the absence of any interface, due to the negligible value of vacuum radiation, the balance rotation speed Ω 0 is obtained when the torque from the trapping laser balances the frictional torque from air molecules in the imperfect vacuum [3] (also see Apeendix H).We assume the laser driving torque is constant and the drag force from air molecules has a linear dependence on rotational speeds [3].In Fig. 3(b), we show that the balance rotation speed of the YIG nanosphere is reduced when it is closer to Al (blue curve) or YIG (pink curve) interfaces, as a result of the large frictional torques from vacuum fluctuations.Remarkably, we notice that there is no observable change in the balance speed for spinning Al nanospheres in the vicinity of Al or YIG interfaces (red curve). In Fig. 3(c, d), we further demonstrate outcomes of the large vacuum radiation in other experimental observables, such as the stopping time as a function of distance (Fig. 3(c)) and the balance temperature as a function of the vacuum temperature T 0 (Fig. 3(d)).Stopping time is the time constant of the exponential decrease of the nanosphere rotation velocity after the driving torque is turned off.The torque can be switched off by changing the polarization of the trapping laser from circular to linear without having to switch off the trapping laser.The balance temperature refers to the nanosphere temperature T s , at which the loss of mechanical rotational energy due to vacuum frictional torque stops heating the nanospheres.As shown in Fig. 
3(c, d), YIG nanospheres exhibit distinct behaviors in the stopping time and balance temperature compared to Al nanospheres near YIG and Al interfaces. The results of Fig. 3 show that the vacuum radiation and frictional torque can be experimentally measured through the balance speed, balance temperature, and stopping time of the YIG nanosphere.In stark contrast, the Al nanosphere (or any other metallic nanospheres) may not experience enough vacuum friction to exhibit observable outcomes unless it is in a sensitive setup with very low vacuum pressure [3,23]. V. DISCUSSION AND CONCLUSION Our results show that due to YIG magnon polariton resonance and the dominance of magnetic LDOS over electric LDOS in the vicinity of metallic or magnetic materials at GHz frequencies, spinning YIG nanospheres can exhibit orders of magnitude larger vacuum radiation and frictional torque compared to any metallic or dielectric nanosphere.By investigating the case of a YIG nanosphere spinning at 1 GHz speed, we have shown that the effect of colossal vacuum fluctuations can be observed in an experimentally accessible setup.Our results set a new perspective for observing and understanding radiation and frictional torques from vacuum fluctuations.Furthermore, our discussions of magnetic LDOS near YIG interfaces under various bias fields pave the way for magnetometry [40] and spin measurement [41] applications. In this appendix, we provide detailed derivations of the radiation power P rad from a spinning YIG nanosphere and its spectral density Γ H (ω) due to magnetic fluctuations.Using an approach similar to that taken by Abajo et.al [18,30], we can write the radiated power due to the magnetic fluctuations of dipoles and fields as, where H ind is the induced magnetic field due to the magnetic dipole fluctuations m fl of the particle and m ind is the induced magnetic dipole in the particle due to the fluctuations of the vacuum magnetic field H fl .Note that all of these quantities are written in the lab frame.For the sphere spinning at the rotation frequency Ω, we can write, where the primed quantities are written in the rotating frame.Performing a Fourier transform as m ′ fl (t) = dω 2π e −iωt m ′ fl (ω), we can write in the frequency domain where ω ± = ω ± Ω.We can similarly write for the magnetic fields Thus, using the fact that, with being the magnetic polarizability tensor of the YIG sphere biased along the z axis, we find in the lab frame where Note that we have used an expression similar to Eq. (A3) but written for the induced magnetic dipole moments.Expression for α m,⊥ (ω) and α m,g (ω) are given in Appendix D. Using the fluctuation-dissipation theorem (FDT) [42], with ) defined as the equal-frequency magnetic Green's function of the environment defined through the equation, we find the second term in Eq. (A1) employing Eqs.(A4) and (A5): where n 0 (ω) = 1/(e ℏω/k B T0 − 1) is the Planck distribution at the temperature of the lab T 0 .Writing FDT for the fluctuating dipoles, we find the first term in Eq. (A1) employing Eq. ( A3) and where n 1 (ω) is the Planck distribution at the sphere temperature T 1 .Taking the inverse Fourier transform, adding Eqs.(A11) and (A13), taking the real part of the radiated power, and changing integral variables, we find In this derivation, we have used the property α m (−ω) = α * m (ω).The expressions for Green's functions in different YIG and aluminum interface arrangements are given in Appendix B. Plugging these expressions into Eq.(A14), we obtain Eq. (1) in the main text. 
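The s- versus p-polarization asymmetry invoked in Sec. II can be sanity-checked with ordinary isotropic Fresnel coefficients before dealing with the anisotropic slab of Appendix B. The sketch below uses a simple local Drude model for Al with parameter values we assume for illustration (the paper itself uses the non-local SCIB model of Appendix F) and compares Im r_ss with Im r_pp for an evanescent wave at 1 GHz.

```python
import numpy as np

# Local Drude permittivity for Al (illustrative parameters, not the paper's non-local model)
wp, gamma_d = 2.24e16, 1.22e14           # plasma and collision frequencies, rad/s (assumed)
omega = 2 * np.pi * 1e9                   # 1 GHz
eps = 1 - wp**2 / (omega * (omega + 1j * gamma_d))

c = 2.998e8
k0 = omega / c
kappa = 50.0                              # normalized in-plane wavevector k_par/k0 >> 1 (evanescent)
kz  = k0 * np.sqrt(complex(1 - kappa**2))         # normal wavevector in vacuum
kz1 = k0 * np.sqrt(eps - kappa**2)                # normal wavevector inside the metal

r_ss = (kz - kz1) / (kz + kz1)
r_pp = (eps * kz - kz1) / (eps * kz + kz1)
print(f"Im r_ss = {r_ss.imag:.3e},  Im r_pp = {r_pp.imag:.3e}")
# Im r_ss exceeds Im r_pp by orders of magnitude at GHz, which is why the magnetic
# near-field LDOS (fed by s-polarized evanescent waves) dominates the electric one.
```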
Appendix B: Green's Function Near an Anisotropic Magnetic Material In this appendix, we provide the Green's function near a half-space of magnetic material, which would change due to the anisotropy of the material.We study two cases when the interface is the x − y plane and x − z plane, as shown in Fig. 4. We can write the electric and magnetic fields in the vacuum as where ŝ± , p± , and k± /k 0 form a triplet with and , and k 0 k z is the z component of the wavevector.Similarly, we can write the electric and magnetic fields inside the magnetic material as where Note that κ is the same in the two media due to the boundary conditions.Also k′ ± × p′ ± = ŝ′ ± .We can write Maxwell's equations in the magnetic material in matrix form as [43] where Setting the det(M + M k ) = 0 we get the solutions for k ′ z in terms of κ and ϕ [43].From these solutions and applying the boundary conditions, we can find the values of r ss , r sp , r ps , r pp for a given κ and ϕ.Note that different bias directions for the magnetic field of the YIG slab change the μ tensor and thus change the reflection coefficients r ss , r sp , r ps , r pp . In the following, we first provide the expression for the magnetic dyadic Green's function ḠH for a source at z ′ = d when the interface is in the x − y plane (Fig. 4(a)).Here, we take the spinning sphere to be at the origin to simplify the derivations and move z = 0 to z ′ = d.This would not change the Fresnel reflection coefficients.The incident magnetic Green's function at the location of the source is thus, The reflected magnetic Green's function at the location of the source is where k x = κ cos ϕ and k y = κ sin ϕ.Note that here the Fresnel reflection coefficients generally depend on the incidence angle ϕ.For the special case of magnetization along the z axis, they become independent of ϕ.Using Eq. (B4) and dropping the terms that vanish after integration over ϕ, we can write the total magnetic Green's function at the location of source as, (B9) Note that the electric Green's function can be obtained by changing r ss to r pp , r pp to r ss , r ps to r sp and r sp to r ps and dividing by ϵ 0 .In general, the non-diagonal parts of the Green's function are not zero.Using this equation, we find, Re where ρ 0 = ω 2 /π 2 c 3 is the vacuum density of states and, +p sin ϕ cos ϕRe e 2ik0pd (r ps − r sp ) Plugging Eq. (B10) into Eq.(A14), we find, with, For the case when the YIG interface is the x − z plane (Fig. 4(b)), we find the radiated power by exchanging the axes x → ẑ, ŷ → x, and ẑ → ŷ in Eq. (B9).In this case, we have where g H ⊥,1 , g H ⊥,2 , and g H ∥ given by Eq. (B11).For the xy and yx component of the Green's function, however, we get and thus we have for the case when the YIG interface is the x − z plane, with g H ⊥,1 , g H ⊥,2 , and g H ∥ given by Eq. (B11) and g H g,2 by Eq. (B16).This is the same as Eq. ( 1) in the main manuscript. Appendix C: Dominance of Magnetic Local Density of States Although the expressions found in the previous sections for the radiated power P rad are not, in general, exactly proportional to the local density of states (LDOS), they are proportional to terms of the same order as the LDOS.The expression for LDOS is given by [29], where the Tr represents the trace operator.Using the expressions of the previous section, it is easy to see that the LDOS at the location of the nanosphere is given by, where the expressions for g H ⊥,1 , g H ⊥,2 , and g H ∥ are given by Eq. 
(B11) and the expression for the electric Green's functions are found from the magnetic ones by replacing s → p and p → s and dividing by ϵ 0 .As discussed before, the magnetic Green's functions are about eight orders of magnitude larger than the electric ones at GHz frequencies, and thus, the LDOS is dominated by the magnetic LDOS.This shows that the magnetic field fluctuations dominate the vacuum radiation, vacuum torque, and LDOS simultaneously. Appendix D: Magnetic Polarizability Tensor of YIG In the appendix, we provide derivations of the YIG polarizability tensor.We consider the Landau-Lifshitz-Gilbert formula to describe the YIG permeability tensor [36], where and ω 0 = µ 0 γH 0 is the Larmor precession frequency with γ being the gyromagnetic ratio and H 0 the bias magnetic field (assumed to be along ẑ direction), ω m = µ 0 γM s with M s being the saturation magnetization of the material, and α is the YIG damping factor related to the width of the magnetic resonance through ∆H = 2αω/µ 0 γ.In the main text, we considered M s = 1780 Oe and ∆H = 45 Oe [36] in our calculations. When the magnetic field is reversed (along −ẑ direction), we can use the same results by doing the substitutions which gives Using the method in Ref. [44] for the polarizability tensor of a sphere with arbitrary anisotropy, we find the polarizability tensor of YIG with the permeability tensor described by Eq. (D1), Therefore the magnetic polarizability terms in Eqs.(B13) and (B17) are given by, where µ ⊥ and µ g are frequency dependent terms give by Eq. (D2). It is important to note that magnetostatic approximation has been assumed in the derivation of the magnetic polarizability.This is similar to the electrostatic approximation used for the derivation of the electric polarizability [45], where, using the duality of electromagnetic theory, the electric fields and electric dipoles have been replaced by the magnetic fields and magnetic dipoles.In this approximation, the fields inside the sphere are assumed to be constant. One can apply the Mie theory to find the magnetic polarizability to the first order in the Mie scattering components.This, however, is mathematically challenging due to the anisotropy of the magnetic material.For the purpose of our study, the magnetostatic assumption is enough to find the polarizability properties of YIG since the size of the sphere is much smaller compared to the wavelength, and the polarizability is dominated by the magneto-static term. For metals, however, higher order terms are important for finding the magnetic polarizability since the magnetostatic terms are zero and only higher order terms due to electric dipole fluctuations give rise to the magnetic polarizability of metals [30].We provide derivations based on Mie theory for the polarizability constant of an aluminum particle in Section S1 in the supplementary material. 
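As a numerical companion to Appendix D, the sketch below evaluates what we believe to be the standard Polder (Landau-Lifshitz-Gilbert) permeability components µ_⊥ and µ_g of biased YIG, using the parameters quoted in the text (M_s = 1780 Oe, ∆H = 45 Oe, H_0 = 812 Oe) and a textbook value of the gyromagnetic ratio. Since the explicit form of Eq. (D1) is garbled above, this is our reconstruction of the conventional tensor, not a verbatim copy of the paper's expressions.

```python
import numpy as np

gamma = 2 * np.pi * 28e9      # electron gyromagnetic ratio, rad/(s*T) (textbook value, assumed)
Oe_to_T = 1e-4                # treat the quoted Oe values as B-fields in tesla

H0_T, Ms_T, dH_T = 812 * Oe_to_T, 1780 * Oe_to_T, 45 * Oe_to_T

def polder_mu(omega, Omega_rot=0.0):
    """Diagonal (mu_perp) and gyrotropic (mu_g) permeability components of biased YIG.

    The Larmor frequency includes the Barnett shift Omega_rot of Appendix E.
    The damping alpha is chosen so the linewidth matches Delta_H at this omega.
    """
    w0 = gamma * H0_T + Omega_rot          # omega_0 = mu0*gamma*H0 (+ Omega), with H as B in tesla
    wm = gamma * Ms_T                      # omega_m = mu0*gamma*Ms
    alpha = gamma * dH_T / (2 * omega)     # from Delta_H = 2*alpha*omega/(mu0*gamma)
    w0c = w0 + 1j * alpha * omega          # damped resonance frequency
    mu_perp = 1 + wm * w0c / (w0c**2 - omega**2)
    mu_g    = wm * omega / (w0c**2 - omega**2)
    return mu_perp, mu_g

# Evaluate at the 1 GHz rotation frequency, including the Barnett shift
print(polder_mu(2 * np.pi * 1e9, Omega_rot=2 * np.pi * 1e9))
```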
Appendix E: Barnett Effect In the simplest models of magnetic materials, electrons are assumed to be magnetic dipoles with the moments µ B spinning about the magnetization axis determined by the applied magnetic field H 0 with the Larmor precession frequency ω 0 = µ 0 γH 0 , where γ is the gyromagnetic ratio of the material [36].Barnett showed that the spontaneous magnetization of a material with the magnetic susceptibility of χ is given by [35] where Ω is the rotation frequency of the magnetic material.This magnetization can be assumed to be caused by an applied magnetic field H rot which is We thus get the Larmor frequency due to rotation, Therefore, the Larmor frequency of a spinning magnetic material is the same as the rotation frequency.We thus can write the total Larmor frequency of spinning YIG as We use this expression to find the permeability tensor of a spinning YIG nanosphere discussed in Appendix D. Appendix F: Non-local Model for Aluminum Since the sphere is spinning in close proximity to material interfaces, the non-local effects in aluminum electromagnetic response can become important.Here, we employ the non-local Fresnel reflection coefficients from Ref. [46]. where p = √ 1 − κ 2 , and with the longitudinal and transverse dielectric permittivities given by with k 2 = (ω/c) 2 q 2 + κ 2 , u = (ω + iΓ)/(kv F ), and These expressions give the non-local reflection coefficients at a metallic interface for the semi-classical infinite barrier (SCIB) model.The SCIB model is accurate as long as z = k 2k F ∼ 0, where k F = mv F /ℏ with m being the free-electron mass.For example, for aluminum with v F ≃ 2.03 × 10 6 m/s, we have k F ≃ 1.754 × 10 10 and k = ω/c ≃ 20, which shows that for our case the SCIB model is valid. Appendix G: Vacuum Frictional Torque In this section, we provide the derivations of the vacuum frictional torque exerted on the spinning YIG nanosphere due to vacuum fluctuations.The torque on a magnetic dipole is given by Since we are interested in the torque along the rotation axis (z direction), we can write the torque as using the Fourier transform, we get Through a similar approach as that used in Appendix A, after some algebra, we find which can be written as For an interface in the x − y plane Γ H M is given by which is the same expression for the radiated power minus the term related to the axis of rotation z.For an interface in the x − z plane, on the other hand, Γ H M we have This expression is the same as Eq. ( 1) in the main manuscript, with the difference that it does not have the last term involving the term n 1 (ω) − n 0 (ω).Compared to the vacuum radiation expression, vacuum torque has an extra minus sign in Eq. (G5), indicating that this torque acts as friction rather than a driving force, as expected. 
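Returning briefly to the non-local Al model of Appendix F, the validity check for the SCIB model is a one-line computation. The sketch below simply reproduces that arithmetic, with the Fermi velocity and free-electron mass as stated in the text, to confirm that z = k/(2 k_F) is vanishingly small at GHz frequencies.

```python
import numpy as np

hbar = 1.054571817e-34      # J*s
m_e  = 9.1093837015e-31     # free-electron mass, kg
c    = 2.998e8              # m/s

v_F = 2.03e6                            # Fermi velocity of Al quoted in the text, m/s
k_F = m_e * v_F / hbar                  # ~1.75e10 1/m, matching Appendix F
k   = 2 * np.pi * 1e9 / c               # free-space wavenumber at 1 GHz, ~21 1/m

print(f"k_F = {k_F:.3e} 1/m, k = {k:.1f} 1/m, z = k/(2*k_F) = {k/(2*k_F):.2e}")
# z ~ 6e-10, effectively zero, so the semi-classical infinite barrier model applies.
```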
Other components of torque In the previous section, we only derived the z components of the torque exerted on the nanosphere.The x and y components can be written as Using a similar approach as that used in the previous section and section A, incorporating the torque due to the electric field fluctuations of vacuum and the magnetic dipole fluctuations of the YIG sphere, we find for the x component of torque, and for the y component, We can find the x and y components of frictional torque by plugging magnetic Green's function expressions into Eqs.(G9) and (G10).Remarkably, we find that the spinning YIG nanosphere can experience a large torque along the x or y direction when the YIG interface is biased by external magnetic fields in the x or y direction.This means that in these cases, the sphere can rotate out of the rotation axis and start to precess.This will change the validity of the equations found for the vacuum radiation and frictional torque along the z axis since it has been assumed that the sphere is always rotating around the z axis and is also magnetized along that axis.However, this torque is still small enough compared to the driving torque of the trapping laser and it will still give enough time to make the observations of vacuum fluctuation effects.In Section S2 in the supplementary material, we present the plots of these torques when the interface is the x − y or x − z plane and provide more detailed discussions. Recoil torque Another contribution to the torque comes from the case when the induced dipole moments on the YIG sphere re-radiate due to the vacuum electric field fluctuations.This causes a recoil torque on the sphere and can be written as where H sc is the scattered fields from the dipole and are given by, which shows that this term is of higher order contribution.We find that this recoil torque is much smaller than the torque derived in Eq. (G5) for YIG spheres spinning near YIG or Al interfaces and can thus be ignored in all studied cases.We provide detailed derivations of M rec and quantitative comparisons in Section S2 in the supplementary material. Appendix H: Experimental Analysis In this section, we present the analytical steps for finding the experimental prediction plots provided in the last section of the main text. 
Effects of drag torque due to imperfect vacuum In the real system of a spinning sphere, the environment is not a pure vacuum.This causes an extra torque on the spinning sphere from air molecules in the imperfect vacuum.The steady-state spin of the sphere happens when the driving torque of the trapping laser is equal to the drag and vacuum friction torques.In the case when there is no interface present, the only important counteracting torque is the drag torque given by [47] where a is the sphere radius, µ is the viscosity of the gas the sphere is spinning in, λ is the mean free path of the air molecules, and Ω is the rotation frequency.We further have for gases [48], where p gas and m are the pressure and the molecular mass of the gas, respectively.Thus, we get the drag torque, For 1 GHz rotation of a sphere, the balance between the drag torque and the optical torque M opt happens at about p gas = 10 −4 torr.Therefore we get, at room temperature and for a molecular mass of 28.966gram/mol, 2πm and thus [3], This is important for studying the effects of vacuum torque on the rotation speed of the sphere.As shown in the main text, we find that for vacuum pressures of about 10 −4 torr, changes in the balance speed of the YIG nanoparticle when it is closer to material interfaces are detectible in the power spectral density (PSD) of the nanosphere [3]. 2. Effects of negative torque and shot noise heating due to the trapping laser When the trapping laser is linearly polarized, it can exert a negative torque on the spinning particle.The torque on the sphere due to the laser is given by M opt = 1 2 Re{p * × E} [3], where p is the dipole moment of the sphere, given by p = ᾱeff • E, with ᾱeff being the effective polarizability of the sphere as seen in the frame of the lab, and E is the electric field from the laser.As derived in Section S3 in the supplementary material, in the case when the laser is linearly polarized, the negative torque from the laser is proportional to Im{α(ω 0 + Ω)} − Im{α(ω 0 − Ω)}, where ω 0 = 1.21 × 10 16 is the frequency of the laser, and Ω = 6.28 × 10 9 is the rotation frequency.Since Ω ≪ ω 0 , we get α(ω + ) ≃ α(ω − ) and thus the second term is negligible.We can thus ignore the negative torque coming from the laser when the laser is linearly polarized. Another effect from the trapping laser is the heating of nanoparticles due to the shot noise.The rate of temperature change due to shot noise heating can be determined by the laser frequency, the power of the laser per unit area, the mass of the particle, and the scattering cross section for the nanoparticles [39].For YIG nanospheres of density 5110kg/m 3 and radius 200 nm, and trapping laser of 1550 nm wavelength and of 500 mW power focused on an area of radius 0.7566µm, we find that the temperature change due to shot noise is small compared to the time scale of the rotation, which is 1 ns.Therefore, the thermodynamic equilibrium condition for the FDT is valid.We provide further details for the derivations and calculations of negative torque and shot noise heating due to the trapping laser in Section S3 in the supplementary material. 
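To connect the torques of Appendix H to the stopping time and balance speed plotted in Fig. 3, the sketch below works through the elementary rotational kinematics. The friction coefficients c_gas and c_vac are hypothetical placeholders, and treating the vacuum friction as linear in the rotation speed is our simplification for illustration; only the sphere radius and density are taken from the text.

```python
import numpy as np

# 200 nm YIG sphere: moment of inertia of a uniform solid sphere
rho, a = 5110.0, 200e-9                  # density (kg/m^3) and radius quoted in the text
M = 4/3 * np.pi * a**3 * rho
I = 2/5 * M * a**2

# Linear drag: M_drag = c_gas * Omega (residual gas) and M_vac = c_vac * Omega (vacuum friction,
# treated as approximately linear here). Both coefficients are placeholders.
c_gas, c_vac = 1e-33, 5e-33              # N*m*s, hypothetical values

# Stopping time after the driving torque is switched off: I dOmega/dt = -(c_gas + c_vac) Omega
tau = I / (c_gas + c_vac)
print(f"mass = {M:.2e} kg, I = {I:.2e} kg m^2, stopping time = {tau:.2e} s")

# Balance speed: the constant laser torque that sustained Omega_0 against gas drag alone
# must now balance both channels, so Omega_b / Omega_0 = c_gas / (c_gas + c_vac).
print(f"Omega_b / Omega_0 = {c_gas / (c_gas + c_vac):.2f}")
```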
S1 Non-Electrostatic Limit and Magnetic Polarizability due to Electric Fluctuations In this section, we provide derivations for the magnetic polarizability of metallic nanoparticles due to the electric dipole terms based on Mie theory.If a sphere is placed in the direction of a plane wave polarized along x direction and propagating along z direction E i = E 0 e ik0r cos θ x, (S1) The scattered fields are given by [2], where N emn = z n (kr) kr cos mϕn(n + 1)P m n (cos θ)r + cos mϕ dP m n (cos θ) dθ N omn = z n (kr) kr sin mϕn(n + 1)P m n (cos θ)r + sin mϕ dP m n (cos θ) dθ the superscripts (1) for M and N indicate that the Bessel functions are the Hankel functions of the first kind h (1) (kr), E n = i n E 0 (2n + 1)/n(n + 1), and a n and b n are the Mie scattering coefficients.On the other hand, the radiated fields due to an electric dipole are given by Using the facts that The scattered fields to the first order of n become Assuming that the dipole is along x direction p = p 0 x, the dipole fields become (S9) In the low-frequency limit when kr = 2πr λ ≪ 1, the scattered fields are dominated by terms of the order (kr) −3 .Thus, we can neglect the contribution from the M terms or the b 1 terms in Eq. (S8).In this limit, the fields of the dipole and the scattered fields become equivalent, if we take or in other words, the sphere takes the polarizability where with x 0 = k 0 a, x 1 = k 1 a, and k 1 = ω √ µ 1 ϵ 1 , and µ 1 and ϵ 1 being properties of the sphere.Now, we look at the scattered magnetic fields.We have to the first order Again, we can ignore the second line or, in other words, a n in this expression for low frequencies.Then, comparing this expression with the magnetic fields of a magnetic dipole polarized along ŷ direction m = m 0 ŷ, e ikr , (S14) Taking H 0 = k0 ωµ0 E 0 , we find that the two are equivalent if we have or if the sphere takes the magnetic polarizability where In the low-frequency limit, we have and Therefore, we have in this limit j 1 (x) We thus get for the polarizabilities which are exactly equal to the results derived using the electro-static and magneto-static approximations method.For a non-magnetic material, b 1 becomes which gives for the magnetic polarizability, Other components of torque In this section, we provide further discussions of components of the torque other than the z component exerted on a spinning nanosphere near YIG slabs under different bias fields.The x component of torque, and for the y component, In the case when the interface is in the x − y plane, we have and the expressions for the real and imaginary parts of G H zy and G H yz are the same as the ones for G H xz and G H zx , respectively, for when the interface is in the x − y plane as given in Eq. (S26).We can find the x and y components of torque by plugging these expressions into Eqs.(S24) and (S25) for the two cases when the interface is the x − y or x − z plane.We present the plots of these torques at the end of this section. 
S2.2 Recoil torque There is also another contribution to the torque from the case when the induced dipole moments on the YIG sphere re-radiate due to the vacuum electric field fluctuations.This causes a recoil torque on the sphere and can be written as where H sc is the scattered fields from the dipole and are given by, which shows that this term is of higher order contribution and is thus smaller than the torque discussed in the main text.Repeating a similar procedure used before and plugging in all of the induced terms and writing them in terms of the fluctuations, we find after some algebra, where we have defined and have used the facts that α eff m,⊥⊥ (ω) and α eff m,gg (ω) are real, and α eff m,⊥g (ω) = α eff m,g⊥ (ω) * . Note that we have dropped the frequency dependence as well as the H superscript of the Green's function in Eq. (S31) for simplicity.For the special case when the substrate material is isotropic, the non-diagonal elements of the Green's function become zero, and we get Note that the expressions for the real and imaginary parts of G xz and G yz are given by Eqs.(S26),(S27), and (S28) for the two possible interface directions while the imaginary parts of G xx and G yy are given by equations in Appendix B. Also note that Re G H yx for when the interface is the x − y plane is the same as Re G H xz for when the interface is in the x − z plane given by Eq. (S28).Also Re G H yx for when the interface is the x − z plane is the same as Re G H zy for when the interface is in the x − y plane given by Eq. (S27).Thus, the only new term is Re{G yy − G xx } which is given by when the interface is the x − z plane. S2.3 Plots of torque terms In this section, we present the components of torque derived in previous sections for YIG slabs with various bias magnetic fields and for the two cases when the slab is the x − y and x − z planes. Figure S2 shows the plots of M x , M y , M z , and M rec derived in the previous sections for the magnetic and electric fluctuations.The expressions for the torques due to the electric fields and dipoles fluctuations are found by changing s to p and p to s in r ss , r pp , r sp , and r ps , in the expressions for the Green's functions.Moreover, magnetic polarizability is replaced by a simple isotropic electric polarizability, assuming a simple dielectric polarizability scalar for the YIG and Al interfaces.The results are for three directions of the bias magnetic field for the YIG interface labeled as x−, y−, and z−bias.The meaning of these bias directions is demonstrated in Fig. 
S1 when the YIG slab is the x − y and x − z planes.It is interesting to note that in Figs.(S2a), (S2e), and (S2g), the sphere can experience a large value of torque along x or y directions for the x− or y−biases.This means that in these cases, the sphere can rotate out of the rotation axis and start to precess.This will, of course, change the validity of the equations found for the vacuum radiation and frictional torque along the z axis since it has been assumed that the sphere is always rotating around the z axis and is also magnetized along that axis.This torque is still small enough compared to the driving torque of the trapping laser and it will still give enough time to make the observations.A more careful investigation of these components of torque is out of the scope of this study and will be explored in the future.Figures S2i-S2p show the axial torque M z as well as the recoil torque M rec for all orientations of the bias magnetic field and YIG slab.As expected, the recoil torque is much smaller than M z since it is a second-order term. Figure S3 shows the results for M z and M rec for the case when the Al interface is placed in the vicinity of the spinning sphere.Because Al is an isotropic material, M x and M y vanish for both orientations of the interface and thus are not included in the plots of the torques.Note that similar to the YIG interface results, M rec is much smaller than the M z for all cases of the Al interface.These results show that the recoil torque M rec can be ignored in all studied cases. S3 Experimental Considerations In this section, we present details of the experimental analysis regarding negative torque and shot noise heating due to the trapping laser discussed in Appendix H. S3.1 Effect of torque due to the trapping laser When the trapping laser is linearly polarized, it can exert a negative torque on the spinning particle.The torque on the sphere due to the laser is given by M opt = 1 2 Re{p * × E} [1], where p is the dipole moment of the sphere, given by p = ᾱeff • E, with ᾱeff being the effective polarizability of the sphere as seen in the frame of the lab, and E is the electric field from the laser.As shown in Appendix A, the polarizability tensor of the sphere when it is spinning in the x − y plane is given by ᾱeff where with α(ω) being the electric polarizability of YIG at the laser frequency.Note that here, we have assumed that the polarizability of the YIG is scalar in the range of frequencies around 1550 nm.Plugging these into the equation for Figure S3: Plots of M z and M rec in the vicinity of the YIG slab when the slab is the x − y plane (first row) and when the slab is x − z plane (second row).Note that due to the isotropy of Al, the other components of torque, including M x and M y , vanish. The first term is proportional to the spin of the electromagnetic field and causes a positive torque on the particle.This is the term for the transferring of angular momentum from the laser to the particle.The second term is negative and thus causes a negative torque on the sphere.In the case when the laser is linearly polarized, this negative term is proportional to Im{α(ω 0 + Ω)} − Im{α(ω 0 − Ω)} where ω 0 = 1.21 × 10 16 is the frequency of the laser, and Ω = 6.28 × 10 9 is the rotation frequency.Since Ω ≪ ω 0 , we get α(ω + ) ≃ α(ω − ) and thus the second term is negligible.We can thus ignore the negative torque coming from the laser when the laser is linearly polarized. 
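The claim above, that the counter-torque from a linearly polarized trapping laser is negligible because Im α(ω0 + Ω) ≈ Im α(ω0 − Ω) when Ω ≪ ω0, is easy to check numerically with a toy Lorentz-oscillator polarizability. The resonance parameters in the sketch below are made up for illustration and are not YIG values; only the laser and rotation frequencies are taken from the text.

```python
import numpy as np

def lorentz_alpha(w, w_res=2.0e15, gamma=1.0e13, strength=1.0e-30):
    """Toy Lorentz-oscillator polarizability (illustrative parameters only)."""
    return strength / (w_res**2 - w**2 - 1j * gamma * w)

w_laser = 1.21e16          # trapping laser frequency quoted in the text, rad/s
Omega   = 6.28e9           # 1 GHz rotation frequency, rad/s

diff = lorentz_alpha(w_laser + Omega).imag - lorentz_alpha(w_laser - Omega).imag
rel  = abs(diff) / abs(lorentz_alpha(w_laser).imag)
print(f"relative change of Im(alpha) over 2*Omega: {rel:.1e}")
# The fractional change is of order 1e-5 or smaller, so the counter-torque term
# proportional to Im alpha(w0+Omega) - Im alpha(w0-Omega) is indeed negligible.
```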
S3.2 Effect of heating due to the shot noise

The particle can heat up due to the shot noise of the trapping laser [4]. In this section, we calculate the rate of temperature change due to the shot noise and due to vacuum radiation, respectively. The rate of energy change Ė_T in the nanosphere due to the shot noise is given in [4] in terms of ℏk_L, where k_L is the laser wavenumber, I_L, the power of the laser per unit area, M, the mass of the particle, and σ, the scattering cross section, which in the Rayleigh limit is σ = (8π/3) k^4 a^6 ((n^2 − 1)/(n^2 + 2))^2. For wavelengths around the visible range, this limit is valid for particles of radii a smaller than 50 nm. Since the radius of the particle in our case is 200 nm, this expression may not be valid, and Mie scattering parameters should be used to evaluate the scattering cross section. Assuming a trapping laser wavelength of λ = 1550 nm and using Mie theory, the rate of energy change of YIG with refractive index n = 2.21 [3] is close to that of diamond with n = 2.39 in the Rayleigh limit [4]. Therefore, we get the energy change rate in the sphere in terms of A = πR_L^2, the area onto which the laser with power P_0 is focused, and ρ, the mass density, which for YIG is ρ = 5110 kg/m^3. For a laser power of 500 mW focused on an area of radius 0.7566 µm, we find

Ṫ_L = 15.45 K/s. (S41)

This is a very small temperature change compared to the time scale of the rotation, which is 1 ns. Therefore, the thermodynamic equilibrium condition for the FDT is still valid. This temperature change gets damped by the radiated power of the sphere due to the rotation. For a YIG sphere spinning at about 0.5 µm from the aluminum interface, the rate of change due to vacuum radiation at the equilibrium temperature T_0 = 300 K is

Ṫ_R = −362.973 K/s, (S42)

which is much larger than the temperature rise due to the shot noise of the laser, and this shows that the sphere will cool down. Note that this energy heats the aluminum instead. In this derivation, we have not included the heating due to the noise in the aluminum or YIG interface. The value found in Eq. (S42) is much smaller at lower temperatures.

FIG. 1. (a) A YIG sphere trapped in the laser beam and spinning at 1 GHz rotation frequency in the vacuum. The stopping time for the sphere is on the order of the age of the universe. (b) A YIG sphere spinning in the vicinity of an aluminum or YIG interface exhibits colossal vacuum radiation. The stopping time, due to the presence of the interface, is reduced to about 1 day. (c, d) Number of photons emitted per second per radiation frequency, defined as (1/ℏω) dP/dω = Γ(ω) − Γ(−ω), for a YIG (blue solid curve) or aluminum (dashed orange curve) nanosphere of radius 200 nm at distance d = 0.5 µm from (c) an aluminum slab or (d) a YIG slab at room temperature. For the Al slab, a non-local model has been used. The YIG slab in panel (d) is biased along the y direction (panel (a)) with a magnetic field of H0 = 812 Oe.

FIG. 2. The negative vacuum frictional torque experienced by the YIG and aluminum nanosphere with a radius of 200 nm at room temperature. (a) Torque experienced by a YIG sphere in the vicinity of the YIG slab (solid blue curve) and in vacuum (dashed black curve). (b) Torque exerted on an Al sphere in the vicinity of the YIG slab (solid orange curve) and in vacuum (dashed black curve). (c), (d) The same as (a) and (b) with the YIG slab replaced by an Al slab. The YIG slab is biased along the y direction with H0 = 812 Oe (see Fig.
1(a)). A non-local model is used for the Al slabs. The distance between the spinning spheres and slabs is d = 0.5 µm for all cases. Placing the YIG or Al interface in the vicinity of spinning nanospheres results in about 12 orders of magnitude increase in the exerted vacuum torque.

FIG. 3. Experimental considerations of the setup. (a) Proposed experimental setup with the nanosphere trapped inside a ring. (b) Balance rotation speed Ω_b for an Al sphere (red curve) and a YIG sphere in the presence of Al (blue curve) and YIG (pink curve) interfaces, as a function of distance d from the interface for a 200 nm radius sphere at 10^-4 Torr vacuum pressure. The values are normalized by the vacuum balance rotation speed Ω_0. (c) Characteristic stopping time as a function of distance from the interface at 10^-6 Torr vacuum pressure. (d) Balance temperature of the YIG sphere T_s at d = 500 nm distance from Al (blue curve) and YIG (pink curve) interfaces as a function of lab temperature T_0, at 10^-4 Torr vacuum pressure. For Al spheres, there is no final temperature as the temperature keeps rising with time.

FIG. 4. Schematic of the problem for the two cases of when the interface is in the (a) x−y plane and (b) x−z plane.

Figure S1: Schematics of different bias directions for the YIG interface for the two cases of the interface being the x−y plane (top row) and the x−z plane (bottom row). The green arrow shows the direction of the bias magnetic field applied to the slab of YIG.

Figure S2: Plots of M_x and M_y (first two rows) and M_z and M_rec (second two rows) in the vicinity of the YIG slab when the slab is the x−y plane (first and third rows) and when the slab is the x−z plane (second and fourth rows). The plots show the results for various magnetic field directions. The meanings of x−, y−, and z−bias are demonstrated in Fig. S1 for the two orientations of the interface.
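Referring back to the cross-section discussion in section S3.2, a quick order-of-magnitude check of the Rayleigh (dipole-limit) scattering cross section for the quoted sphere parameters can be done as follows. This uses the textbook Rayleigh formula, not necessarily the exact expression of the paper, and, as noted in S3.2, Mie theory is needed for the actual value since the size parameter is not small.

```python
# Order-of-magnitude check of the Rayleigh scattering cross section for the
# parameters quoted in S3.2. Textbook dipole-limit formula; absorption ignored.
import numpy as np

lam = 1550e-9        # trapping laser wavelength (m)
a = 200e-9           # sphere radius (m)
n = 2.21             # YIG refractive index quoted in the text

k = 2 * np.pi / lam
x = k * a                                   # size parameter (~0.8, not << 1)
sigma_rayleigh = (8 * np.pi / 3) * k**4 * a**6 * ((n**2 - 1) / (n**2 + 2))**2
sigma_geom = np.pi * a**2

print(f"size parameter x = {x:.2f} -> Rayleigh limit is marginal, Mie theory needed")
print(f"sigma_Rayleigh ~ {sigma_rayleigh:.2e} m^2 (geometric cross section {sigma_geom:.2e} m^2)")
```

The size parameter of roughly 0.8 confirms the statement in S3.2 that the 50 nm Rayleigh-validity condition is violated for a 200 nm sphere, so the Mie result quoted in the text should be used for quantitative estimates.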
2024-01-19T06:45:32.673Z
2024-01-17T00:00:00.000
{ "year": 2024, "sha1": "2c40813ee978d847f007ab233b24fb946a839561", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1367-2630/ad3fe1/pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "2c40813ee978d847f007ab233b24fb946a839561", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
263747436
pes2o/s2orc
v3-fos-license
Consumers' Perception of Risk Facets Associated With Fintech Use: Evidence From Pakistan

Studies illustrate progress in financial technology in Pakistan; nevertheless, the uncertain obstacles that prevent clients from adopting financial technology remain unclear. Research on perceived risk, particularly in using financial technology in Pakistan, is limited. Therefore, this research bridges this gap. Two hundred ten participants took part in this study. We have used the structural equation modeling approach to analyze the acquired data and hypotheses. Empirical results show that three of the eight perceived risk factors (performance risk, financial risk, and overall risk) have a substantial adverse effect on the intention to utilize financial technology. The highest impact was performance risk, followed by financial risk and overall risk. The other five risks (social risk, time risk, security risk, legal risk, and psychological risk) have no statistically substantial adverse effect on intent to utilize financial technology. The outcomes help experts better conceptualize and diminish risk barriers in planning for the disruption of financial technology (fintech). Experts are likewise encouraged to focus on fintech's operational aptitudes and functional system performance in fintech services.

Introduction

The fusion of "finance" and "technology" in the form of fintech has recently generated significant excitement and attention (Arner et al., 2015). EY states that fintech is "an organization that combines innovative business models and technologies to realize, enhance, and disrupt financial services" (EY, 2019). From a broader standpoint, fintech refers to institutes that provide solutions for financial software to their clientele. Fintech companies are categorized into various classes: insurance, investment, management, trade services, governance technology, and incentive programs. The global adoption rate of fintech exceeded expectations in 2019 (EY, 2019; C. Li et al., 2023).

Fintech businesses have given the financial industry a considerable boost in recent years. A recent World Bank study shows that fintech companies operate in over 189 countries globally (Adrian & Pazarbasioglu, 2019). Many financial institutions benefit from collaborating with fintech businesses through, for example, operational cost savings, reduced client costs, and speedier service, all of which contribute to improved market value and financial performance. Given these benefits, numerous financial professionals and academics have also identified certain hazards linked with fintech businesses due to their activities (Magee, 2011; Serrano-Cinca et al., 2015). The International Monetary Fund (IMF), for example, has listed data privacy, cybersecurity, and difficulties inside fintech businesses as concerns. According to reports, nearly 79% of fintech organizations are exposed to cyber-security threats (Adrian & Pazarbasioglu, 2019). Furthermore, the Financial Stability Board (FSB) identified a crucial danger in the fintech business: exacerbating stock volatility and potentially leading to a financial catastrophe.
Concurrently, the global reach of fintech knowledge is remarkably high, with 89% of payments being made through computers or smartphone devices, and services related to non-bank money transfers and peer-to-peer payment systems accounting for 82% (EY, 2019). Moreover, several studies have been conducted to probe the determinants influencing virtual transactions, but there is little work on the restraints and risk determinants that frustrate users' aim to utilize fintech (C. Li et al., 2023; Y. H. Li & Huang, 2009). Thus, it is crucial to explore the perceived risk components that influence the expectations of Pakistani customers regarding fintech. We created our first research question based on these viewpoints: fintech users may face a larger risk in terms of performance risk, financial risk, social risk, time risk, security risk, legal risk, psychological risk, and overall risk.

Financial Technology in Pakistan

Pakistan, the world's sixth most populated nation, has a largely cash-based economy. A lack of access to capital has long hampered Pakistan's economy (World Bank, 2017). According to the report, 93% of the adult population does not have a bank account (Rizvi et al., 2017). Pakistan has a lower financial inclusion position than regional and worldwide norms (Nenova & Niang, 2009). Small and medium-sized businesses lack finance because of high intermediary costs, large loan charge spreads, financial illiteracy, strict security requirements, and restricted lending rates (Lukonga, 2021). This high rate of financial restriction makes people and organizations vulnerable to income shocks, increases their working expenses, such as operational costs, and dampens their forecasts. Technology can be harnessed to expand geographical outreach. Physical access to finance may be considerably enhanced by innovative technological arrangements such as branchless banking and mobile banking (Kemal, 2016). Collaboration and good relations between banking institutions and informal vendors might make their services and operations geographically accessible, less complex, and more affordable for consumers (Ali & Abdullah, 2020). Pakistani consumers' overall perception of the irrelevance of formal finance in their daily lives, cumbersome banking procedures, limited outreach, and unsuitable products gives fintech an opportunity to design customized products. Microfinance Institutions (MFIs) in Pakistan face greater funding needs to grow and integrate with financial markets; however, they have tremendous potential to expand outreach. Start-ups' use of technology and partnerships with financial technology providers will allow these institutions to boost outreach. The financial industry's current shortcomings can serve as an opportunity for digital financial institutions to provide outreach solutions.

Compared to less advanced countries, the financial system in developed countries is more accessible. According to Kendall et al. (2010), 81% of individuals in industrialized nations have official bank accounts, but just 28% of persons in developing countries have one. According to Demirgüç-Kunt and Klapper (2013), seven countries, including the Philippines, China, Bangladesh, Pakistan, Vietnam, India, and Indonesia, account for almost 92% of the 1.5 billion unbanked individuals in the growing economies of developing Asia.
Financial Technology Regulations in Pakistan

In reality, Pakistan's State Bank has shown itself to be a digital financial leader. The state bank's efforts to encourage electronic and branchless banking (such as mobile payments) have been meticulously recorded. One study looks at the history and frameworks of online banking to see how far the sector has developed and changed the country's traditional banking systems in Pakistan (Rizvi et al., 2017).

This study differs from other studies: recent studies have identified various variables that determine fintech adoption, but only a little research has been done on the limitations and risks that deter customers from using fintech. As a result, it is critical to look at the risk perception variables that influence Pakistani customers' willingness to utilize fintech. Most of the current fintech literature in Pakistan is based on the technology acceptance model (TAM), yet no one has thoroughly investigated the perceived risk variables. Agha and Saeed (2015) used TAM and included one social risk factor to study customers' acceptance of the technology. We characterize perceived risk in fintech applications or transactions, such as online banking, as the subjectively determined expectation of a loss when considering a given online transaction. This paper examines risk factors with respect to payment applications for the online transfer of money in Pakistan. Data were collected from highly educated, literate consumers with financial literacy, and we model a direct impact of risk facets on the intention to use fintech. The proposed model has not been used before with respect to fintech usage in Pakistan.

On the other hand, establishing a risk-free fintech transaction environment is much more complicated than providing client privileges. Consequently, fintech companies must seek a risk-reduction strategy to gain greater trust from potential customers. Surprisingly, our findings vary from those of other researchers. Second, providing professionals with a better knowledge of consumers' risk perceptions is vital. It can then be utilized to develop risk-reduction methods and trust-building processes to improve and promote users' online trade adoption, particularly in the growing field of online payments. Third, regulating the associated risk aspects broadens the study area of economic repercussions based on fintech use. In short, it is critical to grasp the effect of risk variables on the adoption of fintech by Pakistan's whole public.

The following are the objectives of this research:

1. To see whether perceived risk elements affect consumers' intentions to utilize fintech.
2. To determine which risk-related elements have a more significant impact on the intention to utilize fintech.
3. To determine if PRT plays a vital role in investigating fintech uptake in Pakistan.

The remainder of the paper is organized as follows: Section 2 covers related research in the conceptual framework and presents the hypotheses; Section 3 explains the methodology; Section 4 presents the results; Section 5 contains the discussion of the results, implications, limitations, and future studies; and Section 6 includes a conclusion.
Conceptual Framework

Fintech is not merely restricted to monetary services. It includes financing, making new plans, and designing business models (P2P lending and crowdsourcing). It also performs business tasks, offers assistance, and delivers products as an alternative to conventional financial sectors (Arner et al., 2015; Puschmann, 2017). By and large, fintech is a novel and disruptive technology-based service offered by modern non-financial establishments (D. K. C. Lee & Teo, 2015). Fintech also refers to utilizing IT, such as mobile technologies, data analytics, and cloud technology, to boost services and management efficiency and extend financial assistance (Hu et al., 2019; Y. H. Li & Huang, 2009). In this way, fintech can be considered an innovation that enhances consumers' experience and competitiveness in financial services. Fintech is characterized as the technological innovation of financial operations and services by non-financial ventures. Fintech assists clients in partaking in an assortment of mobile-environment services. The vast advantages of fintech give consumers a chance to acquire an environment of improvement and transparency, diminish costs, make financial data more accessible, and eliminate intermediaries.

From the customer's perspective, the aim to utilize fintech is still questionable and unsure, even though fintech has attracted much attention. It is accepted that a more pessimistic individual will negatively influence such conduct (Ryu, 2018; Singh et al., 2020). Buyers might be hesitant to utilize fintech fundamentally because the risks are considerable and cannot be ignored (Liao et al., 2010). These unexpected risks of fintech usage can hurt clients, impeding their utilization. Along these lines, this has prompted this research on consumers' perceived risk aspects of fintech utilization. The theory of reasoned action (TRA), concerning behavior or intentions, and the perceived risk theory (PRT), concerning risk, play a vital role in technology adoption.

Theory of Reasoned Action

Fintech usage intent is constrained by fintech users' attitudes toward using fintech, which is captured by employing the theory of reasoned action (TRA) in the fintech ecosystem. It is accepted that consumers will engage in considering accessible services and selecting among them (Kim et al., 2008; Roh et al., 2023; Rossmann, 2021). As buyers might be hesitant to utilize fintech because of risk considerations, it is vital to comprehend the perceived risk facets when creating and advancing the utilization of fintech. Subjective norms and the actual use of fintech are not included in this study, as this study only focuses on the risk facets and their impact on consumers' intention to use. Cox (1967), as referred to by Ryu (2018) and Mitchell (1999), explains perceived risk as the inevitable sensation when the outcome is very unfavorable. Perceived risk influences individuals' trust and confidence in their choices. In prior consumer research studies, perceived risk was characterized as the apparent vulnerability or uncertainty in a purchase situation. Perceived risk has been utilized to clarify and comprehend consumer behavior. Bauer (1960), as referred to by Quintal et al.
(2006), presented perceived risk and considered it the impact that led to the total perceived value of buying behavior. Cox (1967), as referred to by Ryu (2018) and Mitchell (1999), describes perceived risk as the inevitable sensation if the outcome is very unfavorable. Perceived risk is a sort of subjectively predicted loss. According to Featherman and Pavlou (2003), perceived risk is the possibility of loss while seeking the desired result. Cox (1967) noticed that perceived risk comprised the extent of the possible loss (i.e., what is at stake) if the results of the act were not positive, and the individual's personal feeling of certainty that the results would not be favorable. In traditional consumer decision-making, promising research has analyzed the effect of risk (Lin, 2008).

Perceived Risk Theory

Many scholars have asserted that users' perceived risk is a multi-dimensional construct. In any case, perceived risk components may change with the product class (Featherman & Pavlou, 2003). Ryu (2018) indicated six perceived risk dimensions: financial, time or opportunity, social, performance, psychological, and safety. According to Luo et al. (2010), financial, performance, psychological, time, privacy, social, and overall risk are the main risk elements in the wireless Internet's early adoption phase. Web-based banking does not endanger human existence; therefore, physical risk was excluded from this study. The components of perceived risk are characterized in Table 1.

Fintech Risk Perception

There are various definitions of perceived risk. In the information systems setting, perceived risk negatively affects the adoption of the information system or information technology (Ryu, 2018). Ryu (2018) notes that perceived risk relates to services or products in utilizing technological innovation. Perceived risk is characterized as "customers' impression of weakness, vulnerability, and the possible negative outcomes related to fintech." Given Ryu's (2018) measurement of perceived risk and the fintech setting, the study distinguished eight components of perceived risk: financial risk, privacy/security risk, operational risk, legal risk, social risk, psychological risk, time/convenience risk, and overall risk, which may influence buyers' fintech adoption expectations (see Fig. 1). They are described as follows:

Performance Risk/Operation Risk. Operational risk refers to losses caused by inadequacies or breakdowns of web-based financial sites such as online banking (Barakat & Hussainey, 2013). Featherman and Pavlou (2003) observed frequent site failure and disconnection recurrence restraining electronic-services assessment. Luo et al. (2010) characterized operational risk as a performance risk. Users will not intend to utilize financial technology because of the high risk of failures in the operation and financial systems of fintech organizations. The absence of operational and quick-reaction capabilities, structural problems, and weak or missing internal procedures will prompt users' doubt and frustration (Dvorský et al., 2018; Oláh et al., 2017). This will hinder intentions to utilize fintech. Hence, this investigation hypothesizes that operational risk negatively affects the use of fintech. Therefore, it follows that:

H1: Performance risk negatively affects intention to use fintech.
Dimension and definition of each perceived risk facet:

Performance risk: The probability of the object malfunctioning and not performing as planned and advertised, thereby failing to deliver the desired benefits (Barakat & Hussainey, 2013; Kuisma et al., 2007).

Social risk: The possible loss of self-worth in one's social group because of adopting a product or service that seems absurd or out-of-style (S. M. Forsythe & Shi, 2003).

Financial risk: The likelihood that a purchase results in a loss of cash and in additional maintenance expenses of the service or product (S. Forsythe et al., 2006; Ryu, 2018).

Security risk/privacy risk: The potential loss of control over personal data, when data about you is used without your knowledge or consent. An extreme case is when a customer is "spoofed," meaning a criminal uses their credentials to conduct fraudulent exchanges, wire transfers, or trades (Featherman & Pavlou, 2003; Reavley, 2005).

Time risk: Buyers may waste time making a poor buying choice by searching or buying, figuring out how to utilize goods or services, or possibly replacing them if they do not meet the desired expectation or demand (Bellman et al., 1999; Featherman & Pavlou, 2003; M.-C. Lee, 2009).

Psychological risk: The maker's or product's performance or selection may negatively affect the shopper's peace of mind, contentment, or self-perception (Cox, 1967; Mitchell, 1992).

Overall risk: A generic assessment of perceived risk where all rules, criteria, or conditions are assessed together (Featherman & Pavlou, 2003; Jacoby & Kaplan, 1972).

Legal risk: The uncertain legal status/situation and the absence of complete rules, guidelines, or procedures for fintech/fintech users (Ryu, 2018; Tang et al., 2020).

Financial Risk. Financial risk refers to the chance of monetary losses in monetary transactions, such as financial transactions conducted through fintech (S. Forsythe et al., 2006; Gai et al., 2018). Monetary loss is possible because of transaction errors, exchange mistakes, or account misuse. Previous information systems research has demonstrated that perceived FR is a main factor in the adoption of mobile phone and wireless services (Ryu, 2018). Ryu (2018) clarified that fintech's monetary losses reflect the danger posed by the financial exchange framework, money distortion, moral hazard, misrepresentation, and the risk of extra transaction charges relative to a preferred value. These financial risk factors adversely influenced the aim to utilize fintech. Previous studies state that monetary dangers have expanded and incorporate the chance of recurring economic losses in financial services because of misrepresentation (Luo et al., 2010; Najaf et al., 2021). In this manner, financial risk negatively affects fintech use intentions.

H2: Financial risk negatively affects intention to use fintech.

Social Risk. Social risk is a negative self-image when buying or utilizing specific services or products that a particular portion of society considers unsuitable (S. M.
Forsythe & Shi, 2003). It refers to the probability that using financial technology may bring about disapproval from one's companions, family, or workmates (Franks et al., 2014). Individuals likely have different perspectives toward financial technologies, such as online banking, influencing their views of its adopters. Social buzz and contribution behavior are essential for interaction with any online platform (Thies et al., 2014). On the other hand, not adopting web-based banking may negatively or positively affect social status. In line with technology acceptance research, Davis (1989) and Fishbein and Ajzen (1975) treat the assessment and opinion of referents (companions, family, colleagues) regarding one's activities as subjective norms. Given these studies, it is sensible to believe that social risk could have a negative impact on consumers' attitudes toward fintech use. Subsequently, it follows that:

H3: Social risk negatively affects intention to use fintech.

Time Risk. Time risk refers to the extra time used and the difficulty or inconvenience brought about by delays in payment or other navigation troubles (finding suitable services or hyperlinks). There are two driving causes of disappointing online experiences that might be considered a time risk: unorganized or confusing webpages, and pages that are too slow when browsing (S. M. Forsythe & Shi, 2003). Bellman et al. (1999) reported that "harried" buyers were more likely to shop online to save time. These time-conscious clients are likely to be wary of the risk of losing time and are less likely to choose an e-service with high transition, installation, and upfront costs (Featherman & Pavlou, 2003). The time factor is not only a concern for individuals; time risk also has a more significant impact on the global financial market (Elsayed et al., 2020; Le et al., 2021). According to S. M. Forsythe and Shi (2003), time risk is a considerable obstruction to web-based buying, and it is thus hypothesized that:

H4: Time risk negatively affects intention to use fintech.

Security Risk. Security risk is defined as a danger that makes a situation, condition, or occasion riskier, such as destruction, disclosure, or alteration of information, denial of service, misrepresentation, waste, and misuse (Macedo, 2018). It has been expressed in various studies that gaining consumers' confidence on security and privacy problems would be a big hurdle for the online banking market (Degerli, 2019). Ryu (2018) contended that the utilization of fintech is generally accompanied by more considerable potential loss, for example, of secrecy, individual data, and exchanges. This likewise adds to the growth of the perceived risk of fintech utilization. Accordingly, security risk is expected to negatively affect the utilization of fintech. It is hence speculated that:

H5: Security risk negatively affects intention to use fintech.
Legal Risk. Legal risk refers to the uncertain legal status/situation and the lack of regulations, standards, and procedures for fintech or fintech users (Tang et al., 2020). Fintech is novel in many markets globally; hence, the lack of regulations, guidelines, or procedures for consumers on fintech's cash- and security-related problems has led to dread, doubt, and anxiety among clients (Tang et al., 2020). Customers' data, privacy, and the financial system's protections are all examples of legal risk. In this regard, fintech businesses still operate in an unclear zone when determining whether their operations need special permission or licenses from the appropriate authorities. Uncertainty about regulatory requirements is still dangerous for fintech companies (Ng & Kwok, 2017). Besides the regulatory uncertainty, the considerable cost of compliance also forces fintech firms to withdraw from particular markets, ultimately affecting consumers' intentions to use fintech. In this way, legal risk is expected to negatively affect fintech utilization intention.

H6: Legal risk has a negative effect on fintech use intention.

Psychological Risk. Psychological risk is the possibility that the brand's or product's quality, performance, or selection will negatively affect the shopper's peace of mind, contentment, or self-perception (Mitchell, 1992). Cox (1967) defined psychological risk as the expected damage to self-esteem or ego from the disappointment of not accomplishing a purchasing objective. According to the survey, psychological risk impacts the P2P lending market (Wang et al., 2022).

H7: Psychological risk negatively affects intention to use fintech.

Overall Risk. The overall risk is a generic assessment of perceived risk where all rules and parameters are assessed together. End-user responses to an electronic service gateway design are often essential to understand since they can be interpreted as indicators of overall service quality. The seminal work of Bauer (1967) was used by Jacoby and Kaplan (1972) to derive a general metric of perceived risk. Observing risk "tradeoff" behavior, he theorized perceived risk to be made up of many different types of risk. A large car, for example, can minimize physical/safety risks while increasing financial risk. This assessment of overall risk perception is also put to the test (Figure 1).

H8: Overall risk negatively affects intention to use fintech.

Methodology

In this survey, the respondents were Pakistanis older than 18 who possessed personal bank accounts and met the legal age of contractual capability. Respondents received self-administered questionnaires via a website (Google Forms) with URLs messaged to them. A five-point Likert scale was used to assess each construct, ranging from 1 (strongly disagree) to 5 (strongly agree).

Sampling and Data Collection

The sample responses were collected from 210 people with different backgrounds via an online survey conducted over 3 months, between 1 February and 1 May 2021. The demographics of the responders are shown in Table 2. The proposed hypotheses were tested with all of the replies.
Table 2 shows that males constituted 61.40% of the collected data, while 38.60% were females. The largest group of responders, 34.80%, were people aged between 25 and 29. Regarding the respondents' education, 40% were in the master's category, the largest share in the survey. Furthermore, concerning respondents' occupation, 43.30% were students, the highest percentage. About 32.86% had an income level higher than PKR 50,000. In Pakistan, the majority of the population is from Punjab; from Table 2, we can also see that 80% of respondents are from Punjab. We found that 74.30% of respondents had fintech experience.

Measurement Development

The survey instrument incorporated a questionnaire with two parts, as shown in Appendix A, with nominal scales in the first part and five-point Likert scales in the second, ranging from "strongly disagree" (1) to "strongly agree" (5). The first section comprises fundamental facts. This part of the questionnaire was developed to gather respondents' descriptive information, such as gender, age, education, employment, and fintech experience.

The questionnaire's second section was built around the constructs of performance risk, financial risk, social risk, time risk, security risk, legal risk, psychological risk, overall risk, and intention to utilize fintech. Performance risk and financial risk were operationalized by considering Featherman and Pavlou (2003) and M.-C. Lee (2009), containing 3 and 5 elements, respectively. The social risk scale was operationalized from Featherman and Pavlou (2003), M.-C. Lee (2009), and Wu and Chen (2005), containing two items. The assessment of security risk and time risk followed Featherman and Pavlou (2003), including three items for each. The measurement of legal risk was adopted from Ryu (2018), including six items. The psychological risk measure was taken from Cox's (1967) assessment, including two items. The overall risk was formulated by considering Featherman and Pavlou (2003) and Jacoby and Kaplan (1972), including five items, and the intention scale was adopted from Cheng et al. (2006).

We conducted a pre-test to verify the instrument before performing the main survey. Ten people with more than 3 years of fintech expertise took part in the pre-test. Respondents were invited to give feedback on the instrument's length, structure, and scale language. As a result, the instrument's content validity was established.

Results

We used Anderson and Gerbing's (1988) two-step technique for assessing the data we obtained. We looked at the measurement model for convergent and discriminant validity first. The structural model was then investigated to determine the direction and strength of the connections between the constructs. This paper used the statistical structural equation modeling (SEM) approach to measure and analyze the relationships between observed and latent variables. Regression analyses are similar, but generally SEM is more powerful: we may create complex path models with direct and indirect effects using SEM. Smart PLS-SEM has also provided more accurate results when evaluating validity and reliability.
Measurement Model Assessment

The assessment scores in Table 3 showed that each construct had a high level of internal reliability. The indicators' computed coefficients were significant on their theorized underlying construct factors, which was used to determine convergent validity. The measuring scales were examined using the three criteria proposed by Fornell and Larcker (1981): (1) all indicators' factor loadings should be significant and more than 0.5; (2) construct reliabilities should be more than 0.8; and (3) the average variance extracted (AVE) for each construct should be greater than the variance attributable to measurement error (i.e., AVE should be more than 0.5).

The construct reliabilities varied from 0.849 to 1.00 (see Figure 2). The AVE, which ranged from 0.594 to 1, was greater than the variance due to measurement error; as a result, all three convergent validity criteria were fulfilled.

Discriminant validity measures how different one construct and its indicators are from another construct (Bagozzi et al., 1991). According to Fornell and Larcker (1981), for any two constructs, the correlations between items should be smaller than the square root of the average variance shared by items within a construct.

The square roots of the average variance shared between a construct and its items were larger than the correlations between the construct and any other construct in the analysis (Table 3), demonstrating discriminant validity as defined by Fornell and Larcker (1981). All diagonal values surpassed the inter-construct correlations. Consequently, we determined that our instrument had adequate construct validity.

Internal Consistency Reliability and Convergent Validity. Figure 2 shows that the composite reliability (CR) is greater than 0.70. Figure 3 shows that the average variance extracted (AVE) is higher than 0.5. Figure 4 shows that all constructs and indicators follow the reflective measurement criteria, that is, all indicators' loadings are higher than 0.7, except for one indicator, FR3, which has a loading of 0.665, very close to 0.7. Finally, the findings indicate that all indicators are accurate, convergent validity is ensured, and the internal consistency of the data is established.

Discriminant Validity. Fornell and Larcker (1981) state that, in the model, the loadings on a construct's own indicators should be greater than those on other constructs to achieve discriminant validity. In Table 3, all constructs fulfill this criterion. The results of the discriminant analysis method that compares cross-loadings between constructs are provided in Appendix B. The discriminant validity results are also tested using the Heterotrait-Monotrait (HTMT) correlation criteria shown in Table 4. All obtained values meet the HTMT1 criterion (Palacios et al., 2011), confirming discriminant validity.

Structural Model Assessment. This paper used a bootstrapping procedure with 5,000 sub-samples to assess the structural model and validate the stated hypotheses. The structural model is assessed with respect to the estimates and hypothesis tests regarding the causal relations among variables specified in the path diagram (Figure 4).
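The convergent and discriminant validity checks described above rest on standard formulas for composite reliability, AVE, and the Fornell-Larcker comparison. The sketch below illustrates those computations on made-up standardized loadings and a made-up inter-construct correlation, not the values reported in Table 3.

```python
# Illustrative computation of composite reliability (CR), average variance
# extracted (AVE), and the Fornell-Larcker check. All numbers are hypothetical.
import numpy as np

loadings = {
    "performance_risk": np.array([0.82, 0.78, 0.85]),
    "financial_risk":   np.array([0.75, 0.80, 0.665, 0.79, 0.81]),
}
corr_pr_fr = 0.55  # hypothetical correlation between the two constructs

for name, lam in loadings.items():
    ave = np.mean(lam**2)                                     # want AVE > 0.5
    cr = lam.sum()**2 / (lam.sum()**2 + np.sum(1 - lam**2))   # want CR > 0.7/0.8
    print(f"{name}: AVE={ave:.3f}, CR={cr:.3f}, sqrt(AVE)={np.sqrt(ave):.3f}")

# Fornell-Larcker: sqrt(AVE) of each construct should exceed its correlations
# with every other construct.
ok = all(np.sqrt(np.mean(lam**2)) > corr_pr_fr for lam in loadings.values())
print("Fornell-Larcker criterion satisfied:", ok)
```

In practice these quantities are reported directly by PLS-SEM software such as SmartPLS; the point of the sketch is only to make the thresholds cited above concrete.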
Collinearity. Correlations among constructs are relatively strong, ranging from .7 to 1, as shown in Table 3. The possible multicollinearity issue can be formally examined in the regression analysis framework. A typical metric of multicollinearity in regression analysis is the variance inflation factor (VIF), which measures the degree to which one predictor variable is described by the other predictor variables (Hair et al., 1998). It is typical to recommend a threshold VIF of less than or equal to 10 (i.e., tolerance of at least 0.1) (Hair et al., 1998). We used Smart PLS 3 in our study, and the VIF values are less than 10. Table 5 shows the VIF values of all constructs in the model. The results show that the VIF values of all constructs are less than 5, demonstrating that the structural model has no collinearity issues.

Assess Path Coefficient. The path coefficients were examined using a bootstrapping procedure with 5,000 subsamples. Table 6 summarizes the hypothesis testing results, indicating that performance risk (H1), financial risk (H2), and overall risk (H8) are all adversely associated with the intention to use fintech. Meanwhile, the associations between social risk (H3), time risk (H4), security risk (H5), legal risk (H6), and psychological risk (H7) and intention were found to be insignificant.

Impact of Risk Factors

According to the findings, overall risk, financial risk, and performance risk are all adversely connected to the intention to utilize fintech. In other words, these risks deter Pakistani users from using fintech.

H2 determines whether financial risk (FR) has a negative impact on intention (IN). According to the findings, FR has a strong negative influence on IN to utilize fintech. As a result, H2 was supported (b = −.197, t = 3.513, p = .000). This research is in line with Ryu (2018).

H3 determines whether social risk (SR) negatively relates to IN. According to the findings, SR had no significant negative influence on IN (b = −.085, t = 1.589, p = .113). The results show that social risk has no impact on intention. As a result, H3 is not supported. Our study is also consistent with M.-C. Lee (2009).

H4 evaluates whether time risk (TR) is negatively related to IN. The results revealed that TR has no significant negative impact on IN (b = −.073, t = 1.201, p = .230). Consequently, H4 is not supported. Our results contradict the study of Featherman and Pavlou (2003), who found that the time factor is one of the most influential aspects in changing consumers' intentions.

H5 determines whether security risk (SSR) negatively relates to IN. According to the results, SSR had no significant negative impact on IN (b = .013, t = 0.268, p = .789). As a result, H5 is not supported. However, this conclusion is similar to the findings of M.-C. Lee (2009), and research has found that Malaysian consumers' perceptions of electronic payment had no association with perceived security (Teoh et al., 2013). It may be explained by the deployment of stringent security mechanisms in network information transmission and fintech applications, such as digital signatures, encryption, and a double-check for verification. This reduces perceived security risk as a significant deterrent to using fintech.
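The collinearity screening mentioned under "Collinearity" can be reproduced on latent-variable scores with standard tools. The sketch below is illustrative only: it uses random scores rather than the study's data, and the actual Table 5 values were produced with SmartPLS 3.

```python
# Hypothetical VIF check on latent-variable scores (random data used here only
# to keep the example self-contained).
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

rng = np.random.default_rng(0)
scores = pd.DataFrame(
    rng.normal(size=(210, 4)),
    columns=["performance_risk", "financial_risk", "overall_risk", "social_risk"],
)

X = add_constant(scores)  # VIF is computed on the design matrix with intercept
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=scores.columns,
)
print(vif)  # values below 5 (or 10) suggest no problematic collinearity
```

Each VIF is 1/(1 - R^2) from regressing one predictor on the others, which is why values near 1 indicate essentially independent predictors.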
H6 states that legal risk (LR) negatively relates to IN. Given the result (b = −.023, t = 0.439, p = .661), the data demonstrated that LR had no significant influence on IN. As a result, H6 is not supported. In the context of fintech, legal risk relates to the legal position/status/situation of fintech, which is ambiguous and for which no general regulation exists. With regard to legal risk, relevant fintech regulatory and security concerns are assured before transactions are executed.

H7 states that psychological risk (PPR) has a negative relationship with IN. According to the findings, PPR had no significant detrimental influence on IN (b = .004, t = 0.072, p = .942). As a result, H7 is rejected. Psychological risk had low path loadings for this sample and setting, indicating less concern than performance-related risk variables. This study's findings matched Featherman and Pavlou (2003).

Finally, H8 states that overall risk (OR) has a negative relationship with IN. H8 is supported by its significant result (b = −.510, t = 8.339, p = .000). Table 6 summarizes the findings. Our findings are consistent with previous research (Featherman & Pavlou, 2003).

Discussions

The present study examines the elements that influence fintech usage intentions. This research proposes an integrated model for describing consumers' intention to use fintech based specifically on PRT. Our study yielded several interesting findings reported in one category: negative predictors. The discriminant analysis results reveal the factors that negatively affect consumers' usage intention; performance risk is the greatest of these factors, followed by financial risk and overall risk. This study's results have many crucial implications for fintech practitioners and researchers.

According to the findings, PR has a significant negative influence on consumers' intentions to embrace fintech. As a result, when there are continuous transaction failures, unfinished or failed transactions/deals/processes, or a lack of operational expertise and solutions to fintech-related difficulties, the intention to utilize fintech decreases. Lowering the risk of website failure may therefore boost customers' willingness to undertake transactions online. Our findings are in line with M.-C. Lee's (2009) research. The finding for H2 suggests that possible financial damages, such as fraudulent activities, the failure of trade frameworks, and monetary misrepresentation, impact customers' willingness to utilize fintech. Financial risk is an important consideration when deciding whether or not to employ fintech. Economic activities will be secured when fintech service providers can offer solid systems and services and complete protection. Once consumers perceive financial risk, they are less prone to use fintech applications. Even damaging or overcharging consumers will create a threat in their minds. That is why fintech service providers should always pay attention to these monetary misrepresentations, such as hidden charges to buyers. Our study is in line with Ryu (2018).

The fact that social risk does not impact intention to use fintech demonstrates that consumers are unconcerned about internet banking's cultural conditioning from their friends, family, or coworkers. One view is that fintech has already been widely used, and most respondents had positive experiences with it via friends or family. Another rationale is that using fintech is a personal choice rather than a requirement. This is in line with the results of M.-C.
Lee (2009), who also showed that social risk is insignificant. Our results challenge the research of Agha and Saeed (2015), who found that social risk negatively affects consumers' acceptance of the technology because Pakistan has a collectivist culture. Pakistani culture is a collectivist culture where social values are a priority; still, social risk was not found to have a negative effect on consumers' usage intention in this study. This shows that trends are changing. On the other hand, Venkatesh and Davis (2000) found that social risk has little impact on consumers' intentions: social norms considerably influence intentions to use in a mandatory-usage situation but have minimal impact in a voluntary-usage case. In the modern world, the trend is changing; fintech users are more worried about fintech companies' performance, not social issues.

In previous studies, time risk has been demonstrated to have a negative impact on the intention to use internet banking (S. M. Forsythe & Shi, 2003; M.-C. Lee, 2009). This means that online banking customers may be worried about payment delays and the time spent waiting for the website or learning how to use it. However, our research shows that time risk does not affect intentions to use fintech, because technology is now relatively faster than before. Our study, to some extent, is in line with Andreoni and Sprenger (2012), who found that consumers are more worried about the risk of financial matters than about time; when there is a financial matter, buyers can wait. On the other hand, everyone has some implicit sense that technology saves time, so time risk is not going to affect them. Consumers are not worried about delays in fintech. Fintech is far faster than traditional dealings; for example, during COVID-19 the use of fintech increased overall (Fu & Mishra, 2022). Fintech organizations should now focus on the other main issues that users face. The rejection of the time risk hypothesis indicates that fintech has already won consumers' trust with respect to time management.

Security risk is essential for consumers' online transactions (M.-C. Lee, 2009; Ryu, 2018). Interestingly, our study shows that Pakistani consumers are less worried about security. It suggests that it is not easy to hack any online system nowadays. Thus, the hypothesis that security risk negatively affects fintech use intention is insignificant. This conclusion is in line with research findings that found no link between consumers' perceptions of electronic payment and perceived security (Tang et al., 2020; Teoh et al., 2013). It may be explained by the deployment of stringent security mechanisms in network data transmission and fintech applications, such as encryption, digital signatures, and two-step verification, reducing perceived security risk as a significant deterrent to using fintech.

According to previous studies, consumers are less likely to adopt fintech when legal risks rise. Interestingly, our results contradict Ryu (2018), who found that customers are apprehensive about legal issues and unwilling to use fintech. The majority of respondents are unconcerned about the regulations around fintech. Our research contradicts Tang et al.
(2020), who presented a model to describe the influence of legal concerns on users' intents. These days, fintech is booming, and countries have rules and regulations in place. Even if consumers are unaware of fintech regulations, they still trust the fintech companies prevalent in the markets, though where legal issues have arisen, consumers will not use fintech. The current study found that consumers are less worried about legal issues because all fintech companies have to register themselves and obtain a license from the government's recommended organizations, for example, the State Bank of Pakistan. The government is also trying to digitalize the country by allowing more fintech companies to operate (PSPs, 2014; Rizvi et al., 2017).

Psychological risk is found to have a low impact on fintech usage intention and is insignificant. Psychological concerns are found to be less critical. Consumers are now more mature than in the early days when technology was new and consumers were more worried about each step of using fintech. Now, consumers are more in touch with fintech and better understand fintech usage. However, some other factors may increase psychological risk: poor performance of fintech services or financial risk may ultimately increase consumers' psychological risk. Our research is in line with Featherman and Pavlou (2003), who found no significant impact of psychological risk on the intention to use fintech.

The overall risk is found to be significant: it negatively affects the intention to use fintech. Our study is consistent with Bauer (1967) and Jacoby and Kaplan (1972). However, five hypotheses are rejected, meaning those risks show no negative association with the intention of fintech usage. According to the results, consumers think that, overall, fintech is risky, but they are still using it. This shows that fintech is now a consumer need; despite the fact that it is riskier, consumers are still using fintech.

The facts above highlight the risks of implementing current IT technologies that alter management and accounting preconditions, change information exchange, aggregation, and distribution methods, and establish a new financial structure. Changes in the IT industry may substantially impact an accounting system's postulates and categories. Although implementing creative IT advancements in accounting allows for processing enormous amounts of data in the fastest time possible, the risk concerns must also be acknowledged.

Research Implications

The research findings give insight into several vital aspects surrounding customer intentions toward fintech that have been overlooked in prior research. This method is expected to result in a steady evolution of theory. Consequently, the suggested approach significantly contributes to the growing fintech literature. This study's findings have wide-ranging ramifications for subsequent fintech research. The empirical results imply that the risk element has a more significant effect on consumers' judgment than the gain component, meaning that risk takes precedence over benefit for online banking clients when considering fintech.
Furthermore, the empirical findings demonstrate that the proposed model has strong explanatory power. Information technology (IT) acceptance research, for example by Venkatesh and Davis (2000), has produced several competing models, each with its own set of acceptance determinants, such as social cognitive theory (SCT), innovation diffusion theory (IDT), and the expectation confirmation model (ECM). This finding is expected to inspire further research that combines these opposing theories to create a unified one.

Practical Implications

The findings of this study give insight into certain key concerns surrounding consumer intentions toward fintech adoption that have not been addressed in earlier research. Perceived risk significantly affects whether or not consumers want to embrace fintech. This conclusion is especially relevant for managers deciding how to deploy resources to keep and grow their current customer base. On the other hand, building a risk-free online transaction environment is much more challenging than providing consumers with advantages. As a result, electronic business organizations must look for risk-reduction measures that will help them inspire high levels of trust in prospective consumers. According to the findings, they should prevent infiltration, fraud, and identity theft. Building secure firewalls to prevent intrusion, inventing techniques to strengthen encryption, and certifying websites to avoid scams and identity theft are just a few actions that should be implemented. Effective risk-reducing strategies may include money-back guarantees and prominently advertised customer satisfaction assurances to offset financial and performance-based risk concerns. Consumers may be ready to accept the perceived risk if they trust the service provider's commitment to them.

Limitations and Future Study

The research is restricted in scope and only looks at risk variables as they are perceived. It examines how perceived risk variables affect Pakistani consumers' willingness to utilize fintech. Future studies should examine both perceived advantages and risks in understanding fintech adoption intentions. This study also did not include the complete cognitive models of TRA and UTAUT and only considered the negative elements of risk theories that affect fintech usage intention. Future academics should also perform more analysis to examine the actual use of financial technology in their study framework. Aside from that, researchers may look at financial literacy and economic issues to understand the intentions.

Conclusion

The components of perceived risk factors have been intensively researched in various domains. This research seeks to contribute to the corpus of information on the usage of consumer-related fintech systems in Pakistan, mainly to assist professionals in effectively conceptualizing and reducing risk barriers and preparing for fintech upheaval. The findings of this research are, to some degree, similar to prior research in that performance risk/operation risk, financial risk, and overall risk are all critical issues that deter people from using fintech.
According to the findings, social, security, time, psychological, and legal risks had no statistically significant influence on fintech usage intention in Pakistan. The results indicated that the performance, financial, and overall risk aspects were the primary reasons for worry for this sample and context, resulting in lower system assessment and adoption. After identifying the most significant risk aspects, the focus may shift to determining the maximum acceptable risk level for each perceived risk facet. These thresholds may serve as a guideline for deciding how low risk perceptions should be to stimulate adoption in each target market. Several simple risk-reduction measures may be implemented in the user interface to counteract customer worries.

Finally, operational skills/technical expertise and system functional performance must be considered while offering services. Consumer discontent and distrust will result from insufficient or failed financial services operations, creating hurdles to fintech adoption.

In 2008, the State Bank of Pakistan (SBP) supported the branchless banking sector by issuing branchless banking laws and developing a regulatory mechanism strategy. Since 2008, the SBP and other government agencies have promoted banking technology. The creation and enactment of laws for PSPs (Payment Service Providers) and PSOs (Payment System Operators) in 2014 were the most relevant and clear actions taken by the SBP to support and facilitate fintech (PSPs, 2014). The SBP enacted Laws for Mobile Banking Interoperability in May 2016. Fintechs should achieve long-awaited transactional interoperability under these rules, allowing users to transfer payments across mobile accounts and service providers. The other form of interoperability, Account-to-Account Interoperability (A2A Interoperability), has been available to consumers and fintech service providers since 2014.

Note. Diagonal elements are the square root of AVE.

Figure 4. Results of structural equation model.
2023-10-08T15:13:00.699Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "34f90494ab00b1ffdeb0828f264e376362c004be", "oa_license": "CCBY", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/21582440231200199", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "4e735d55e7cd2195728030b3d265221c1450552f", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
17383712
pes2o/s2orc
v3-fos-license
Early brain injury after aneurysmal subarachnoid hemorrhage: a multimodal neuromonitoring study

Introduction: There is a substantial amount of evidence from animal models that early brain injury (EBI) may play an important role in secondary brain injury after aneurysmal subarachnoid hemorrhage (aSAH). Cerebral microdialysis (CMD) allows online measurement of brain metabolites, including the pro-inflammatory cytokine interleukin-6 (IL-6) and matrix metalloproteinase-9 (MMP-9), which is indicative of disruption of the blood-brain barrier.

Methods: Twenty-six consecutive poor-grade aSAH patients with multimodal neuromonitoring were analyzed for brain hemodynamic and metabolic changes, including CMD-IL-6 and CMD-MMP-9 levels. Statistical analysis was performed by using a generalized estimating equation with an autoregressive function.

Results: The baseline cerebral metabolic profile revealed brain metabolic distress and an excitatory response which improved over the following 5 days (P <0.001). Brain tissue hypoxia (brain tissue oxygen tension of less than 20 mm Hg) was common (more than 60% of patients) in the first 24 hours of neuromonitoring and improved thereafter (P <0.05). Baseline CMD-IL-6 and CMD-MMP-9 levels were elevated in all patients (median = 4,059 pg/mL, interquartile range (IQR) = 1,316 to 12,456 pg/mL and median = 851 pg/mL, IQR = 98 to 25,860 pg/mL) and significantly decreased over days (P <0.05). A higher pro-inflammatory response was associated with the development of delayed cerebral ischemia (P = 0.04), whereas admission disease severity and early brain tissue hypoxia were associated with higher CMD-MMP-9 levels (P <0.03). Brain metabolic distress and increased IL-6 levels were associated with poor functional outcome (modified Rankin Scale of more than 3, P ≤0.01). All models were adjusted for probe location, aneurysm securing procedure, and disease severity as appropriate.

Conclusions: Multimodal neuromonitoring techniques allow insight into pathophysiologic changes in the early phase after aSAH. The results may be used as endpoints for future interventions targeting EBI in poor-grade aSAH patients.

Introduction

Aneurysmal subarachnoid hemorrhage (aSAH) is a medical emergency with high mortality and morbidity [1,2]. The contribution of delayed cerebral ischemia (DCI) to outcome is undisputed, although the relief of cerebral vasospasm in the subacute phase after aSAH failed to improve functional outcome [3]. Despite advances in neurointensive care, the underlying mechanisms of secondary brain injury remain incompletely understood. Animal data support the importance of pathophysiologic mechanisms in the very early phase after SAH, with changes including early vasospasm, inflammation, and global cerebral edema (GCE) [4]. Early brain injury (EBI) is now being recognized as an important cause of mortality and disability after SAH in humans and may be associated with DCI [5]. So far, pathophysiologic mechanisms related to EBI are under-investigated in humans, and no treatment is available to adequately address these processes. Although difficulties exist in translating findings from the experimental setting to the patients' bedside, animal data convincingly provide evidence of neuronal damage within minutes after SAH triggered by brain tissue hypoxia, cerebral inflammation, blood-brain barrier (BBB) breakdown, and others [4].
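The Methods summary above notes that the longitudinal neuromonitoring data were analyzed with a generalized estimating equation using an autoregressive correlation structure. Purely as an illustration, and with a hypothetical file, column names, and model terms that are not the study's actual specification, such a model can be set up with statsmodels as follows.

```python
# Illustrative GEE with an autoregressive working correlation, as mentioned in
# the Methods summary. The data frame layout and variable names are hypothetical.
import pandas as pd
import statsmodels.api as sm

# Expected layout: one row per patient per neuromonitoring day, with columns
# patient_id, day, lpr (lactate-to-pyruvate ratio), probe_location, hh_grade.
df = pd.read_csv("neuromonitoring_daily.csv")  # hypothetical file name

model = sm.GEE.from_formula(
    "lpr ~ day + C(probe_location) + hh_grade",
    groups="patient_id",
    time="day",
    data=df,
    cov_struct=sm.cov_struct.Autoregressive(),
    family=sm.families.Gaussian(),
)
print(model.fit().summary())
```

The autoregressive working correlation lets repeated measurements from the same patient on nearby days be more strongly correlated than measurements far apart, which matches the daily structure of the monitoring data described below.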
Monitoring of such events in the very early phase in humans is challenging; however, invasive multimodal neuromonitoring devices allow continuous data acquisition for intracranial pressure (ICP), brain tissue oxygen tension (P bt O 2 ), cerebral blood flow, and at least hourly information on brain metabolism already within the first 24 hours after aneurysm bleeding [6]. Using multimodal neuromonitoring data, we previously showed derangement in cerebral metabolism and increased episodes of brain tissue hypoxia in the first days after aSAH in patients with radiologic evidence of GCE compared with those without GCE [7]. The proinflammatory cytokine interleukin-6 (IL-6) in the cerebral microdialysate as a marker for neuroinflammation has been shown to be associated with DCI and unfavorable outcome following aSAH [8][9][10]. Matrix metalloproteinases (MMPs) are involved in vascular remodeling, neuroinflammation, BBB breakdown, and neuronal apoptosis [11][12][13]. In the experimental setting, MMP-9 potentiates EBI and was associated with apoptosis of hippocampal neurons of rats [11]. In patients with SAH, MMP-9 was associated with disease severity and the development of cerebral vasospasm [14,15]. The goal of the current study was to study pathophysiological events involved in the development of EBI in poor-grade aSAH patients by investigating brain hemodynamics-ICP, cerebral perfusion pressure (CPP), and P bt O 2 -and brain metabolic changes in combination with the local inflammatory response by cerebral microdialysis (CMD)-IL-6 and the function of the BBB by CMD-MMP-9 in the brain extracellular fluid. We intended to focus on the early phase after aSAH and relate these findings to clinical course and outcome. Patient selection and care Between 2010 and 2012, 26 consecutive poor-grade aSAH patients admitted to the Neurological Intensive Care Unit at Innsbruck Medical University requiring multimodal neuromonitoring (Glasgow Coma Scale Score of not more than 8) were studied. One third of our patients presented with Hunt and Hess (H&H) grade 1 to 3 at hospital admission and were eligible for neuromonitoring secondary to early neurological worsening (n = 4/26, 15%) or secondary brain swelling (n = 4/26, 15%). The clinical care of aSAH patients conforms to guidelines set forth by the American Heart Association [16]. All patients were followed with transcranial doppler sonography (TCD) (DWL Doppler-Box system; Compumedics, Singen, Germany) and received continuous intravenous nimodipine. All patients were comatose and treated with continuous sufentanil or ketamine and midazolam drips (or both) to facilitate mechanical ventilation. Acceleration of TCD mean blood flow velocity (mBFV) of more than 120 cm/s in the middle or anterior cerebral artery or daily change in mean TCD velocities greater than 50 cm/s was suggestive of cerebral vasospasm. A catheter cerebral angiogram was performed in patients with severe vasospasm (TCD-mBFV of more than 200 cm/s) refractory to hypertensive therapy (CPP target of more than 80 mm Hg) and treated with intra-arterial nimodipine. Cerebral infarction from DCI was defined as appearance of new infarction on head computed tomography (CT) that was judged by an independent radiologist (PR) to be not attributed to other causes [17]. 
GCE was defined by an independent neuroradiologist (PR) on the basis of the initial head CT scan as previously described: (1) complete or near-complete effacement of the hemispheric sulci and basal cisterns and (2) bilateral and extensive disruption of the hemispheric gray-white matter junction at the level of the centrum semiovale, which was due to either blurring or diffuse peripheral 'finger-like' extension of the transition zone between gray and white matter [18]. Data collection, neuromonitoring, and ethical approval All admission variables and hospital complications were prospectively recorded in our institutional SAH outcome database, as approved by the local ethics committee (Medical University Innsbruck, AN3898 285/4.8, AM4091-292/4.6). Functional outcome was assessed at 3 months post-bleeding by using the modified Rankin Scale (mRS), and poor outcome was defined as mRS of more than 3. Based on clinical and imaging criteria, patients underwent monitoring of cerebral metabolism, P bt O 2 , and ICP according to the local institutional protocol, which is in compliance with the Helsinki Declaration and has been approved by the local ethics committee (UN3898 285/4.8). Written informed consent was obtained according to federal regulations. Through a right frontal burr hole, a triple-lumen bolt was affixed to insert a Licox Clark-type probe (Integra Licox Brain Oxygen Monitoring; Integra NeuroSciences, Ratingen, Germany) and an ICP parenchymal probe (Neurovent_P-Temp; Raumedic, Münchberg, Germany). In addition, a high-cutoff brain microdialysis catheter (CMA-71; M-Dialysis, Stockholm, Sweden) was tunneled and inserted into the brain parenchyma for hourly assessment of brain metabolism. Isotonic perfusion fluid (Perfusion Fluid CNS; M-Dialysis, Stockholm, Sweden) was pumped through the system at a flow rate of 0.3 μL/minute. Hourly samples were analyzed with CMA 600 and Iscus flex (M-Dialysis, Stockholm, Sweden) for cerebral extracellular glucose, pyruvate, lactate, and glutamate concentrations. At least 1 hour passed after the insertion of the probe and the start of the sampling in order to allow for normalization of changes due to probe insertion. After routine analysis, samples were kept at −80°C. Monitoring devices were inserted into the parenchyma of the vascular territory of the parent vessel of the aneurysm and the location confirmed by brain CT immediately after the procedure and classified as placed in morphologically 'normal' tissue or 'perilesional' (less than 1 cm from the lesion). Brain metabolic distress was defined as lactate-to-pyruvate ratio (LPR) of more than 40, and brain tissue hypoxia as P bt O 2 of less than 20 mm Hg [19]. All continuously measured parameters were saved on a 3-minute average interval by using our patient data management system (Centricity* Critical Care 7.0 SP2; GE Healthcare Information Technologies, Dornstadt, Germany). Analytical methods In all patients, IL-6 and MMP-9 levels could be measured in a single microdialysis sample collected over a period of one hour. Analysis of CMD-IL-6 and CMD-MMP-9 was performed by enzyme-linked immunosorbent assays as described by the manufacturer (Aushon Custom Chemiluminescent Array Kit: 2-plex; Aushon Bio-Systems, Billerica, MA, USA). Calibrated protein standards (50 μL) and cerebral microdialysate (6 μL) diluted in 50 μL of buffer were added to pre-coated wells and incubated for 150 minutes. 
The wells were incubated for 30 minutes with biotinylated antibodies and then 30 minutes with streptavidin-horseradish peroxidase conjugate. Finally, the SuperSignal Chemiluminescent Substrate was added. All incubation steps were performed on a shaker at room temperature, and all wells were washed after every incubation step. The luminescent signal was detected by using a CCD (charge-coupled device) imaging and analysis system. The concentration of each sample was quantified by comparing the spot intensities to the corresponding standard curves calculated from the standard sample results by using SearchLight® Analyst Software (Aushon Bio-Systems). CMD-IL-6 detection limit was 0.4 pg/mL. Statistical analysis Continuous variables were assessed for normality. Normally distributed data were reported as mean and standard error of the mean, and non-parametric data were reported as median and interquartile range (IQR). Categorical variables were reported as count and proportions in each group. Hourly recorded concentrations in the cerebral microdialysate were matched to continuously recorded parameters (ICP, CPP, and P bt O 2 ) averaged over the sampling period (as shown in Figures 1, 2, and 3). Figure 4 displays the percentage of patients with at least one episode (hourly averaged data matched to microdialysis sampling time) in the abnormal range. CMD-derived metabolic parameters and P bt O 2 were categorized as previously defined according to international accepted definitions to associate with CMD-IL-6 and CMD-MMP-9 levels. Time series data were analyzed by using a generalized linear model using a normal distribution and identity-link function and were extended by generalized estimating equations (GEEs) with an autoregressive process of the first order to handle repeated observations within a subject [20]. Data were transformed (log for CMD-IL-6 and CMD-MMP-9) to meet assumptions of normality. In these GEE models, outcome was the dependent variable and important covariates were included (age and admission disease severity). For all tests, significance level was set at a P value of less than 0.05. All analyses were performed with IBM-SPSS V20.0 (SPSS Inc., Chicago, IL, USA). General characteristics Clinical characteristics, hospital complications, and outcome data are summarized in Table 1. Aneurysm was secured within the first 36 hours in all patients by endovascular coiling (n = 8, 31%) or surgical clipping (n = 18, 69%). In half of the patients (n = 13, 50%), CMD catheters were located perilesional; in all other patients, catheters were located in normal appearing brain tissue. Six patients (23%) developed DCI and four patients died during hospitalization (15%). Cerebrovascular hemodynamics and brain metabolism Neuromonitoring started at a median of 22 hours after ictus. Mean ICP, CPP, and P bt O 2 were 8 ± 1 mm Hg, 73 ± 2 mm Hg, and 16 ± 3 mm Hg, respectively. ICP remained less than 20 mm Hg and CPP significantly increased from neuromonitoring start to a maximum of 80 ± 2 mm Hg 6 days after ictus (P <0.001) ( Figure 1A and B) in parallel to mean arterial pressure (P <0.001, data not shown). P bt O 2 significantly increased from baseline over the monitoring time (P <0.001) ( Figure 1C) with at least one episode of brain tissue hypoxia occurring in 63% of patients when neuromonitoring was initiated and decreasing to 29% and 12%, 48 and 96 hours later ( Figure 4). 
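The generalized estimating equation analysis described in the statistical analysis section above was run in SPSS; purely as an illustration, a minimal sketch of an equivalent model in Python's statsmodels might look as follows. The data frame layout and column names are hypothetical stand-ins, not the study's actual variables or code.

```python
# Minimal sketch of a GEE with Gaussian family, identity link and a
# first-order autoregressive working correlation within each patient.
# Column names (patient_id, cmd_il6, dci, hh_grade, perilesional) are
# hypothetical placeholders for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per hourly microdialysis sample, in long format
df = pd.read_csv("cmd_samples.csv")
df["log_il6"] = np.log(df["cmd_il6"])   # log transform as in the analysis

model = smf.gee(
    "log_il6 ~ dci + hh_grade + perilesional",
    groups="patient_id",                      # repeated observations per subject
    data=df,
    family=sm.families.Gaussian(),            # normal distribution, identity link
    cov_struct=sm.cov_struct.Autoregressive() # AR(1) working correlation
)
result = model.fit()
print(result.summary())
```

Here the autoregressive working correlation captures the dependence between consecutive hourly samples from the same patient, mirroring the first-order autoregressive process used in the study.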
CMD-MMP-9 levels were significantly elevated in the first hours of neuromonitoring in patients who lost consciousness at ictus (P = 0.005), H&H grade 5 patients (P = 0.002), and patients with initial brain tissue hypoxia (P = 0.03) after adjusting for probe location aneurysm repair method and disease severity as appropriate. All other admission and hospital complication were not associated with higher CMD-MMP-9 levels. Discussion EBI is increasingly recognized to play a key role in pathophysiologic changes contributing to poor functional outcome and mortality after aSAH. Here, we report evidence of brain metabolic derangement, brain tissue hypoxia, neuroinflammation, and BBB disruption in the first 72 hours of neuromonitoring in patients with poorgrade aSAH. Discovering mechanisms of EBI in humans may open the opportunity to target specific treatment endpoints in the early phase after SAH. Neuroinflammation is increasingly recognized as an innate cerebral response to primary brain injury [21]. In the present study, we did not find an association between higher CMD-IL-6 levels and systemic inflammation, supporting the idea of compartmentalization of the central nervous system. Pro-inflammatory cytokines may enhance brain edema through disruption of the BBB and induce neuronal apoptosis and therefore directly contribute to early brain damage [22,23]. Cerebral IL-6 has an estimated half-life of several hours and is produced by microglia, astrocytes, and neurons [24]. In previous studies using cerebral microdialysis, the pro-inflammatory cytokine IL-6 was associated with SAH disease severity, the development of DCI, and poor outcome [8][9][10]. In the present study, we furthermore found an association with admission GCE, metabolic derangement, and a CPP of less than 70 mm Hg. The association between high CMD-LPR and a CPP<70mmHg has been previously reported in SAH patients with admission GCE [7]. Defining the optimal CPP in the early phase after SAH remains a challenge without having predefined brain physiologic endpoints even after aneurysm securing. Brain multimodal monitoring data may be used to target endpoints on the cellular level. In a series of 30 patients with poorgrade SAH, a CPP of less than 70 mm Hg was associated with metabolic distress and brain tissue hypoxia; however, these data cannot be extrapolated to the first 72 hours after SAH [25]. A higher CPP was associated with improved brain metabolism reflected by a lower LPR in a retrospective analysis of aSAH patients with admission GCE [7]. Improving substrate delivery especially in the early phase after SAH may be beneficial in patients with increased need. As shown in patients with traumatic brain injury, CPP augmentation may translate into increased P bt O 2 and a reduction in oxygen extraction fraction [26]. However, a beneficial effect on brain metabolism was not observed. Defining the optimal CPP in the early phase after SAH and identifying patients who may benefit from early augmentation of CPP remain important issues for future research and should include multimodal neuromonitoring data as treatment endpoints. Another potential treatment target in the early phase after aSAH is to suppress neuroinflammation by the application of systemic anti-inflammatory drugs. Potential benefits in patients with SAH have been postulated [27][28][29] and are furthermore supported by the improvement of cerebral edema and decreasing neuronal cell apoptosis in experimental SAH models [30]. 
With the limitation of associated hemodynamic side effects [31] when applied as a rapid infusion, a continuous low-dose infusion may be considered [32]. We found an early upregulation of CMD-MMP-9 in our study population, and higher levels were associated with disease severity, loss of consciousness at ictus, and early brain tissue hypoxia. Loss of consciousness at ictus is highly correlated with poor clinical grade and the development of early or delayed brain edema [18]. MMP-9 contributes to endothelial basal membrane damage, neuroinflammation, and apoptosis and therefore plays a pivotal role in EBI [11][12][13]. Serum-MMP-9 levels were elevated in patients who developed cerebral vasospasm, although both an initial upregulation and a sustained prolonged increase have been described [15,33]. This again supports the importance of local measurements in the brain as serum markers may reflect a dilution of the innate cerebral response or exaggerated systemic levels originating from multiple organ systems [14,21]. Antagonizing MMP-9 diminished cortical apoptosis, was associated with improved outcome after experimental SAH [34,35], and was recently postulated as potential therapy in ischemic stroke [36]. Bedside analysis of standard metabolic parameters in the cerebral microdialysate revealed a high LPR and an increased release of the excitatory amino-acid glutamate into the extracellular compartment. LPR expresses the redox state of the cell, which is determined by oxygen availability and oxidative metabolism. Glutamate levels were highest at the start of monitoring and gradually returned to near normal baseline values, which has been nicely documented in experimental SAH models [37]. This parallel increased level of LPR is indicative for tissue ischemia and therefore strongly suggestive of global cerebral ischemia in our poor-grade population. However, our monitoring devices were implanted when cerebral recirculation already occurred. Based on pyruvate levels in the normal range in combination with a high LPR, the metabolic profile may also suggest post-ischemic mitochondrial dysfunction, especially in the absence of brain tissue hypoxia 48 hours after ictus. Mitochondrial dysfunction may be diagnosed bedside by using standard metabolic data derived from CMD and was recently investigated in 55 patients with poor-grade SAH [38]. The authors describe a more-than-sevenfold-higher incidence of episodes of mitochondrial dysfunction compared with episodes of cerebral ischemia as cause for disturbed cerebral energy metabolism in patients with SAH [38]. Although no specific treatment to improve mitochondrial dysfunction is currently available, further research is warranted as mitochondrial dysfunction may increase tissue sensitivity to secondary adverse events such as vasospasm and decreased cerebral blood flow. We observed improvement in brain metabolism and P bt O 2 over the monitoring time most likely secondary to the parallel increase in CPP. Brain extracellular glucose concentrations significantly decreased to a critical level in a substantial amount of patients, whereas systemic glucose levels remained constant and this is suggestive of increased cerebral glucose consumption. Achieving normal cerebral glucose levels should be recommended as neuroglucopenia is associated with metabolic distress and poor outcome after SAH [39]. Quantifying brain metabolism and neuroinflammation may be of importance as both were associated with poor functional outcome. 
All statistical models were corrected for important covariates, including probe location, as in half of our patients the microdialysis catheter was within 1 cm from the lesion. 'Perilesional' probe positioning implies that the microdialysate was collected adjacent to radiologically damaged brain tissue, where cell necrosis, blood compounds, autophagy, and apoptosis may alter brain metabolism and ameliorate cytokine release into the extracellular compartment. Our study was designed as a pilot study and included only a small number of patients, which is a potential limitation. Moreover, the early pathophysiologic changes described in the present study may be relevant for patients with poor-grade aSAH and not be generalizable to all clinical grades. We were not able to define specific treatment targets because of the following limitations: (1) the localized nature of the metabolic information obtained with the cerebral microdialysis technique, (2) the small sample size, and (3) local treatment strategies, which may differ from other institutional protocols and substantially influence longitudinal brain physiologic data. Importantly, patient- and disease-specific data were prospectively documented, and statistical models were corrected for important covariates. Conclusions EBI is believed to substantially contribute to secondary brain injury and to cause significant morbidity and mortality following aSAH. The present study demonstrates that multimodal neuromonitoring techniques can provide insight into pathophysiologic changes in the early phase after aSAH. In our series of 26 patients, catheters were placed within the first 36 hours, revealing metabolic derangement and (to a certain degree) hemodynamic instability, a pro-inflammatory cerebral response, and BBB breakdown. Multimodal neuromonitoring data may assist the neurointensivist in defining treatment targets on the cellular level, eventually opening the door for specific treatment options to minimize early brain injury in patients with aSAH. Key messages Early brain injury (EBI) is common after subarachnoid hemorrhage (SAH) and is associated with poor outcome. Pathophysiologic mechanisms of EBI include blood-brain barrier breakdown, brain tissue hypoxia, neuroinflammation, and excitotoxicity leading to brain edema and metabolic derangement. Neuromonitoring techniques may identify underlying pathophysiologic mechanisms occurring in the early phase after aSAH and therefore help to understand mechanisms of EBI. Multimodal neuromonitoring data may assist the neurointensivist in defining treatment targets on the cellular level, eventually opening the door for specific treatment options to minimize early brain injury in patients with aneurysmal SAH. Competing interests The authors declare that they have no competing interests. Authors' contributions RH was involved in the idea, study design, interpretation of data, statistical analysis, writing of the manuscript, and final revision of the manuscript. AS made substantial contributions to the design and data acquisition, analysis, and interpretation and performed laboratory analysis of brain-derived biomarkers. RB and BP made substantial contributions to the idea, study design, data analysis and interpretation, and final revision of the manuscript. ES made substantial contributions to the idea, study design, data interpretation, and final revision of the manuscript.
AD, APA, FS, MF, and CH contributed to the design, data acquisition, and interdisciplinary data interpretation and performed laboratory analysis of brain-derived biomarkers. WOH made substantial contributions to the design, data acquisition (multimodal neuromonitoring high-frequency data), and data interpretation. PL substantially contributed to the study design, statistical analysis, and data interpretation. PR performed radiographic analysis as independent radiologist and substantially contributed to the data interpretation. CT substantially contributed to the design and data interpretation and performed placement of multimodal neuromonitoring devices. All authors critically reviewed, drafted, and approved the final version of the manuscript and agree to be accountable for all aspects concerning the work.
Proportionate methods for evaluating a simple digital mental health tool Background Traditional evaluation methods are not keeping pace with rapid developments in mobile health. More flexible methodologies are needed to evaluate mHealth technologies, particularly simple, self-help tools. One approach is to combine a variety of methods and data to build a comprehensive picture of how a technology is used and its impact on users. Objective This paper aims to demonstrate how analytical data and user feedback can be triangulated to provide a proportionate and practical approach to the evaluation of a mental well-being smartphone app (In Hand). Methods A three-part process was used to collect data: (1) app analytics; (2) an online user survey and (3) interviews with users. Findings Analytics showed that >50% of user sessions counted as ‘meaningful engagement’. User survey findings (n=108) revealed that In Hand was perceived to be helpful on several dimensions of mental well-being. Interviews (n=8) provided insight into how these self-reported positive effects were understood by users. Conclusions This evaluation demonstrates how different methods can be combined to complete a real world, naturalistic evaluation of a self-help digital tool and provide insights into how and why an app is used and its impact on users’ well-being. Clinical implications This triangulation approach to evaluation provides insight into how well-being apps are used and their perceived impact on users’ mental well-being. This approach is useful for mental healthcare professionals and commissioners who wish to recommend simple digital tools to their patients and evaluate their uptake, use and benefits. IntrOductIOn Mobile health (mHealth) involves using handheld and typically internet-connected digital devices, such as smartphone and tablets, for the purpose of healthcare. These devices run a wide range of software applications (apps). Evaluation of digital technologies for young people's mental health has focused largely on internet and computer-delivered cognitive behaviour therapy (eCBT) for depression and anxiety and computerised treatments for diagnosed conditions, for example, attention deficit hyperactivity disorder. 1 Researchers have typically adopted traditional health technology assessment approaches for eCBT, such as randomised controlled trials (RCTs), because of the need to demonstrate the promised benefits and to justify the healthcare resources that are required to deliver them. While the literature does regularly cite that eCBT is more cost-effective compared with traditionally delivered CBT, findings from the recent Randomised Evaluation of the Effectiveness and Acceptability of Computerised Therapy trial found little cost-effectiveness differences between two eCBT programmes and GP treatment as usual. 2 Because of the resources and time needed to plan, undertake and implement findings from traditional evaluation assessments, RCTs and other 'big' trial designs are also considered to be out of proportion to the rapid development and obsolescence of digital technologies, 3 4 especially selfhelp tools (often 'apps') which can be accessed directly by users and may not be used in conjunction with clinical services. Several methods have been used to evaluate apps, 4 and it is unlikely that one methodology will fit all. It is also argued that formal mHealth trials may not represent their intended real-world use, and evaluations should appraise technologies within the settings where they are intended to be used. 
3 In particular, the emergence of relatively unregulated (non-medical device) apps aimed at public mental health and well-being indicates the need for alternative approaches to evaluation. The focus of this article is evaluation methods for simpler digital tools, such as mobile apps for assisting well-being, which are widely and publically available and intended for use without direct clinical supervision. 5 These products may have potential in helping young people overcome some of the traditional barriers to accessing support and reducing stigma. 6 7 Although there has been discussion of the need to stratify evaluation methods of healthcare apps based on complexity and risk, 8 there has been little research investigating how best to evaluate examples of simple, self-help digital tools. Conceptualisation of engagement with digital health interventions has recently been recognised as an important area for research. 9 We propose that the quality and value of these digital tools can be assessed through analysis of real-world usage data and assessment of user experience, methods which match the relative simplicity of these tools and anticipated size of effect on users' mental health. Furthermore, the gap being addressed is the need for better quality early evaluation of new digital health products, whether this is an initial summative evaluation of an established product to help decide whether it is worth adopting, or as in this case the formative evaluation of an app during its development. Using the context of an independent evaluation of In Hand, a mental well-being smartphone app, this paper aims to show how a proportionate evaluation can be realised while using elements of web-based and app-based software design that can be readily introduced by product developers into existing applications. In Hand (www. inhand. org. uk , launched 2014) is an app developed by young people with experience of mental health problems to support well-being through focusing the user on the current moment and bringing balance to everyday life. In Hand is a simple, free-to-download digital self-help tool publicly available on iOS and Android, intended to be used independently of healthcare services. Using a traffic light inspired system, the app takes the user through different activities depending on how they are feeling (see figure 1). Its development was led by a UK arts organisation, working with a digital agency and a public mental health service provider. The project's clinical lead drew on principles of cognitive behaviour therapy and Five Ways to Well-being 10 during the development process, but the primary influence on the content arose from needs derived in the co-design process that explored coping strategies used by young people at times of stress or low mood. NIHR MindTech HTC was commissioned to undertake an independent evaluation of a suite of digital resources produced by Innovation Labs 11 Original article Hand was one of these products. As these digital resources were about to be publically launched as the evaluation stage commenced, the team proposed to observe how users engaged with the tools by capturing background usage data and seeking feedback directly from users through embedding user elicitation into the tools. This approach would enable the capture of insights from naturally occurring users and gain understanding of how people interact with it in the 'real world'. 
MethOds research design We wanted to evaluate how In Hand was engaged with in real life, and what kind of benefits users gained through use. Three methods of data collection were used to gain insight into naturalistic use of In Hand: (1) sampling and analysis of quantitative mobile analytical data (eg, number of individual user sessions, number of interactions with each section of the app); (2) a user survey with questions adapted from a validated well-being measure (the Short Warwick-Edinburgh Mental Well-being Scale 12 (SWEMWBS)) and (3) semistructured individual interviews with a subsample of survey respondents. Procedure Mobile analytical data Flurry Analytics (now part of Yahoo!, http:// developer. yahoo. com/ analytics), a tool that captures usage data for smartphone/tablet apps, was used to securely access anonymous, aggregated data about users' interactions: this captured time spent using the app, frequency of use, retention over time and key app-related events such as visits to specific content. No identifying information for any user was directly available to the research team, and interactions across sessions by individual users could not be tracked. Data were captured and analysed for the iOS and Android versions of In Hand between May and October 2014. User survey A software update to In Hand was implemented in August 2014 to add an invitation to complete the survey into the app. On first opening the app after the update, users were presented with a 'splash' page which invited them to feed back on the app, which was also added to the app's menu (see figure 2). Once users clicked this, they were taken to the In Hand website (via internet connection outside the app), where further information about the survey was presented. Interested participants then clicked a URL to access the survey (hosted on SurveyMonkey) and consented to complete it. As an incentive to participate, respondents could opt into a prize draw to win a shopping voucher on survey completion. The online survey was open for 6 weeks. Semistructured interviews At the end of the online survey, users could enter their email address to register their interest in participating in a semistructured interview about their experience of In Hand. Twenty-four users registered their interest and were emailed an information sheet that included a request for an informal chat with a researcher about the interview procedure and to answer any questions. From this, eight people chose to participate: an arrangement for an interview was made and an online consent form emailed to them and completed prior to interview Six interviews were carried out by telephone and were audio recorded, with two asynchronously conducted by email. Participants received a £15 voucher as an inconvenience allowance. sampling and recruitment The target group for In Hand is young people (aged up to 25 years), but given it was publicly available, people of all ages could access and use the app and therefore take part in the evaluation. All In Hand users (aged ≥16 years) were eligible to participate in the survey and interview. Those who entered their age as ≤15 (meaning parental consent would be required) at the start of the survey were automatically directed to an exit web page. One hundred responses to the survey were sought: using data from another digital tool evaluation, 11 a conservative estimate of 10% of all app users completing the survey was made, suggesting that the survey needed to run for 6 weeks to achieve 100 responses. 
For the interviews, a purposeful sample of 12 respondents with varied demographic characteristics was sought to gain a range of perspectives. Survey respondents who opted in to the interviews were sampled according to their age, gender, ethnicity, sexuality, disability, geographical location and their use and experience with In Hand. survey and interview guide design The user survey and semistructured interview topic guide were developed specifically for this study. Young people involved with Innovation Original article Labs and In Hand development collaborated with the NIHR MindTech Team in generating the study's design and areas to be explored in the survey and interview, reviewing and testing out the online survey and interview schedule and finalising the study materials. To explore the kinds of benefits In Hand had to users, the survey questions were based on the SWEMWBS, 12 an evidence-based measurement tool with seven dimensions of mental well-being which has been shown to be valid and reliable with young people. 13 In this real world, observational evaluation, it was not possible to assess participants' well-being before and after use of the app, but rather, users were asked to rate to what extent In Hand had helped them on specific dimensions of mental well-being. During the co-design process, one of these dimensions was reworded to be more accessible to young people ('feel optimistic' changed to 'have a positive outlook'), and three other dimensions were selected for inclusion in the survey to reflect the young people's experiences ('feel ready to talk to someone else', 'feel less stressed', 'feel more able to take control'). These dimensions were also used to guide the interview topics. data analysis Aggregated analytical data from Flurry were tabulated and summary statistics calculated. To assess users' engagement, the In Hand team were asked to advise on the time a user would need to spend on the app to have a 'meaningful engagement': that is, open the app, make a selection of how they were feeling and perform at least one activity based on their response to the front screen (eg, take a photo). Flurry gives data on session length in fixed ranges (see table 1). It was likely that many sessions recorded as less than 30 s would have been too short for the user to have completed an activity. Therefore, user sessions in the range 30-60 s and above were classed as 'meaningful engagement'. Survey data were downloaded from SurveyMonkey and imported into a database. Descriptive statistics were calculated using SPSS V22 (Chicago, Illinois, USA). For the demographic description of the sample, the whole dataset is reported (n=131), but for data relating to whether In Hand helped with mental well-being, only data from respondents who ran the app once or more are reported (n=108). Audio-recorded interviews were transcribed verbatim. A top-down, deductive thematic analysis 14 was taken which focused on: how the user discovered In Hand, their use of In Hand, whether it was helpful, any risks in using In Hand and potential areas for improvement. Each transcript was reviewed and codes assigned to content which reflected the defined areas of interest. These codes were reviewed to identify any overlap between codes and to group them into overarching themes. FIndIngs Analytics From launch of In Hand on 14 May to 31 October 2014, there were 22 357 user sessions on In Hand across both mobile platforms (14 981 on iPhone; 7376 on Android). Seventy-five percent of these were returning users. 
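As an illustration, the 'meaningful engagement' classification described in the data analysis section above (sessions in the range 30-60 s and longer) reduces to a simple aggregation over the banded session counts that Flurry exports. The band labels and counts in the sketch below are hypothetical placeholders, not the published figures.

```python
# Minimal sketch: proportion of 'meaningful engagement' sessions from
# banded session-length counts. Counts are illustrative placeholders.
from collections import OrderedDict

sessions_per_band = OrderedDict([
    ("0-3 s",   1200),
    ("3-10 s",  3700),
    ("10-30 s", 4500),
    ("30-60 s", 6200),   # first band counted as 'meaningful engagement'
    ("1-3 min", 5300),
    ("3+ min",  1450),
])

MEANINGFUL_BANDS = {"30-60 s", "1-3 min", "3+ min"}

total = sum(sessions_per_band.values())
meaningful = sum(count for band, count in sessions_per_band.items()
                 if band in MEANINGFUL_BANDS)

print(f"Meaningful engagement: {100 * meaningful / total:.1f}% of {total} sessions")
```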
Sixteen per cent remained active 1 week after first use (likely to be at installation), 7% after 4 weeks and 2% after 20 weeks. Around half of the users (52%) opened In Hand once a week, with 34% using it 2-3 times per week, 10% 4-6 times and 4% more than six times per week. Table 1 shows engagement with In Hand measured by the length of time of each user session (data were provided by the analytics in the ranges shown). More than half of users' sessions (58%) were in the 'meaningful engagement' ranges of 30-60 s or longer and a further fifth Original article of sessions (20%) were somewhere in the range of 10-30 s where some users may have had time to have meaningful engagement. Around a fifth (22%) were in the lower ranges of 10 s or less session length. Less than 10% of sessions were in a range over 3 min. Overall, the median session duration was 43 s for iPhone users and 40 s for Android users (Flurry reports the average as the median, rather than the mean). The opening screen of In Hand presents the question 'Hello, how are you feeling?' with four options: 'Great', 'So-So', 'Not Good' and 'Awful' and associated suboptions (see figure 1). The most frequent selection on this opening screen was 'So-So' (11 751 occurrences), followed by 'Not Good' (9958 occurrences), 'Great' (9048 occurrences) and ' Awful' (8614 occurrences). In general, it was seen that users accessed the entire set of suboptions but with some obvious preferences based on which of the four options they chose, for example, reading multiple inspirational quotes, viewing a personally loaded photo or using the 'Jot it down' function (akin to a journal where users could type their thoughts into the app). Findings from user survey and interviews The survey information web page was accessed 592 times, with 131 users following through to complete the survey-a response rate of 22% from those who accessed the survey information ( figure 3). The sample was predominantly female (n=100, 76%), ethnically white (n=122, 93%), and 75% (n=98) were aged 16-25 years (total range 16-54 years; mean 23±10 years). Three-fifths (n=79, 60%) reported they had experienced mental health problems. Table 2 summarises how survey respondents reported In Hand helped their mental well-being. The 10 mental well-being dimensions, including the seven from the SWEMWBS, are presented in rank order. For seven dimensions, over 60% of the survey respondents reported that In Hand had offered them some help, with three dimensions reported as being helpful by half or less of the respondents. For four dimensions, almost three-quarters (74%) or more reported In Hand was helpful to them-'More able to take control, 'Think clearly', 'Feel relaxed' and 'Deal with problems well'. These top four dimensions relate most closely to the primary purpose of In Hand. For all of the dimensions, most respondents reported that In Hand had helped them 'a little bit', rather than 'a lot'. The eight interviewed participants were mostly female (n=7), aged 16-44 years (mean 25±9 years), white (n=7) and had experience of mental health problems (n=7). 
Interviewees further highlighted how In Hand helped the dimensions of their mental well-being as revealed in the survey, including describing how it helped them to feel relaxed or less stressed: "I can get a bit panicky quite quickly, so if I just stop and go on something like In Hand, the ease of using it and the colours and everything, sort of calms you and takes your mind off what you are feeling… looking at a quote helps you to feel more calm and relaxed." (Interviewee 8) Helped to think clearly and more able to take control: "In Hand is nice, so simple and it's just common sense, the things it asks you about how you are feeling. But that then leads you on to thinking about a lot of things. So for me, it gives me more independence with my emotional well-being." (Interviewee 7) And helped to facilitate a positive outlook: "The little sayings like 'keep going' I found helpful because it's something a friend might say if they were supporting you and it makes you realise that you can't just give up." (Interviewee 3) Original article The interviewees further expanded on why In Hand was useful to them. They talked about In Hand being discreet, private, and not requiring them to be in specific locations to access it. Interviewees described how the anonymity and perceived non-judgmental nature of a digital tool was important to them. It gave them the ability to think about how they were feeling at any time without having to involve other people: "There's no-one to trust on your app -it's just asking you how you are feeling. There's no kind of come back or no-one's going to say anything back." (Interviewee 1) dIscussIOn strengths of the evaluation approach First, we believe the approach to evaluation set out in this paper is useful as it was intended to be proportionate to this type of digital tool and its anticipated impact on health outcomes-a simple, non-clinical tool intended for personal, unsupervised use as one part of an individual's self-management strategies for their emotional well-being. The evidence generated by this approach, informed by the principles of Health Technology Assessment, has provided quantifiable insights into the app usage and patterns of engagement, an assessment of how the app has supported users with their mental well-being and identified some descriptive insights to how the tool works to support users. The approach has value because it goes beyond user ratings in app stores, while being timely and cost-efficient to implement. Second, the approach was able to gather data from actual users of the tool in real world settings. By accessing the analytical data of all users within a specified time period, we were able to analyse, in aggregate, how people used the tool and how this changed over time. This may be different (or similar) from how people interact with a tool within a controlled setting: digital interventions in formal trials tend to have greater adherence than naturalistic evaluations. 15 Moreover, the survey and interview respondents were people who had selected the tool independently from a range available (via the app stores) and used it in naturally occurring ways, rather than a sample of volunteers using the tool under experimental, controlled conditions. Third, the evaluation plan (along with the app) was guided by a team of young people and proved achievable within a timescale to fit with the development cycles of digital tools. 
Evaluation commenced at the end of beta testing when the app first became available from app stores; data and analysis were communicated to the development team for implementation at their 6-month review point. The co-design process assisted in ascertaining that the evaluation methods were understandable and used language and a design familiar to its target audience. The methods adopted did not require extensive digital development-which would have been outside the resources of the development and evaluation teams-and good response rates were achieved. Nor did it require retention of participants over time to generate useful insights. Fourth, while the approach used could be criticised for inability to determine the extent to which any changes to mental health are attributable to using In Hand, we would argue that our approach was not intended to measure effect, but rather assess the type of impact In Hand may have on users' mental well-being. A measurement of the effect of In Hand and the resources this would require, is, we would argue, out of proportion to the nature of the tool and anticipated effect on mental well-being. In Hand is a self-help tool, commensurate with other self-help tools, such as books or online information, rather than a clinical intervention, treatment or psychological therapy, and In Hand is not a medical device. As such the level of evidence required to provide assurances of quality should not be expected to be as extensive as would be required for clinical interventions or medical devices. In this regard, we believe the methods adoptedincluding a validated measure of mental well-being to explore the nature of the effect-were sufficient to demonstrate this and can be considered a strength of the approach used. This assertion is supported by data from an associated evaluation of another simple digital tool-DocReady (www. docready. org)-which used the same standardised measure in a similar manner. 11 This evaluation found that DocReady was reported as useful in different domains of mental well-being to In Hand: the top three rated domains were 'Able to think clearly', 'Ready to talk to someone else' and 'More able to take control', which reflects how DocReady aimed to benefit its users through changing preparatory intentions and behaviour in seeking out help from a GP. As with In Hand, most users rated the help as 'a little bit' rather than 'a lot', confirming that both these tools have a limited, specific effect and, as would be expected, one mHealth app would not provide all the functions required for overall mental health. 16 Limitations of the evaluation approach First, in common with other evaluation methods, the sample for the survey and interviews was reliant on people opting in to participate, so is a self-selecting sample. These people are those most likely to have a positive experience with the tool or those wanting to feedback their dissatisfaction-the so-called 'TripAdvisor effect'-and people who 'fall in the middle' may not be providing feedback. Likewise, it may be that users who found benefits of using the tool were more likely to be those that have 'stuck with' the tool over time. In addition, the sample is relatively homogeneous-predominantly white females (the target audience for In Hand was young people aged up to 25 years, so the limited age range was as to be expected). This gender difference is observed in other research of similar tools, [16][17][18] but whether this is a mHealth usage or research participation bias is not clear. 
Second, while responses to the survey were good and numbers exceeded our initial target (see figure 3), this probably represents a small proportion of overall users: because usage of the app is recorded in user sessions, rather than individuals, it was not possible to accurately estimate the response rate to the survey. Third, as figure 3 shows, interest in taking part in the interviews was very limited. In the event, we interviewed all those that agreed and achieved only eight in total: this was lower than our target number and it was not possible to sample from the specified criteria of interest (eg, gender, different experience with the tool). The interview findings are therefore to be interpreted with caution as they are based on a small number of experiences with a homogenous sample. Users of other backgrounds and demographics may have different perspectives on In Hand. Fourth, the time period between users first interacting with In Hand and the completion of the survey was not recorded. Therefore, an assessment of the influence of recall bias on the reliability of the results is not possible. However, users would have accessed the survey through the hyperlinks within the app, so it is probable that they completed it during an app session. Finally, the Flurry analytics software is designed for developers, rather than a research tool, which brings limitations. In particular, data were aggregated into predefined numerical ranges which acted to limit the analysis. For example, Flurry categorises each individual user session into time ranges, rather than providing the actual time spent on the app. The precise format of data returned by Flurry depended on the smartphone operating system and did not always align well between these. Moreover, it was not possible to track an individual user's interactions with the app over time, for example to assess how consistent the usage was. The importance of using a more sophisticated individualised metric combining different aspects of user engagement (an 'App Engagement Index') is a current topic in the mHealth literature 19 which could be adopted in future research. evaluation of In hand As a result of this evaluation, we can describe how people used In Hand and the nature of its benefit on users' mental well-being: Original article ► Each interaction with the app was brief, but the majority of interactions were long enough to allow an active interaction with the app. ► The majority of users interacted with the app more than once and although use did taper off over time, there was a level of sustained use. ► In Hand supported users' mental well-being through changing attitude and point of view, helping with decreasing feelings of stress and increasing feelings of relaxation and supporting clear thinking. ► Users described these supports to their mental well-being were encouraged because: the app prompted users to actively reflect on their current mental state; provided an easily available strategy to ease anxiety and helped build confidence and empowerment in how users coped with their mental health day to day. These usage patterns are similar to those seen in other studies of digital interventions, 20 especially the low retention rate for many, but sustained use for some, 4 which is similar to the 1-9-90 rule observed in online forums. 
21 This rule suggests that in participating in online communities, the majority (90%) are passive users ('lurkers' who observe but do not actively participate in the online community), a minority (9%) are occasional contributors, and an even smaller proportion (1%) are the most active participants, who are responsible for the majority of contributions to the online community. 21 While other study designs can identify whether an effect is achieved, for example, increased emotional self-awareness, 18 the methods adopted in this evaluation provide an understanding of how users perceive the benefits of a digital tool. The next stage in our proposed proportionate approach would be a closer examination that would explore any adverse effects, as well as further evaluation of value and benefit, if uptake of the tool proved sufficient to warrant it. Clinical implications This paper has demonstrated how simple, self-help digital tools can be evaluated in a proportionate and practical way. Triangulating data sources provided an understanding of how In Hand was used and the ways it can support mental well-being. In particular, the survey of naturally occurring users provided insights into the value of the app from the user perspective and, in this case, provided evidence of the app having the intended effect for users. This is important for healthcare decision-makers who need to be assured of the quality of an app before recommending it to patients. In addition, analytical data provide evidence of how and when a tool is being used (or not), which enables health providers and commissioners to make a judgement on the value of a tool in order to ensure any cost is justified (not applicable in this case, as the tool is free to use). Furthermore, we have demonstrated cost-efficient, timely methods, which can be easily incorporated into digital tools, thus providing scope for audits and service evaluations.
Monocular Robust Depth Estimation Vision System for Robotic Tasks Interventions in Metallic Targets † Robotic interventions in hazardous scenarios need to pay special attention to safety, as in most cases it is necessary to have an expert operator in the loop. Moreover, the use of a multi-modal Human-Robot Interface allows the user to interact with the robot using manual control in critical steps, as well as semi-autonomous behaviours in more secure scenarios, by using, for example, object tracking and recognition techniques. This paper describes a novel vision system to track and estimate the depth of metallic targets for robotic interventions. The system has been designed for on-hand monocular cameras, focusing on solving lack of visibility and partial occlusions. This solution has been validated during real interventions at the Centre for Nuclear Research (CERN) accelerator facilities, achieving 95% success in autonomous mode and 100% in a supervised manner. The system increases the safety and efficiency of the robotic operations, reducing the cognitive fatigue of the operator during non-critical mission phases. The integration of such an assistance system is especially important when facing complex (or repetitive) tasks, in order to reduce the work load and accumulated stress of the operator, enhancing the performance and safety of the mission. Introduction Maintenance of equipment in scientific research organisations, like the European Centre for Nuclear Research (CERN), is critical in order to ensure the correct operation of the experimental infrastructure. However, people's access to experimental facilities is not always possible due to their hazardous characteristics such as the presence of radiation, high magnetic field and possible lack of oxygen in the context of underground areas. Telerobotic platforms can perform some of the maintenance tasks in a safer and more reliable manner. As a matter of fact, up until now, the CERNBot robotic platform [1] has been used in more than one hundred real interventions, which have been very successful, and has enabled the accumulation of experience in order to improve future works. Besides this, it is important to take into consideration the fact that, due to the huge amount of equipment to be maintained, the operations can be repetitive. As an example, in the Large Hadron Collider (LHC) accelerator there are around 4500 BLM sensors, which have to be checked regularly to assure the good performance of the system. In these situations, the use of standard and According to the acquired experience on robotic interventions and taking into account their growing complexity, further steps need to be performed in order to guarantee safety and efficiency. For this, reliable computer vision, object recognition and grasping modules need to be studied and integrated in the system, providing the expert operator with more sophisticated tools in order to reach the expected quality during operation, while also increasing safety and accuracy. The computer system also needs to be rapidly configured and installed, so the context of this paper focuses on the use of monocular cameras for on-hand robotic vision control, which can be easily installed in specific places of the gripper or tools. Moreover, the vision system needs to be reliable under occlusions, reflections and on metallic surfaces with lack of features, also providing specific added-values such as the automatic calculations of depth. 
These are in fact the main goals of the vision system described in this paper. State of the Art In the scientific literature a great pool of computer vision systems providing the position and orientation of targets with respect to the current camera situation can be found. Some of these systems incorporate robotic actuators with an on-hand 2D camera [7], attached in conjunction with sensors such as a laser [8,9], or a sonar [10]. Others are based on the use of single monocular cameras [11]. Different hardware setups can be used according to the issue to be addressed, such as the installation of cameras on the mobile robot, or on the scene (in a fixed position) [12,13], in order to provide an environmental third view, using also eye-in-hand techniques [14,15]. Sensor fusion systems can make use of cameras with depth information like Kinect [16] and RealSense [17]. Additionally, some related works make use of stereo cameras to track a target, in which both cameras are placed at a predetermined distance and rotation, calculating a whole 3D reconstruction of the scene [18][19][20] using Epipolar Geometry [21]. A single-camera system can also be deployed [22] simulating a stereo system, either using markers [23], or previously establishing the separation parameters among two images [24], knowing the relationship between the key-points of both images, necessary to build the Epipolar Geometry. Nobakht and Liu [25] proposed a method to estimate the position of the camera with respect to the world, using a known object to compute the position by Epipolar Geometry. Although time-of-flight (ToF) cameras [26] procure a set of key-points (based upon low modulation infrared light (20 MHz)) for a 3D reconstruction by the camera intrinsic parameters, they depend largely on the object material. All the systems presented above lead to a significant growth in the robotic platform hardware, for which the accuracy depends to a great extent on the environmental light, on the reflection against the target, and on the material. Only a few have faced the problem to recognise metallic objects, giving some promising results by the use of neural networks, while still presenting errors above 10% [27]. The present paper provides an step forward in order to allow a remotely supervised robotic system to recognise, track and estimate the position of metallic targets in real industrial and scientific scenarios. Results are very promising, which have been tested and validated in real interventions in the CERN tunnel facilities, during maintenance operations. Problem Formulation According to the current state of the art and the real necessities to be solved in robotic operations at CERN, to the best of the authors' knowledge, there is no vision-based system that allows reliable object recognition and tracking of metallic targets in scenarios with partial occlusions, reflections and luminosity constraints, permitting also the calculation of distances to the object using a simple monocular camera, which can be installed in a specific position of the gripper. As a matter of fact, for further experimentation and user operator efficiency, it is also necessary to provide a grasping determination module, which can approach and guide the robot to the target in a simple and safe manner. 
Moreover, considering the necessity to use such an Real-Time (RT) tracking vision system for both the operator's feedback and the robot arm control to assist teleoperation, we deemed fit to focus on these techniques, which provide the robot position and orientation in relation to an image pattern taken from the scene and optimising them by adding the utility of computing the depth information of metallic objects (e.g., screws, connectors, etc.), focusing on the robustness and efficiency of the algorithm. As stated in the results, the integration of such tracking techniques have already been done and validated within the structure of the CRF (CERN Robotic Framework), with the purpose of using them in the interventions that are currently being carried out. With that in mind, the proposed solution contained in this paper exploits the transformation matrix of the robot to localise the current camera position (eye-in-hand configuration) and to determine the distance to the target (i.e., depth estimation). After the integration of the system in the CERN's HRI [28] (see Figure 1), this can trigger the tracking and depth estimation for any of the objects present in the scene. The vision system also includes an object recognition module (deep neural network-based), which does not only accept the Region of Interest (ROI) input over an unknown target selected manually by the operator but also searches autonomously for the objects, letting the operator interact with the robot by referring to objects instead of bounding boxes, and enabling further experimentation to carry out semi-autonomous tasks on the recognised metallic pieces (e.g., motion planning and grasping execution). It is very important that the vision system works at a high performance, since most remote operations rely on real time visual feedback to the operator, although the 100% real-time capability cannot be really fulfilled because the 4G network does not provide such feature. This information is the main link between the human-expert and the robot, which is being operated remotely. The visual feedback needs to be provided to the user at a minimum delay, in order to avoid move-and-wait human-robot interactions, which would affect the efficiency of the system. Thus, when the teleoperation is carried out, the visual feedback has to work concurrently with the estimation and guidance process, which provide information that can be represented in the user interface using augmented reality techniques. On the other hand, for the autonomous and semi-autonomous tasks that do not require the RT feedback to the user, it is possible to gather visual data for further analysis. In summary, this paper presents a novel solution to extend the capabilities of a supervised HRI in order to improve the guidance of a robotic arm (see Section 4). For this, the system includes a novel depth estimation solution, as well as a deep learning-based Faster-Regions with Convolutional Neural Network (RCNN) Features model [29] with Resnet-101 object recognition, which work efficiently in metallic surfaces, having unexpected reflections, partial occlusions and lack of visibility. Besides this, the vision system has been designed to work on board, without needing special hardware, also enabling the use of a broad range of monocular cameras in the market (e.g., black&white, endoscope and large full-High Definition (HD) Pan-Tilt-Zoom (PTZ) cameras), as can be seen in Table 1. 
This also enables the increase of the number of available cameras to the operator, facilitating the operation task. Preliminary Experiments As a preliminary step, in this section a comparative of different tracking algorithms using the above cameras and their calibration procedure is presented. Tracking Algorithms Comparison First of all, several tracking algorithms from the scientific literature were tested in order to better understand their performance in real robotic intervention conditions. The results can be summarised as follows: • The Boosting algorithm [30] uses a set of techniques that mixes several weak classifiers algorithms to create a more robust solution. It showed the fastest performance when evaluating the features, while presenting a very low accuracy. • Babenko et al. [31] present a robust object tracking Multiple Instance Learning (MIL)-based algorithm [32], which, although it was showing high precision, the computational time was higher too, due to the fact that it considers a set of training samples that can be ambiguous, as a single object can have many alternative instances that describe it. • The Tracking-Learning-Detection (TLD) algorithm [33] tries to localise all the similarities within the scene. Thanks to this behaviour, it is capable of facing temporal occlusions, but it obtains a large number of miss-detections in scenarios with metallic parts, as well as higher computational time consumption. • A version of the Kernelized Correlation Filters (KCF) algorithm [34] has been implemented. This algorithm, which is based on Histogram of Oriented Gradients (HOG) [35,36], has shown good computational performance and the greatest accuracy by tracking different kinds of objects. This is the algorithm that has been used as the basis for the solution implementation presented in this paper. In fact, Hare et al. [37] compared their own tracking algorithm with the recently released ones, obtaining interesting results regarding those arising from HOG descriptors, instead of others derived from Haar-like features [38][39][40] such as Boosting, since HOG describes the object shape by way of edges detection or its distribution of intensity gradients after histograms concatenation from a set of small connected regions that were split from the main ROI. Then, in order to gain an improvement that is invariant to shadows and luminosity, it increases the accuracy, normalising these histograms. Therefore, since our work is based on the tasks' execution on metallic surfaces, the appearance of a large number of reflections can be readily detected, forcing us to dismiss the Haar-based algorithms as they are grounded in the pixel's light intensity instead of HOG, which provides a solution more in accordance with our requirements. Camera Calibration The calibration of the set of cameras that were used in this project becomes a critical preliminary step in order to obtain the necessary parameters that allow a proper execution of the robotic task. The camera calibration can be performed in two steps, by calculating both the intrinsic and extrinsic parameters: • Intrinsic parameters: The OpenCV solution [41] was used for this purpose, by applying the classical black-white chessboard, obtaining the distortion coefficient and the camera matrix (see Equation (1)). Although the well-known distortion present in current pinhole cameras, this does not present an issue for the aim of this work, as it is possible to discard the distortion coefficient. 
However, the camera matrix provides the essential values for this aim, where f x and f y are the focal length in X and Y axis, respectively, and c x and c y are the optical centres expressed in pixels coordinates. • Extrinsic parameters: Unlike the intrinsic parameters, this calibration provides the camera position and orientation in regards to the frame (i.e., the base of the robot). In Reference [42] a fast technique to carry out the task is presented, which is fully implemented in the ViSP library [43]. Due to the fact that the robotic system has been designed to be modular and easily re-configurable, including tools, actuators and sensors re-positioning, this calibration technique has been demonstrated to be very appropriate, due to the fact that the camera selection, as well as its position, changes assiduously (see Table 2). Table 2. Examples of grippers set and configuration of the cameras. Setup Description Default CERNBot's end-effector with 7 cm fingers length and eye-in-hand mono-camera attached to the Schunk GP15 gripper 2 monocular cameras TCP system: red box shows an eye-in-hand camera; green circle shows an end-effector endoscope camera on the pneumatic angular screwdriver key held by the Schunk GP15 gripper Finally, the transformation of the camera, with respect to the end-effector, is added to the robot matrix as the last joint of the robot configuration. System Overview A new vision-based set of software engineering tools, consisting of a tracking algorithm, in conjunction with an object recognition module and depth estimation system, has been deployed to enhance the usability of a multimodal User Interface, which is in fact the user expert entry point to the system. The vision system acts as a server, listening for requests from the user interface, triggering the method shown in Figure 2. The algorithm is based on tracking traces and computes the distance between any kind of mono-camera attached at the end-effector of the robotic arm (as shown in Tables 1 and 2), to a selected metallic object with a lack of vision features, partially occult, or under reflections. The procedure needs, as an input, a region of interest to be tracked. The ROI is extracted from the object recognition module and confirmed by the selection of the user. This ROI is split into four isolated and coordinated tracking areas (i.e., four trackers), which will depend on a parent one. Then it will lay a virtual set of key-points from the centre of each tracker and these key-points will be used as a correlation between the pair of images. The whole procedure shall be taken as a reference and replicated for every current frame, with the aim of triangulating the position of the target regardless of the movement performed on the X and Y-axis. Taking advantage of the transformation matrix of the robot, once a pair of frames with different robot TCP positions have been analysed by the system, a normal distribution begins to be fed, with the purpose of deciding on the estimation of the target's depth, looking for an error under a minimum pre-established experimental threshold (0.05% by default). If so, all the data (depth estimation, the percentage of error, and the distance between the ROI and the centre of the scene) shall be shown through the GUI by the AR module. If the system detects any tracking problems triggered by the implemented thresholds, or is taking a long time for such an estimation, it will restart the triangulation with a new reference. 
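The splitting of the ROI into four coordinated trackers and the accumulation of per-frame depth estimates into a running distribution can be illustrated with the short sketch below. This is a minimal illustration only, assuming hypothetical helper names (split_roi, DepthAccumulator) and one particular reading of the 0.05% threshold (as a relative standard error of the accumulated estimates); it is not the implementation used in the CRF.

import numpy as np

def split_roi(x, y, w, h):
    """Split a parent ROI (x, y, w, h) into four equal sub-ROIs.

    The centre of each sub-ROI serves as a virtual key-point, mirroring the
    four coordinated trackers described above.
    """
    hw, hh = w // 2, h // 2
    return [(x, y, hw, hh), (x + hw, y, hw, hh),
            (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]

class DepthAccumulator:
    """Accumulate per-frame depth estimates until they are stable enough.

    The 0.05% default mirrors the experimental threshold mentioned above,
    interpreted here (as an assumption) as a relative standard error of the
    accumulated estimates.
    """

    def __init__(self, rel_error_threshold=0.0005, min_samples=10):
        self.samples = []
        self.rel_error_threshold = rel_error_threshold
        self.min_samples = min_samples

    def add_sample(self, depth_estimate):
        self.samples.append(float(depth_estimate))

    def reset(self):
        """Restart the estimation with a new reference frame."""
        self.samples.clear()

    def converged(self):
        if len(self.samples) < self.min_samples:
            return False
        mean = np.mean(self.samples)
        # Standard error of the running (approximately normal) distribution.
        sem = np.std(self.samples, ddof=1) / np.sqrt(len(self.samples))
        return sem / max(abs(mean), 1e-9) < self.rel_error_threshold

    def estimate(self):
        return float(np.mean(self.samples))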
It is important to clarify that current robot architecture uses three main computers: (1) the server on the robot where the cameras are connected, (2) the HRI operator computer and (3) the object recognition module computer where the neural network is being executed. In fact, the object recognition server is provided with a NVidia GTX1080 GPU and 32 GB of RAM, in order to improve its performance. The tracking and depth estimation loop is going to be enhanced by a Grasping Determination module that is under development, enhancing previous experiments on fast 2D/3D grasping determination and execution. This module will allow the calculation of a list of stable grasping points that can be used by the operator to perform a picking task in a safe, accurate and supervised manner. Target Tracking, Surrounding and Approach One of the most important steps on the human-supervised telerobotic interventions is to track the target, maintain the camera's focus on it and help the operator to bring the robot to an approached position in order to prepare the required interaction. For that purpose, the computer vision system maintains the target in the field of view and assists the operator while approaching the target, in a remote controlled supervised manner. The main difficulties to be solved in order to accomplish the task are the following: • Track the target: The tracking system must be performed in a reliable and close to real-time manner in order to avoid adding extra time, resulting in a delay, to the telerobotic task. Also, the ROI of the tracked object has to be well adjusted to the target contour in order to obtain better performance and accuracy. For this, it must be taken into account that the KCF algorithm is not invariant to scale. Therefore, when the camera approaches the lens, the ROI should be increased accordingly, avoiding losing the tracking that would otherwise occur. Likewise, when the camera is moving away from the target, the ROI has to be decreased, avoiding to track a wrong area, since the depth of the whole unstructured environment (where the robot is often used to perform the interventions) could generate errors. In summary, the tracking must be invariant to scale, orientation, translation, reflections due to metallic parts, lack of luminosity and partial occlusions. • Surround the target: During intervention, according to the expert telerobotic human operators' experience, it is very common to have to turn around the target once it is detected, due to the fact that the location of the components in an unstructured environment might need to dribble obstacles and study the best trajectory to reach the goal. Meanwhile, the tracking system has to be able to follow the ROI, helping to keep the target at the centre of the view. The way to fulfil all the requirements listed above is to develop a system in which both KCF and a feature detector and extractor algorithm, work in a coordinated manner. In Reference [44] the most significant algorithms for that purpose were tested, the SURF [45] being the one that better suits our needs. The unified developed vision algorithm presents a greater tracking enhancement in terms of performance and accuracy. In fact, when KCF needs to adapt the ROI dimensions, this is rescaled by making use of the SURF-based homography estimation, by adjusting the ROI to the dimensions of the SURF bounding-box (see Figure 3). 
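The ROI rescaling from the SURF homography can be sketched with OpenCV as below. This is an illustrative sketch rather than the authors' code: it assumes an opencv-contrib build with the non-free SURF module enabled (factory names vary between OpenCV versions), and the helper name rescale_roi_with_surf is hypothetical.

import cv2
import numpy as np

# Requires an opencv-contrib build with the non-free SURF module enabled;
# factory names differ slightly between OpenCV 3.x and 4.x.
surf = cv2.xfeatures2d.SURF_create(400)   # Hessian threshold
matcher = cv2.BFMatcher(cv2.NORM_L2)

def rescale_roi_with_surf(pattern, frame, kcf_roi):
    """Adjust the KCF ROI dimensions from a SURF homography.

    `pattern` is the stored reference image of the target, `frame` the
    current camera image and `kcf_roi` the (x, y, w, h) box reported by KCF.
    Returns the KCF box with its size replaced by the bounding box of the
    projected pattern corners, or the unchanged box when no reliable
    homography can be estimated.
    """
    kp1, des1 = surf.detectAndCompute(pattern, None)
    kp2, des2 = surf.detectAndCompute(frame, None)
    if des1 is None or des2 is None:
        return kcf_roi

    # Lowe-style ratio test keeps only distinctive matches.
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    if len(good) < 8:
        return kcf_roi

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if homography is None:
        return kcf_roi

    # Project the pattern corners into the current frame and take their
    # axis-aligned bounding box as the new ROI dimensions.
    h, w = pattern.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    projected = cv2.perspectiveTransform(corners, homography)
    _, _, new_w, new_h = cv2.boundingRect(np.int32(projected))
    return (kcf_roi[0], kcf_roi[1], new_w, new_h)

Keeping the KCF position but adopting the size of the projected pattern corners mirrors the behaviour described above, where the tracker's location is trusted while its scale is corrected by the homography.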
Instead, taking advantage of the fact that, broadly speaking, tracking algorithms try to follow what is being shown inside their ROI area frame by frame (unlike those feature extraction algorithms such as SIFT [46] or SURF among others, which focus on the search for an existing pattern), it allows to turn around the target without losing it. With that in mind, when SURF has problems to detect the target, KCF (that follows the object throughout the rotation and/or translation) provides a new pattern using the current ROI square, thus updating the existing one, allowing to the homography to improve its performance under the new visual orientation. As seen in Figure 4, the first two screenshots (both in the upper row) are sharing the same pattern and yet, the two in the second row use updated patterns, which have been obtained from the ROI's covered area as soon as the homography begins to have issues with the area/object detection. Tracking-Based Depth Estimation The proposed solution to retrieve the depth estimation while tracking metallic objects must work close to RT with the aim of fulfilling the mission requirements. For this, aspects such as the visual operator feedback (critical to avoid the delay) and the data collection for both autonomous and semi-autonomous tasks must be taken into account. Besides, the robustness of the algorithm is a highly relevant key point, due to the fact that it is used in real robotic interventions on harsh and costly environments, where the safety of humans and scientific material is crucial. As a first step, the correlation between the key-points drawn from the pair of images has to be calculated, so as to triangulate the target position (see Figure 5), which will serve in order to: (1) adapt the robot velocity to the necessities with respect to the measured distance, and (2) to carry out the calculation of an adaptive trajectory to approach and reach the target, which is under development at the moment of writing. For this purpose, it is mandatory to compute the camera world coordinate position at every time, which is achieved by means of the use of the forward kinematics [47] through at least 6 DoF robotic arms [48,49], which provides the current position of its end-effector (where the camera is attached) with regards to its frame (its base). Hence, by applying a last homogeneous transformation to the matrix of the robot (as explained in Section 2.2), it is possible to get the exact position and orientation of the camera with regards to the robotic arm base, which leads to the transformation matrix calculation. Once the system starts, the ROI of the first frame will be used as reference, and the second area of interest of the peer will be obtained from the current frame. For determining the correlation points set, the movement of the camera is calculated by the difference among the inverse of the initial homogeneous transformation matrix of the camera coupled to the end-effector (as explained above), and its current transformation matrix, getting the translations in X and Y-axis. In order to make balance on the correlation system regarding to the possible rotations done, Euler [50] is applied to get the angular of these from the Equation (3). 
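A compact sketch of how the camera motion between the reference and current frames can be recovered from the robot kinematics is given below. It assumes 4x4 homogeneous matrices for the forward kinematics and the hand-eye transform; SciPy's Rotation is used for the Euler extraction, which is an implementation choice for illustration rather than the paper's own code.

import numpy as np
from scipy.spatial.transform import Rotation

def camera_pose(base_T_ee, ee_T_cam):
    """Pose of the camera in the robot base frame.

    `base_T_ee` is the 4x4 forward-kinematics transform of the end-effector
    and `ee_T_cam` the fixed hand-eye transform obtained from the extrinsic
    calibration step (both homogeneous matrices).
    """
    return base_T_ee @ ee_T_cam

def relative_motion(reference_pose, current_pose):
    """Camera translation and Euler rotation between two frames.

    The relative transform composes the inverse of the reference camera pose
    with the current one; its X/Y translation components and Euler angles
    feed the key-point correlation step described above.
    """
    relative = np.linalg.inv(reference_pose) @ current_pose
    translation = relative[:3, 3]
    euler_angles = Rotation.from_matrix(relative[:3, :3]).as_euler("xyz")
    return translation, euler_angles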
Having calculated the correlation of the key-points inside of the ROIs, the Sinus Theorem (see Algorithm 1) is applied to achieve the triangulation for each key-point (for the purpose to extract their average, due to the fact that the piece/area might no be a flat surface), in which the estimation of the distance among the target and camera is based on the translation and rotation of the camera (see Equations (4)-(6)) where: P 1 and P 2 are the projection of the key-points on the origin and current picture respectively, as well as hypo 1 and hypo 2 are the hypotenuses for each image, both the focal length ( f x,y ) and the center point (CP) come from the intrinsic parameters of the camera (see Equation (1)), and T (translation) and R (rotation, always the opposite to the translation) are the intrinsic parameters determined by applying the Equation (3). To perform this estimation, it has been mandatory to carry out the camera calibration beforehand, by obtaining the focal length from its intrinsic parameters (explained on the intrinsic parameters layer of the Section 2.2), which is strictly required to display the key-points projection on the 2D plane that is generated by each image of the triangulation system (see Figure 5). Considering that the system design is made to allow free movement in space, it does not need to know the original position of the system reference frame and the camera rotation in X and/or Y axes during the motions. Metallic Pieces Detection With the aim of offering to the user a higher level of interaction with the system, a deep learning-based module for object recognition [51] has been integrated (see Figure 6), which allows metallic object recognition in a robust manner (e.g., connectors, sockets and patch panels) upon non-textured attributes. The module is based on Faster-RCNN already pre-trained in COCO [52]. The neuronal model that showed greater accuracy for this technique, by detecting a large number of metallic parts of our interest, is the ResNet-101 [53], which obtained total losses below 0.05%, better results than other models such as Inception-v2 [54] and ResNet-50, where the score was over 0.1%. In total, 500 pictures of 6 different objects of interest (see at Section 7.2) were used to train the method along 100,000 steps. However, the loss function already converges for classification and box estimation in step 30,000, by using the same COCO parameters. The performance of this solution is capable of detecting objects at the remote robotic site in under 1 s (network dependency), delivering a bounding box for each detected object to the HRI, which will be offered to the operator, allowing him to directly choose the object to be tracked, starting the depth estimation procedure. Also, it is important to clarify that the metallic pieces detection using the neural network techniques is applied when required by the operator, normally when the target is faced to the robot and before the intervention starts. On the other hand, the tracking system is working continuously on the robot side. We cannot tell the system is working in real time because the 4G network that connects the robot to the surface is not providing this capability. Anyway, the system looks for the best performance in order to improve the efficiency of the intervention. 
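Returning to the depth computation, the sine-rule triangulation of Equations (4)-(6) can be illustrated, under simplifying assumptions, with the planar sketch below: a pure lateral camera translation t_x, no rotation, and key-points expressed by their horizontal pixel coordinates. The function names are hypothetical and the full system additionally compensates rotations, so this is only a reduced version of the method described above.

import numpy as np

def sine_rule_depth(p1, p2, fx, cx, t_x):
    """Depth of one key-point from two views using the law of sines.

    `p1` and `p2` are the horizontal pixel coordinates of the same virtual
    key-point in the reference and current images, `fx` and `cx` come from
    the intrinsic camera matrix, and `t_x` is the lateral camera translation
    between the two frames (from the robot kinematics).
    """
    theta1 = np.arctan((p1 - cx) / fx)   # viewing-ray angle, reference frame
    theta2 = np.arctan((p2 - cx) / fx)   # viewing-ray angle, current frame
    parallax = theta1 - theta2           # apex angle of the triangulation triangle
    if np.isclose(parallax, 0.0):
        raise ValueError("camera motion too small: no measurable parallax")
    # Law of sines gives the range from the reference camera centre; the
    # projection onto the optical axis yields the depth.
    range_from_reference = t_x * np.cos(theta2) / np.sin(parallax)
    return range_from_reference * np.cos(theta1)

def average_depth(reference_points, current_points, fx, cx, t_x):
    """Average the per-key-point depths, since the target may not be flat."""
    depths = [sine_rule_depth(p1, p2, fx, cx, t_x)
              for p1, p2 in zip(reference_points, current_points)]
    return float(np.mean(depths))

In this reduced case the result coincides with the classical disparity relation Z = fx * t_x / (p1 - p2), which offers a quick sanity check of the geometry.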
Features-Extractor-Based Key-Points Correlation As a first approach, the system has been designed using the SURF-algorithm as a basis, which has been used to obtain an adequate set of key-points for finding the correlation between the origin and current images. The ensemble of features provided by SURF have to be treated appropriately in order to be useful to the next step of the algorithm. Since the set of key-points are extracted from the ROI of the scene (reducing the computing time), these must be reallocated on the plan (looking for the real position upon the view) increasing X and Y with regards to the ROI origin coordinates. Then, the outliers are filtered, which include both the points out of the bounding box, as well as those that, according to the Euclidean-distance [55], are greater than the experimental threshold established previously (see Figure 7). In order to help the robot operator reach the proper position to perform the task, or to guide the mobile manipulator autonomously to attain the target, the system must send the robot's next position meanwhile the estimation of the distance to the target is carried out. Because of the instability of the homography performed by SURF in these kinds of scenarios where the lack of features and metallic surfaces are the most common situation, KCF was used instead, with the goal of smoothing the current position of the interest area, tracking it and estimating the next position. The use of the vision system and depth estimation has also demonstrated the improvement the focus and quality feedback sensation of the operator, avoiding undesired cognitive fatigue. Besides this, the robotic arm work-space and reach-ability [56][57][58] was considered in the algorithm, since the limits of the robot movements or the positions reached due to singularities [59,60] can affect directly the estimation, being necessary in those undesired situations to correct the arm position and restart the assessment. Tracking-Based Key-Points Correlation Despite the fact that SURF-based results show an excellent performance in terms of accuracy, they also show instability in the required scenarios, where the object and its surroundings are metallic, with very poor texture features, and the possibility of glares and partial occlusions. Due to this situation, it has been necessary to redesign and find out an extended solution to fix the weakness of the approach exposed above (Section 5.2). KCF has been used as a replacement for SURF as key-points supplier with the aim of gaining this necessary stability, sacrificing such essential characteristics as the homography and the partial occlusions that feature extractor algorithms commonly offer, facilitating the correlation task. In order to take advantage of the enhancements made with the use of KCF and with the goal of overcoming the above-mentioned disadvantages, the algorithm proposed uses five tracking regions instead of one (see Figure 8). The main frame represents the whole ROI, which is divided into four mini-trackers, of which the centre will be considered as the key-points. Then, each little square will work independently, tracking its own area, correlating the key-points (within our selected screen region), between the points of the original and current images. 
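A sketch of the four coordinated sub-trackers is given below, again assuming an opencv-contrib build (tracker factory names vary across versions). The class name QuadTracker and the displacement-consistency filter are illustrative additions, the latter only anticipating, in a simplified form, the Euclidean-distance threshold described next.

import cv2
import numpy as np

class QuadTracker:
    """Track the four sub-regions of a parent ROI with independent KCF trackers.

    The centre of each sub-tracker serves as a virtual key-point for the
    reference/current correlation step.
    """

    def __init__(self, frame, roi):
        x, y, w, h = roi
        hw, hh = w // 2, h // 2
        boxes = [(x, y, hw, hh), (x + hw, y, hw, hh),
                 (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
        self.trackers = []
        for box in boxes:
            tracker = cv2.TrackerKCF_create()
            tracker.init(frame, box)
            self.trackers.append(tracker)

    def keypoints(self, frame):
        """Return the four sub-ROI centres for the current frame.

        A key-point is None when its tracker reports a failure, so the caller
        can filter it out before the correlation step.
        """
        points = []
        for tracker in self.trackers:
            ok, box = tracker.update(frame)
            points.append((box[0] + box[2] / 2.0, box[1] + box[3] / 2.0) if ok else None)
        return points

def filter_consistent(reference_points, current_points, max_deviation=20.0):
    """Keep key-point pairs whose displacement stays close to the median.

    A simple consistency check across the four trackers, used here only to
    illustrate outlier rejection before triangulation.
    """
    pairs = [(r, c) for r, c in zip(reference_points, current_points)
             if r is not None and c is not None]
    if not pairs:
        return []
    displacements = [np.hypot(c[0] - r[0], c[1] - r[1]) for r, c in pairs]
    median = np.median(displacements)
    return [p for p, d in zip(pairs, displacements) if abs(d - median) <= max_deviation]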
Because of this, the problems presented by the partially hidden targets and the constraints from the invariance to rotations are arranged, allowing full freedom movements to the camera, crucial to the proper system performance, which joins the properties of each algorithm, compensating the weakness of one another. On the other hand, due to the featureless ROIs covered by the trackers, a Euclidean-distance-based threshold has been established (see Algorithm 2), which compares the behaviour of each tracking with each other, trying to predict the performance of each tracker. This allows the detection of the potential misbehaving of the squares. This guarantees the supply of a set of data with best key-points correlations, after filtering the outliers and avoiding the disturbance of the estimation made by the triangulation. The estimation can also be restarted if necessary. Algorithm 2 Euclidean-based threshold to avoid the wrong behaviour of the squares, where rR f is the initial reference position of each ROI, and thresholdErr the threshold set by the user. function System Testing and Commissioning Object tracking and depth estimation have been integrated and tested in both autonomous and supervised systems, which are already fully integrated into CERN's robotic framework, endowing it with an artificial intelligence capable of guiding the operator (i.e., supervised performance) or performing the task by itself (i.e., autonomous behaviour). The testing and commissioning on real interventions have been carried out under the use of different kind of industrial cameras, from normal webcams to endoscopic ones (see Table 1 in Section 1.2). In fact, in this section the following use cases are going to be described: (1) example of vision-based autonomous behaviour, where the tracking and depth information is used to automatically perform an intervention in the panel of a machine present at the LHC; (2) example of a semi-autonomous vision human-supervised task, where the operator uses the vision system to assist in an intervention task with a connector; and (3) the contingency behaviours added to the vision algorithm to enhance its safety and accuracy for real interventions. Example of Vision-Based Autonomous Behaviour This system is a state machine developed with the aim of detecting a switch on a Heater Discharge Power Supply (QPS) to turn it on/off, which is largely present at the LHC tunnel. The system has shown very good results, being a nice example of how powerful the ecosystem created by SURF+KCF can become (see Figure 9). The state machine is composed by the following states: • PTZ-Camera Visual Servoing Robot Control: A visual servoing system has been deployed to drive the robot (see Figure 10) to the target through an Axis PTZ-camera (see Figure 11), which is in charge of finding out the QPS, making use of the SURF algorithm, and a set of patterns previously loaded. Due to the fact that the camera's framework uses the internet network protocols, a request and response communication-based Python controller has been embedded on the system (see Listing 1) to guide the platform and position it in front of the target in a proper distance, so that this can be reached. • Vision control for arm orientation: With the robot arm already approaching the target, the robotic arm is triggered to a specific position and the gripper camera is switched on, while the PTZ-camera remains disabled. 
Thus, the orientation of the robot with regards to the target device is calculated by the homography provided by SURF through the gripper camera. For that purpose, the intersection of the opposite corners of the square-homography gives the current orientation, as seen in Figure 12. Figure 12. Use of the square-homography intersection to fix the orientation. The squares meaning is: the left one needs to turn right, the one at the centre is well oriented, the right one needs to turn left. • Depth Estimation: insofar as the switch detection is done (by using the split left side of the frame, since the switch location is perfectly known), the depth estimation presented in this document is launched, providing the distance to the camera and placing the gripper towards the switch. • Fine-grained approach to the target: Apply T Z (see Equation (7)) translation upon approaching direction by inverse kinematics to reach the switch, where d Z c e is the distance from the camera to the end-effector, taken from the approach parameter on Equation (8) Example of Semi-Autonomous Vision Human-Supervised Task Taking into consideration that the vision system can work autonomously in controlled environments, it is worth mentioning that, in order to perform such an intervention on unstructured hazardous environments and expensive scientific facilities, it is necessary to keep an operator always in the loop, which can supervise the semi-autonomous behaviours, stop them if necessary and even take manual control of the robots due to unexpected situations. The proposed human-supervised control solution has been integrated in the CRF, including both the server controller and the client Human-Robot Interface. The CRF gets the required ROI from the HRI, as explained in Section 5, either from an area of interest selected by the operator, or running some object detection solutions, after the required training and setup. Then, the HRI will show to the operator the sensors feedback provided from the system, through a multimodal and augmented reality module and it shall adapt the robot velocity to the perceived depth. Besides this, if the operator considers it necessary and safe, the automatic tracking can take control of the arm to approach the target in an smooth manner, trying to avoid mistakes on the approaching time and keeping the goal centred in regards to the frame (see Figure 13). It is worth noting that the visual feedback to the operator runs according to the frame rate of the camera used, since the frame shown on the GUI shall be the current one, although the information goes relative to computational load, avoiding bad sensations to the operator as well as dealing with the possible tiredness. Contingency Behaviours Bearing in mind the robustness and stability that the system must show working in costly and unstructured scenarios, the contingency plans become primordial by anticipating the possible losses of the targets generated by the uncontrollable situations (i.e., full target occlusion, ROI disappearance due to the robotic platform has gone through an obstacle/crack, etc.). If so, the estimation and guidance stop, switching on SURF (which will deal with the retrieval of the tracker) making use of a pattern that was previously extracted from the main ROI, which can be traced back from Figure 14 (binary pictures) when the tracking updates its situation. 
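The contingency behaviour just described can be summarised by a small supervisory loop such as the sketch below. It is only an outline under assumptions: tracker, pattern_matcher and robot are hypothetical wrappers around the KCF tracker, the SURF pattern search and the arm controller, and the two-state logic is a simplification of the behaviour used in the real system.

TRACKING, RECOVERING = "tracking", "recovering"

class ContingencySupervisor:
    """Pause guidance when the tracker is lost and re-acquire the target.

    `tracker`, `pattern_matcher` and `robot` are assumed wrappers around the
    KCF tracker, the SURF pattern search and the arm controller;
    `stored_pattern` is the image patch saved from the main ROI before loss.
    """

    def __init__(self, tracker, pattern_matcher, robot, stored_pattern):
        self.tracker = tracker
        self.pattern_matcher = pattern_matcher
        self.robot = robot
        self.pattern = stored_pattern
        self.state = TRACKING

    def step(self, frame):
        if self.state == TRACKING:
            ok, roi = self.tracker.update(frame)
            if not ok:
                self.robot.stop()          # halt estimation and guidance
                self.state = RECOVERING
                return None
            return roi
        # RECOVERING: look for the stored pattern in the scene with SURF.
        roi = self.pattern_matcher.locate(self.pattern, frame)
        if roi is not None:
            self.tracker.init(frame, roi)  # hand control back to the tracker
            self.state = TRACKING
        return roi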
Meanwhile, the robot gets stabilised afterwards in order to get out from the crack (or the operator turns the view to the target again) and the system looks for the reference in the scene, which shall allow the continuation of the estimation and the guidance that was being carried out. Hence, applying a feature extractor (such as SIFT or SURF) collaboratively with the enhanced tracker exposed on the document, generates a greater robust ecosystem, able to deal with undesired situations as well as to self-heal from the issues (Figure 15). Accuracy Experiments The presented module, which is currently being used in real robotic interventions within CERN's facilities in a successful and robust manner, is capable to run in both harsh and featureless environments, providing guidance and surrounding (either for a robotic platform as for the operator in charge), and a depth estimation by correlating a set of key points in a novel way and be close to RT performance. The system that is integrated in the CERN's HRI, achieves an accuracy of over 90% in the depth estimation measurements (see Figure 16), with under one centimetre of error. With these matters, the set of current tracking algorithms tested and described above (see Section 2.1) has been integrated into the system developed to prove its performance, and thus demonstrate why all those whose properties do not provide what is necessary for the development of our tracking system have been rejected. Therefore, as seen in Figure 17, KCF showed the best performance in consideration of both computational time and accuracy. Figure 18 shows the yield of the novel algorithm that is proposed in this document, where the depth estimated by SURF and KCF solutions are compared against known distances, which present large differences in terms of stability. Regardless of the high accuracy that both solutions have proved (see Table 3) with an average error under 0.5 cm, is the KCF-based one that represents the higher level of stability, achieving more than 94%. This percentage comes from the amount of frames that provides the correct measurement. Once the distance is achieved, it keeps the camera in the final position, removing the frame outliers per thousand samples. Furthermore, joining the opposite corners from the isolated little squares, the system overcomes to the well-known rotations constraints that the tracking algorithms show (see Figure 8 in Section 5.3) and the partial occlusions (see Figure 14) that could happen in the time frame that the robot moves carrying out the interventions, besides allowing the possibility to calculate the homography for those systems with missing matrix of the robot. In addition, the algorithm endows maximum freedom of movement to the robotic arm, translating and rotating the camera either in X and Y-axis. Due to this, the translation must be increased in regards to the rotation done (see Figure 19), compensating it and avoiding the drop of the set of key-points outside of the scene or the exchange of their position within the triangulation system. Figure 19. Relationship among translation and rotation (X and/or Y axes) to achieve the triangulation. Metallic Targets Data-Set for Tracking and Object Recognition Benchmarking In order to enable further tracking and object recognition experimentation on metallic targets, the used data-set is provided, which is available at (https://cernbox.cern.ch/index.php/ s/08vGzLeQ1w9CFac). 
In Table 4 the list of objects that have been used to train the object recognition neural network can be found . Videos Some videos of the experimentation are also provided in this section. • Contingency behaviour: this video shows a safety contingency procedure used in the tracking and depth estimation algorithm, to avoid the robot to move once the tracking has been lost, and also helping it to recover the track once the object is facing the camera. For this, once the tracked object is lost, the last tracked ROI is used by a tracking thread to explore the next camera frames, which allows the system to better recover the track according to the new reflections and luminosity target state. (https://cernbox.cern.ch/index.php/s/kEIIK6hdPwnUdDk) • Depth Estimation: in this video a robotic arm with on-hand camera facing a pool of metallic connectors (i.e., targets) is presented. First of all, the video shows the selection of the ROI by the operator, which enables the tracking and depth estimation procedure. Also, in the second part of the video the connectors are recognized by the deep learning algorithm. Then, once the operator selects to object to track, the system calculates its depth. (https://cernbox.cern.ch/index.php/s/ Qguw2RMNLr0SwuO) Conclusions and Future Work This paper has presented a tracking based depth estimation system including the recognition of metallic objects, which has been successfully developed (see Figure 20) and validated at CERN to perform real telerobotic interventions in radioactive environments. The system permits the calculation of the depth at which a metallic target is located, once this has been detected by either an operator selection, or using a deep learning algorithm, with the aim of assisting the expert operator during human-supervisory control of the robot platform, including semi-autonomous vision-based interventions. The information obtained from the vision system is represented in the HRI in a multimodal and augmented reality manner. Due to the necessity of using the system in real interventions in the LHC tunnel, which is a huge responsability in terms of equipment where the operation has to be accomplished, the priority of the system is to guarantee the safety, while providing efficiency and reliability. For this, the vision system needs to work appropriately in the presence of reflections, light constraints and in partially occluded scenarios. As shown in Figures 21 and 22, current and further work will focus on enhancing the vision system in order to calculate the grasping determination of the target in a fast and reliable way. Having this objective in mind, the contour extracted from the tracking and object recognition algorithm is going to be processed by calculating the list of stable grasping points by using the algorithm explained in Reference [61], according to the symmetry knowledge. The 2D grasping determination can also be adapted in 3D using a more sophisticated extension of the algorithm, as explained in Reference [62]. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations The following abbreviations are used in this manuscript:
2019-07-25T13:03:54.705Z
2019-07-01T00:00:00.000
{ "year": 2019, "sha1": "80a7ba9647152af7473863b84dee22c1b9ec09fe", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/19/14/3220/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "80a7ba9647152af7473863b84dee22c1b9ec09fe", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
219123834
pes2o/s2orc
v3-fos-license
Structural connectivity predicts functional activation during lexical and sublexical reading A critical question in neuroscience is the extent to which structural connectivity of the brain predicts localization of brain function. Recent research has suggested that anatomical connectivity can predict functional magnetic resonance imaging (fMRI) responses in several cognitive domains, including face, object, scene, and body processing, and development of word recognition skills (Osher et al., 2016; Saygin et al., 2016). However, this technique has not yet been extended to skilled word reading. Thus, we developed a computational model that relates anatomical connectivity (measured using probabilistic tractography) of individual cortical voxels to fMRI responses of the same voxels during lexical and sublexical reading tasks. Our results showed that the model built from structural connectivity was able to accurately predict functional responses of individual subjects based on their structural connectivity alone. This finding was apparent across the cortex, as well as to specific regions of interest associated with reading, language, and spatial attention. Further, we identified the structural connectivity networks associated with different aspects of skilled reading using connectivity analyses, and showed that interconnectivity between left hemisphere language areas and right hemisphere attentional areas underlies both lexical and sublexical reading. This work has important implications for understanding how structural connectivity contributes to reading and suggests that there is a relationship between skilled reading and neuroanatomical brain connectivity that future research should continue to explore. Structural connectivity predicts cortical activation during lexical and sublexical reading A fundamental question in cognitive neuroscience is the extent to which structural brain connectivity contributes to cognitive processing. Of particular interest to the current work, reading is a relatively recent human invention that, unlike other cognitive skills (such as speech), requires effortful, explicit instruction in order to be successful. Reading ability has been shown to rely on dissociable, but overlapping, functional and structural networks of brain regions that subserve lexical and sublexical processing. Interestingly, however, while there might be great variability in reading instruction, writing systems, and even processing modality (e.g., alphabetic scripts versus Braille), the cognitive and neural architecture of the reading network appears to develop approximately the same across individuals (e.g., Perfetti, 2011;Rueckl et al., 2015). This suggests that there is a consistent underlying functional and structural neural architecture that potentiates the development of skilled reading. However it is also clear that, even amongst skilled readers, there is heterogeneity of reading processes (Andrews, 2012) that are associated with consistent differences in neural activation (Welcome and Joanisse, 2012). This suggests that there is not a simple, unitary definition of skilled reading, and, instead, that skilled reading may come in many different forms. 
Functionally, orthographic processing is associated with a ventral occipitotemporal circuit of brain regions consisting of the left inferior temporal gyrus, lateral extrastriate regions, and left fusiform gyrus that encompass the anatomical 'visual word form area' (VWFA; e.g., Dehaene and Cohen, 2011;Glezer et al., 2009;Stevens et al., 2017;Taylor et al., 2013). Lexical reading has been shown to recruit this ventral system, whereby whole-word reading processes can be optimally promoted through reading of exception words (EWs; words with irregular spelling to sound correspondences, e.g., 'bowl'; Borowsky et al., 2006;Borowsky et al., 2007;Cummine et al., 2015Cummine et al., , 2013. Further, sublexical reading has also been shown to activate the VWFA, as well as recruit a dorsal temporoparietal circuit consisting of the angular and supramarginal gyri and posterior superior temporal gyrus (Taylor et al., 2013). Sublexical reading can be promoted through reading of pseudohomophones (PHs; non-words that when decoded phonetically sound like real words, e.g., 'bohl'; Borowsky et al., 2006Borowsky et al., , 2007, which have similar phonology and semantics as real-words. Phonological representation is associated with the posterior superior temporal gyrus, angular gyrus, and supramarginal gyrus as well as the left inferior frontal gyrus (IFG, i.e., Broca's area; Taylor et al., 2013). The importance of visuospatial attention in reading has recently been stressed. For example, during reading development, spatial attention plays an important role in remediating reading impairments, whereby spatial attentional training (via action videogames) leads to significant improvements in reading ability (Franceschini et al., 2012;Franceschini et al., 2017). In skilled readers, Ekstrand et al. (2019aEkstrand et al. ( , 2019b found evidence that lexical and sublexical reading strategies are differentially associated with the attentional orienting regions outlined by Corbetta and Shulman (2002). Specifically, lexical reading relies more strongly on ventral, reflexive attentional orienting areas (i.e., the right temporoparietal junction, TPJ), whereas sublexical reading relies more strongly on dorsal, voluntary orienting areas (i.e., the right superior parietal lobule, SPL, and intraparietal sulcus, IPS). Thus, spatial attentional regions of the brain appear to play an integral role in both reading development and skilled reading. Skilled reading relies on adequate communication between these regions, as well as visual encoding and motor output regions via a network of white matter pathways, which can be reconstructed using diffusion tensor imaging (DTI). Previous research has shown that anatomical connectivity can be used to predict fMRI activation for several cognitive processes, including face, object, scene, and body processing. Seminal work by Saygin et al. (2012) examined the ability of voxel-wise DTI connectivity to predict face selectivity in the right fusiform gyrus. Results from this study were robust and suggested that fMRI activation to faces could be accurately predicted from DTI connectivity. DTI connectivity also predicted fMRI activation better than group fMRI average models, suggesting increased sensitivity of this technique to identify individual differences in task-based fMRI activation. Osher et al. (2016) extended these findings to four visual categories (faces, objects, scenes, and bodies) across the entire brain. 
Their results indicated that models built from DTI connectivity outperformed group fMRI average models (whereby the researchers argue that group average data is currently the only alternative means of predicting voxel-wise neural responses in a new participant) and were able to successfully predict functional responses across the four visual processing categories. This suggests that individual DTI connectivity can be used to predict brain responses to cognitive functions. Of particular interest to the current study, previous research has suggested that voxel-wise anatomical connectivity is strongly associated with reading ability, particularly during reading development. Saygin et al. (2016) examined white matter connectivity to the VWFA in children pre-literacy (age 5) and after reading acquisition (age 8) to see if early VWFA connectivity could predict subsequent reading acquisition. To achieve this, the researchers identified the location of the VWFA at age 8 and created a model that utilized the DTI connectivity of the child at age 5 to predict activation in the VWFA at age 8. Their results showed that even prior to functional selectivity in the VWFA for words (i.e., at pre-literacy), there is a distinctive pattern of structural connectivity that is able to predict subsequent reading regions. However, the relationship between voxel-wise anatomical connectivity and skilled reading has yet to be explored using a computational modeling approach similar to Osher et al. (2016). Examination of the relationship between structure and function of skilled reading will be essential for identifying biomarkers of skilled reading, which in turn may inform the assessment of literacy skills and interventions. Thus, the current study seeks to investigate the extent to which underlying white matter connectivity (measured via DTI tractography) is able to predict fMRI activation during both lexical reading and phonetic decoding in skilled readers. To do this, we will use a similar technique to Osher et al. (2016) that models the relationship between whole-brain structural DTI connectivity and task-based fMRI activation during both lexical and sublexical reading. In line with Saygin et al. (2016), we hypothesize that there will be a strong relationship between structural connectivity profiles and brain function that will generalize to reading in skilled readers. Therefore, using computational modeling techniques, individual structural connectivity should predict fMRI activation in reading tasks, particularly in areas such as the left fusiform gyrus (including the anatomical VWFA), IFG (i.e., Broca's area), and supramarginal and angular gyri, as well as spatial attentional areas that may contribute to reading from Corbetta and Shulman's (2002) model, including the right TPJ, SPL/IPS, IFG, and frontal eye field (FEF). Further, we will be able to examine which connections are significant for predicting task-based activation, thus uncovering structural networks associated with lexical and sublexical reading. Participants Thirty participants (mean age 27.1, 15 males) performed DTI scans and lexical (EW) and sublexical (PH) reading tasks during fMRI. All participants spoke English as their first language and reported normal or corrected-to-normal vision. 
The participants gave written informed consent to participate in the study and all testing procedures were approved by the University of Saskatchewan Research Ethics Board and have therefore been performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. Raw data were generated at the University of Saskatchewan. Data and relevant code from this study are available upon direct request by contacting the corresponding author, R.B. DWI acquisition parameters and tractography All imaging was conducted using a 3T Siemens Skyra scanner. Wholebrain anatomical scans were acquired using a high resolution magnetization-prepared rapid acquisition gradient echo (MPRAGE) sequence consisting of 192 T1-weighted echo-planar imaging (EPI) slices of 1-mm thickness (no gap) with an in-plane resolution of 1  1 mm (field of view (FOV) ¼ 256; repetition time (TR) ¼ 1900 ms; echo time (TE) ¼ 2.08 ms). DTI data were acquired using 195 EPI slices of 4-mm thickness (no gap) with an in-plane resolution of 1.72  1.72, (FOV ¼ 220; TR ¼ 3700 ms; TE ¼ 95 ms; diffusion weighting isotropically distributed along 60 directions; b-value 1000 s mm À2 , with a b 0 volume interspersed every 10 diffusion directions). The top two coil sets (16 channels) of a 20-channel Siemens head-coil were used, with the bottom set for neck imaging (four channels) turned off. Preprocessing included alignment to the b 0 images using FSLs eddy-correct tool (http://fmrib. ox.ac.uk/fsl) to correct for head motion and eddy current distortions, removal of non-brain tissue using the Brain Extraction Tool (BET) from FSL (Smith, 2002), and registration to the high resolution anatomical (T1-weighted structural) scans using FSLs flirt (FMRIB's Linear Image Registration Tool; Jenkinson, Bannister, Brady and Smith, 2002;Jenkinson and Smith, 2001). Next, the GPU version of FSLs bedpostx (Bayesian Estimation of Diffusion Parameters Obtained using Sampling Techniques; Hern andez et al., 2013), ran on a NVIDIA GTX 1070 GPU with 8 GB of RAM, was used to build sampling distributions of the diffusion parameters at each voxel necessary for probabilistic tractography. Tractography proceeded as follows. First, each of the 268 regions from the Shen et al. (2013) atlas were transformed into diffusion space using FSLs flirt and were checked and corrected for registration errors (if necessary). The DTI-registered parcels were then used as seed and target regions for fiber tracking. Fiber tracking was performed using the GPU version of FSLs probtrackx tool (Hernandez-Fernandez et al., 2016), which uses probabilistic tractography to create a connectivity distribution at each voxel in the seed region (5000 streamline samples per voxel) to each of the target regions, with the distance correction option. This procedure results in a vector of connection probabilities from each voxel in the seed region to all other brain regions. FMRI protocol For each of the functional tasks, T2*-weighted single shot gradientecho EPI scans were acquired using an interleaved ascending EPI sequence, consisting of 65 vol of 25 axial slices of 4-mm thickness (1-mm gap) with an in-plane resolution of 2.65-mm  2.65-mm (FOV ¼ 250) using a flip angle of 90 . The top two coil sets (16 channels) of a 20-channel Siemens head-coil were used, with the bottom set for neck imaging (four channels) turned off. Acquisition slices were positioned to prioritize complete coverage of the cortex. Additional foam padding was used to reduce head motion. 
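Returning to the tractography output described above, the per-voxel connection vectors for one seed parcel can be assembled into the voxel-by-region matrix used later for modeling. The sketch below assumes that probtrackx was run with one output image per target region, so that the output directory contains seeds_to_<target>.nii.gz volumes in which each seed voxel holds the (distance-corrected) streamline count to that target; the helper name connectivity_matrix is hypothetical.

import glob
import os

import nibabel as nib
import numpy as np

def connectivity_matrix(seed_mask_path, probtrackx_dir):
    """Voxel-by-target connectivity matrix for one seed parcel.

    Assumes the probtrackx output directory contains one
    seeds_to_<target>.nii.gz image per target region, each seed voxel holding
    the streamline count to that target.  Returns an array of shape
    (n_seed_voxels, n_targets).
    """
    seed_voxels = nib.load(seed_mask_path).get_fdata() > 0
    target_images = sorted(glob.glob(os.path.join(probtrackx_dir, "seeds_to_*.nii.gz")))
    columns = [nib.load(path).get_fdata()[seed_voxels] for path in target_images]
    return np.column_stack(columns)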
In order to acquire verbal behavioral responses, we used a sparse-sampling (gap paradigm) fMRI method that allows the participant to respond during a gap in image acquisition (TR ¼ 3300 ms, with a 1650 ms gap of no image acquisition; TE ¼ 30 ms; flip angle ¼ 90; e.g., Borowsky et al., 2007Borowsky et al., , 2013. Participants responded vocally (and were instructed to minimize jaw and mouth movements) during the regular, periodic 1650 ms gap in the image acquisition that followed the offset of each volume of image acquisition, which allowed the participants to respond with no noise interference from the MRI. Lexical (EW) and sublexical (PH) reading tasks occurred in two separate runs in the same session. Stimuli and procedure Stimuli were presented using a PC running EPrime software (Psychology Software Tools, Inc., http://www.pstnet.com) via MRI compatible goggles (Cinemavision Inc., http://www.cinemavision.biz). Continuous synchronization between the MRI and the experimental paradigm was maintained by detection of the leading edge of the fiberoptic signal emitted by the MRI by a Siemens fMRI trigger converter at the beginning of each acquisition volume that was then passed to the EPrime PC via the serial port. The order of the EW of PH reading conditions was counterbalanced between participants. Reading tasks. The trial progression for each of the reading tasks was as follows. Participants were presented with 30 target stimuli (either EWs or PHs depending on the task) in a random order with five stimuli presented per block (for a total of six blocks), interspaced with 6 blocks of relaxation. The order of the EW and PH tasks was counterbalanced between participants. The list of stimuli can be found on OSF (https://osf. io/grn9c/). A black central fixation cross (0.6 in height) on a white background was presented. Following this, a jitter of 100, 200, 300, 400, or 500 ms (presented randomly) occurred before presentation of the EW or PH stimulus. This jitter was included to provide more accurate estimates of activation across conditions by staggering the temporal relationships between trial types, thus sampling different components of the hemodynamic response function (Amaro and Barker, 2006). Participants were asked to read the stimulus aloud as quickly and accurately as possible during the gap in acquisition when the stimulus was presented (1650 ms). EW and PH stimuli were matched on several of the characteristics available from the E-Lexicon Database (http:// elexicon.wustl.edu/), specifically length (t(48) ¼ 0.436, p ¼ 0.665) and log10 base word frequency (t(48) ¼ -.176, p ¼ 0.861). In line with Ekstrand et al. (2019aEkstrand et al. ( , 2019b, phonetic decoding of PHs was used to examine sublexical reading as opposed to pseudowords based on PHs identical phonological representation to their word counterpart and identical meaning. Thus, PHs offer the greatest experimental control for examining differences between lexical and sublexical reading by ensuring that differences in activation are due solely to differences in decoding strategy, not phonology or semantics. During relaxation, a central fixation cross was presented on the screen. FMRI analyses Prior to analysis, the functional scans for the EW and PH reading tasks were merged (i.e., concatenated) across time to create a single functional volume, whereby the PH trials were appended to the EW trials, in order to allow for comparison of EW versus PH reading at the individual subject level. 
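The merge of the two runs can be done, for example, with nibabel as in the sketch below; the file names are placeholders and nibabel is an illustrative choice (FSL's fslmerge achieves the same result), not necessarily the tool used here.

import nibabel as nib
import numpy as np

def concatenate_runs(ew_path, ph_path, out_path):
    """Append the PH run to the EW run along the time axis.

    Assumes both runs share the same voxel grid and affine.
    """
    ew = nib.load(ew_path)
    ph = nib.load(ph_path)
    merged = np.concatenate([ew.get_fdata(), ph.get_fdata()], axis=3)
    nib.save(nib.Nifti1Image(merged, ew.affine, ew.header), out_path)

concatenate_runs("sub01_EW.nii.gz", "sub01_PH.nii.gz", "sub01_EW_PH.nii.gz")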
FMRI preprocessing and analysis was performed using FSL's FEAT (FMRI Expert Analysis Tool) protocol Version 6.0 (FMRIB, Oxford, UK, http://www.fmrib.ox.ac.uk/fsl/). Preprocessing included MCFLIRT linear slice-time/motion correction (Jenkinson et al., 2002), BET brain extraction (Smith, 2002), spatial smoothing using a Gaussian kernel of full-width half maximum 5-mm, grand-mean intensity normalization of the entire 4D dataset by a single multiplicative factor, high-pass temporal filtering (0.01 Hz; Gaussian-weighted least-squares straight line fitting, with sigma ¼ 16.0s), and normalization to the Montreal Neurological Institute (MNI) 152 T1 2-mm template (however all modeling was performed in the participant's native space). For more accurate registration, the fMRI images were registered first to the high-resolution MPRAGE scan for each participant (6-df linear registration) before registration to the MNI 152 template (12-df linear registration). Functional images were then resampled using 2-mm isotropic voxels and smoothed with a Gaussian kernel of 5 mm full-width at half-maximum into standard space. Individual subject level comparisons of EW and PH reading, as well as EW greater than PH and PH greater than EW contrasts were performed for each participant, resulting in four t-statistic images. Specifically, firstlevel analyses for each participant were performed using a sinusoidal double-gamma hemodynamic response function convolution that modeled the 5 stimuli over each block versus rest for the EW and PH runs, as well as an additional contrast that modeled the EW blocks versus the PH blocks. Time-series statistical analysis was carried out using FILM with local autocorrelation correction (Woolrich et al., 2001). The t-statistic images were standardized using the same technique as Osher et al. (2016). Specifically, we subtracted the mean functional value of the whole brain from the functional response at each voxel and divided it by the standard deviation of the whole brain. All modeling was then performed on these standardized t-statistic images. We then transformed the whole-brain standardized t-statistic images into DTI space using FSLs flirt by first registering the original functional images to the DTI image and then applying this transformation matrix to the t-statistic images (Jenkinson and Jenkinson et al., 2002). Next, the standardized t-statistic images were masked with the same Shen et al. region masks into 268 anatomical parcels that were the same size as the DTI connectivity images. Overall group fMRI analysis We also performed an analysis of the overall group activation patterns for all 30 participants using the first-level analyses for each participant. Group analyses were performed using FSL's FLAME 1 (FMRIB's Local Analysis of Mixed Effects). Results from the whole-brain analyses were thresholded by Z > 3.1 and a corrected cluster significance threshold of p < 0.05 (Worsley, 2001). Modeling methods Our modeling approach was comparable to the approach used by Osher et al. (2016) and was implemented using in-house Python code. Participants were divided into two groups whereby modeling for Group 1 (N ¼ 15) was validated using leave-one-out cross-validation (LOOCV) and modeling for Group 2 (N ¼ 15) was performed by applying the final model from all of the participants in Group 1 to each of the participants in Group 2 to evaluate how well the model can generalize to new data. Each participant's anatomical brain was divided into 268 cortical regions using the Shen et al. 
(2013) atlas in their native space, allowing for individual anatomical variations during modeling.

Group 1

For Group 1, to predict function from connectivity, we employed the LOOCV approach, whereby the connectivity and functional data of a single participant were excluded and a model was trained on the remaining participants before being applied to the left-out subject. We repeated the LOOCV for all participants to create independent predictions for each subject. The modeling proceeded as follows (see Fig. 1): each of the 268 regions from the Shen parcellation for each subject was used as a seed parcel, whereby every voxel of the seed parcel had a functional response to the fMRI contrast (a 1 × N vector, where N is the number of voxels in the seed region), as well as DTI connectivity to 267 target parcels (an N × 267 matrix, where rows are voxels and columns are connectivity to each of the target regions). Neural responses and DTI connectivity were concatenated (i.e., combined into one matrix, where rows represent voxels across all participants and columns represent connectivity of each voxel to the target regions) for all but the left-out participant. Next, the relationship between the fMRI response of a voxel and its DTI connectivity was modeled using a linear regression implemented with the StatsModels linear regression library for Python (https://www.statsmodels.org/). This resulted in a 1 × 267 vector of coefficients reflecting the relevance of the DTI connectivity from the seed parcel to each of the target parcels for predicting the fMRI response in the seed parcel. We then applied these coefficients to the N × 267 DTI connectivity matrix of the left-out subject, resulting in a predicted fMRI value for each voxel of the left-out participant's seed parcel. A minimal sketch of this regression step is given after the figure caption below.

Fig. 1. Overview of the modeling procedure. Each participant's brain was first divided into 268 regions from the Shen et al. (2013) atlas, as shown by the colored brain in c). Each region was then modeled separately using the following procedure: a) Voxel-wise DTI connectivity from the modeled region to the remaining 267 regions was concatenated for all but one participant (i.e., the left-out participant). b) FMRI t-statistic values corresponding to each of the voxels in a) were concatenated for all but the left-out participant. c) A linear regression (represented by the operator symbol in the figure) models the relationship between the DTI connectivity in a) and the fMRI activity in b). This results in a vector of coefficients (depicted as a greyscale vector) of length 267 (i.e., the number of columns in a), one per target region) reflecting the contribution of each target region to predicting the fMRI response. d) The left-out participant also has a DTI connectivity matrix with 267 columns, and e) the function f(x) from c) is applied to each voxel in the left-out participant's connectivity matrix, resulting in f) a vector of predicted fMRI activation for each voxel. Predicted responses are then compared with the actual fMRI responses for each voxel. This procedure is then repeated for each of the other 267 seed regions in c) for each participant, with every participant in Group 1 left out iteratively in order to generate independent predictions for each participant. To predict fMRI activation for Group 2, a final model, f(x), is generated from all of Group 1's voxel-wise connectivity and fMRI data, which is then applied to each participant in Group 2. This entire procedure is repeated for each contrast (i.e., lexical (EW), sublexical (PH), lexical vs. sublexical (EW > PH/PH > EW)).
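The following is a minimal, hedged Python sketch of the voxel-wise regression step described above, using the StatsModels library named in the text. Array shapes follow the description in this section, but the function names, the small standardization helper, and the decision to fit without an intercept (so that exactly 267 coefficients are returned) are our assumptions, not the authors' in-house code.

import statsmodels.api as sm

def standardize(tstats):
    # Whole-brain standardization of the t-statistic values: subtract the
    # whole-brain mean and divide by the whole-brain standard deviation.
    return (tstats - tstats.mean()) / tstats.std()

def fit_seed_model(conn_train, fmri_train):
    # conn_train : (n_train_voxels, 267) DTI connectivity concatenated across
    #              the training participants for one seed parcel
    # fmri_train : (n_train_voxels,) standardized t-statistic per voxel
    # No intercept is included, so the fit returns one coefficient per
    # target region (a 1 x 267 vector), matching the description above.
    return sm.OLS(fmri_train, conn_train).fit()

def predict_left_out(fitted, conn_test):
    # conn_test : (n_voxels, 267) connectivity of the left-out participant's
    #             seed parcel; returns one predicted fMRI value per voxel.
    return fitted.predict(conn_test)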
This procedure was repeated for each of the 268 seed regions of the Shen et al. (2013) atlas and concatenated in order to produce predictions for the entire brain of the left-out participant. We then compared the activation predicted by the model to the participant's actual fMRI activation images for each contrast and calculated the absolute error (AE; i.e., the absolute value of actual minus predicted activation) for each voxel. Finally, we created a model using all fifteen participants from Group 1's DTI connectivity and fMRI data that we then applied to the other 15 independent participants in Group 2. Group 2 The overall model coefficients from all fifteen participants in Group 1 were then applied to an independent group of subjects' (N ¼ 15) individual DTI connectivity data to produce predicted fMRI maps in a similar way to Group 1. We then calculated prediction accuracies by examining the AEs between actual and predicted activation (in the same way as Group 1). Model validation In order to assess the validity of our generated models, we compared the performance of our connectivity models to group activation models both across the cortex as well as in regions of interest (ROIs). The SciPy (http://www.scipy.org) Stats module was used to compare mean AEs (MAEs; i.e., the average of all AEs across the brain) in MNI space between the connectivity and group activation models, as discussed below. Comparison to group activation models Group activation models were also created using LOOCV using a similar technique to Osher et al. (2016). First, each participant's functional data was transformed to their high-resolution anatomical using FSL's flirt. Next, we performed a nonlinear registration of each participant's anatomical image to MNI space using the symmetric diffeomorphic normalization method for non-linear transformation from Advanced Normalization Tools (ANTs; Avants et al., 2008Avants et al., , 2011. This transformation was then applied to the functional data in anatomical space. All participant's (excluding the left-out participant) fMRI images in standardized space were then superimposed to create composite maps (i.e., the predicted activation for the left-out participant was the average activation from all other participants in the group). We then used this group averaged fMRI image as the input to FSLs FEAT using the same contrasts as those used on the individual participant data. This resulted in t-statistic images for each of the contrasts (i.e., EW, PH, EW > PH, PH > EW) that underwent the same standardization as the t-statistic images used in the DTI connectivity model (i.e., mean functional value of the whole brain was subtracted from the functional response at each voxel and divided by the whole-brain standard deviation). This predicted image was then transformed back into the left-out participant's native space and AEs were calculated. This was repeated for each of the participants in Group 1 to create 15 independent predictive models based on group activation. To calculate MAEs for each participant, whole-brain AE images were then transformed back into MNI using the reverse transforms of those previously applied from FSL's flirt and ANTs. For Group 2, the group activation model was created from the average activation for all of the subjects in Group 1, the resulting model was transformed to each participant's native space, and AEs and MAEs were calculated in a similar way as for Group 1. 
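As a small, hedged illustration of the error measures used for validation, the sketch below computes the mean absolute error (MAE) over cortical voxels for one participant and indicates how the paired comparison between the connectivity-based and group-average models could be run with SciPy; the variable names are hypothetical.

import numpy as np
from scipy.stats import ttest_rel

def mae(actual, predicted):
    # Mean absolute error across all cortical grey-matter voxels for one
    # participant and one contrast (both inputs are 1-D voxel vectors).
    return np.abs(actual - predicted).mean()

# One MAE per Group 1 participant for each predictive model:
# mae_conn  = np.array([...])   # connectivity-based model (15 values)
# mae_group = np.array([...])   # group-average fMRI model (15 values)
# t, p = ttest_rel(mae_conn, mae_group)   # paired-samples comparison, df = 14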
Regions of interest

Each participant's whole-brain AE images were transformed to the MNI 152 T1 2-mm template using FSL's flirt and the symmetric diffeomorphic normalization method for non-linear transformation from ANTs to ensure all ROIs were comparable across participants. ROIs were derived from parcels in the Shen et al. (2013) 268-region parcellation in the participant's native space and corresponded to the left anatomical VWFA (four regions: Shen 198, 199, 200, 201), IFG (two regions: Shen 151, 156), and temporoparietal regions (two regions: Shen 182, 183). Further, based on the proposed integral role of spatial attention in reading and the importance of right hemisphere white matter tracts in reading, we also examined the ability of our model to predict activation in primary spatial attentional regions. Specifically, in the ventral stream we examined the right IFG (two regions: Shen 16, 19) and TPJ (two regions: Shen 47, 48), and in the dorsal stream the right FEF (Shen 32) and SPL/IPS (Shen 43), resulting in a total of 14 ROIs. AEs for each region were calculated in a similar way to the whole-brain connectivity versus group average fMRI comparison. A voxel-wise paired-samples t-test was performed per participant across all grey-matter voxels in each ROI. A Bonferroni-corrected threshold of p < 0.05/(total number of subjects in both groups times the number of ROIs) = 0.05/(30 × 14) = 1.19 × 10⁻⁴ was used to determine the number of participants whose activation pattern was better predicted by the connectivity model than by the group average activation model. We also calculated MAEs over all of the voxels in each ROI (i.e., the average activation in each ROI) for each participant and performed a paired-samples t-test for each ROI.

Permutation tests

In order to ensure that specific voxel-wise correspondence between activation and connectivity is what is important for accurate prediction, and not other factors such as the number of parameters of the model, we also performed permutation testing for each of our ROIs from the final model from Group 1. The procedure for permutation testing was identical to our modeling methods, with the following exception: prior to modeling, the fMRI activation for each seed voxel was shuffled randomly across participants, without altering the DTI connectivity matrix, the linear model was fit, and this procedure was repeated 5000 times for each ROI. We then created a distribution of R² scores (i.e., the proportion of variance in fMRI activity accounted for by connectivity) from the permutations and evaluated the real DTI model R² scores against this distribution. A sketch of this shuffling step is given after the next paragraph.

Functionally relevant DTI networks for reading

In order to uncover the DTI structural connectivity networks associated with each reading contrast, we examined the coefficients from each of the 268 regression models derived from the participants in Group 1. For each anatomical parcel, we calculated the DTI connections that were significant predictors of functional activation for each contrast. Specifically, we assessed significance of the coefficients at a conservative, Bonferroni-corrected p-value for multiple comparisons of all beta coefficients across the whole brain of p_crit = 7.00 × 10⁻⁷ (268 regions × 267 coefficients per regression model = 71,556 beta weights; p_crit = 0.05/71,556).
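Returning to the permutation test described above, the following hedged Python sketch shows one way to build the null distribution of R² values for a single ROI by shuffling the fMRI values while leaving the connectivity matrix untouched. The exact unit of shuffling (whole-participant blocks versus individual voxels) is an interpretation on our part, as are the function and variable names.

import numpy as np
import statsmodels.api as sm

def permutation_null_r2(conn_all, fmri_all, n_perm=5000, seed=0):
    # conn_all : (n_voxels_total, 267) DTI connectivity for one ROI,
    #            concatenated over all Group 1 participants
    # fmri_all : (n_voxels_total,) matching standardized t-statistics
    # Each permutation shuffles the fMRI values (connectivity untouched),
    # destroying the voxel-wise correspondence, then refits the same
    # linear model and records its R^2.
    rng = np.random.default_rng(seed)
    null_r2 = np.empty(n_perm)
    for i in range(n_perm):
        null_r2[i] = sm.OLS(rng.permutation(fmri_all), conn_all).fit().rsquared
    return null_r2

# p-value: fraction of permuted R^2 values that reach the observed R^2
# p = (permutation_null_r2(conn_all, fmri_all) >= observed_r2).mean()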
For the EW and PH contrasts, we examined only the coefficients with positive beta values (i.e., increased connectivity associated with increased activation), however due to the bidirectional nature of our EW > PH contrast, we examined both positive (i.e., increased connectivity associated with increased activation for EW > PH) and negative (i.e., increased connectivity associated with increased activation PH > EW) beta coefficients in separate models. For visualization purposes, we then binarized the significant coefficients and represented the significant connections as bidirectional in a 268  268 square matrix for each contrast. We did this both across the entire brain (i.e., 268 regions to 268 regions) as well as for the ROIs. Following this, we performed a k-core analysis on both the whole brain and ROI binarized coefficient matrices using the k_core function from NetworkX python toolbox (Hagberg, Schult and Swart, 2008). K-core decomposition is a graph theory technique that can be used to find the underlying 'backbone' of a network by recursively removing nodes until all remaining nodes have a degree (i.e., number of connections) of at least k (see Hagmann et al., 2008 for more information about this technique), whereby we identified the largest k-cores for each network. Overall fMRI activation patterns across all participants Results from the whole-brain analysis for each contrast are shown in Fig. 2 and cluster statistics are in Table 1. EWs and PHs activated an extensive network of brain regions that spanned attention and reading areas in both the left and right hemispheres. For the EW versus PH contrast, there were no regions that showed greater activation for EWs than PHs, however PHs showed significantly greater activation than EWs several key areas, most notably the bilateral IPS/SPL, left fusiform gyrus, left IFG. Predicted neural responses from DTI modeling across all grey-matter voxels Representative ROI tractography for two participants can be found on OSF (https://osf.io/3bq52/, 'Representative Participant Probabilistic Tractography' folder). Fig. 3 shows the fMRI activation of the most functionally specific voxels (i.e., top 5 percent) for each contrast of both the predicted (from the DTI and group average fMRI models) and actual results for a single participant, in which the predicted results from the DTI model show a strikingly similar pattern of results to the actual fMRI response (particularly in comparison to the fMRI model predictions). This suggests that individual subject activation patterns can be predicted from their DTI connectivity patterns. This is supported by our measures of prediction accuracy, reported below. Lexical (EW) reading EW reading typically elicits activation in a ventral occipito-temporal circuit of brain regions that includes the anatomical VWFA in the left fusiform gyrus, lateral extrastriate regions, and inferior temporal gyrus, as well as language and phonological representation areas including the left IFG and TPJ (Taylor et al., 2013;Cummine et al., 2012Cummine et al., , 2015Borowsky et al., 2006Borowsky et al., , 2007. Fig. 3 shows concordance between the predicted and actual responses, particularly in the left fusiform gyrus. Fig. 4a) shows significant correlations between predicted and actual activation for the overall model from Group 1 for all brain regions (see Appendix A of the supplementary material for predicted versus actual scatterplots for each contrast, ROI, and participant). Comparison to group average activation. 
Group 1: Averaging across all grey matter voxels for all participants, the model created from DTI connectivity showed significantly lower MAEs than the model created from group fMRI activation, t(14) = −5.01, p = 1.90 × 10⁻⁴ (see Fig. 5 for means and standard deviations for the EW contrast).

Sublexical (PH) reading

Phonetic decoding has been shown to rely more strongly on dorsal stream regions including the left inferior frontal gyrus (IFG, i.e., Broca's area) as well as the posterior superior temporal gyrus, angular gyrus, and supramarginal gyrus (Taylor et al., 2013; Cummine et al., 2012, 2015; Borowsky et al., 2006, 2007). Fig. 3 shows highly similar activation in ventral stream areas including the left fusiform gyrus, and dorsal stream areas including the angular and supramarginal gyri and posterior parietal regions, for the actual versus predicted results of an example participant. Fig. 4b) shows significant correlations between predicted and actual activation for the overall model from Group 1 across all brain regions.

Comparison to group average activation. Group 1: Similar to the EW task, averaging across all grey matter voxels, the model created from DTI connectivity showed significantly lower MAEs than the model created from group fMRI activation, t(14) = −4.24, p = 8.19 × 10⁻⁴ (see Fig. 5 for means and standard deviations).

Lexical (EW) > sublexical (PH)

Contrasts between the EW and PH conditions were also assessed in order to determine whether DTI connectivity patterns are able to capture individual differences in lexical versus sublexical processing. Typically, lexical reading has been shown to elicit greater activation than sublexical reading in ventral stream areas including the parahippocampal and fusiform gyri and middle temporal gyrus (MTG), as well as the posterior cingulum and precuneus, angular gyrus, gyrus rectus, and medial orbitofrontal cortex (Taylor et al., 2013; see also Carreiras et al., 2014; Price, 2012; see Fig. 3). Fig. 4c) shows significant correlations between predicted and actual activation for the overall model from Group 1 across all brain regions.

Comparison to group average activation. Group 1: The model created from DTI connectivity showed significantly lower MAEs than the model created from group fMRI activation, t(14) = −10.56, p = 4.72 × 10⁻⁸ (see Fig. 5 for means and standard deviations).

Sublexical (PH) > lexical (EW)

Pseudowords have been shown to elicit greater activation than real words in the posterior fusiform gyrus, occipitotemporal cortex, precentral gyrus, left IFG (i.e., Broca's area), supplementary motor area, superior temporal pole, left insula, left parietal cortex, and right inferior parietal cortex (e.g., Taylor et al., 2013). Further, PHs elicit greater activation than pseudowords in the left inferior/superior frontal and middle temporal gyri, left insula, and left SPL (Braun et al., 2015), as well as in the angular and supramarginal gyri and the IPL (Borowsky et al., 2006). Fig. 3 shows the voxels with t-statistics in the top 5 percent for the PH > EW contrast across the brain. Because this contrast is the reverse of the EW > PH contrast, the MAE comparison statistics are the same for both groups as for the EW > PH contrast.
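Before turning to the ROI results, here is a minimal, hedged sketch of the k-core decomposition step described in the Methods, using the NetworkX functions named there; the matrix and variable names are hypothetical.

import numpy as np
import networkx as nx

def largest_k_core(binary_coefs):
    # binary_coefs : (268, 268) symmetric 0/1 matrix of binarized significant
    # beta coefficients for one contrast, treated as an undirected graph.
    G = nx.from_numpy_array(np.asarray(binary_coefs))
    G.remove_edges_from(nx.selfloop_edges(G))  # k_core requires no self-loops
    core = nx.k_core(G)                        # maximal core when k is omitted
    k = max(nx.core_number(G).values())        # the k of that maximal core
    return core, k

# core, k = largest_k_core(ew_binary_matrix)
# core.number_of_nodes(), k   # e.g., "122 nodes with at least 22 connections"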
Connectivity-based predictions of neural responses within regions of interest

Results from our ROI analysis in the most functionally specific regions showed that, when examining MAEs using a paired-samples t-test across all participants, the connectivity-based model outperformed the group average model for the majority of ROIs for each contrast (Table 2). Interestingly, the DTI model was most accurate at predicting functional activation for the lexical vs. sublexical (i.e., EW > PH/PH > EW) contrast, with all but four ROIs showing significantly better prediction accuracies (at a Bonferroni-corrected threshold of p < 3.57 × 10⁻³) for the DTI model than for the group fMRI average model. Further, when examining voxel-wise comparisons for each participant for each ROI, we found that the DTI connectivity models in general outperformed the group average models for the majority of participants for ROIs in both word reading (see Fig. 7) and attention areas (see Fig. 8), with the EW vs. PH contrast showing the greatest number of participants better predicted by the DTI connectivity model than by the group fMRI activation model.

Table 1 note. R = right, L = left, Z = maximum Z score of the voxel in the cluster. Coordinates are in MNI space.

Fig. 3. Representative actual versus predicted (from the DTI connectivity and fMRI models) activation for a single participant. Activation shows the most functionally selective voxels for each contrast (i.e., the top 5% of activation). The top two rows of brains for each contrast are the left hemisphere, the bottom two the right hemisphere.

Fig. 6. Mean prediction errors and comparison to the group average benchmark model as a function of predictive model (i.e., connectivity versus group fMRI average) for Group 2. Prediction error represents the mean absolute errors across all cortical voxels; error bars represent the standard deviation. Predictions from the connectivity models were significantly more accurate than predictions from the group-fMRI models in all conditions.

Permutation tests

Histograms of the permuted and DTI model R² values for each ROI from the final model of all Group 1 participants can be found in Appendix B of the supplementary materials and on OSF (https://osf.io/3bq52/, 'ROI Permutation Histograms' folder). To summarize, results from permutation tests showed that all ROIs were significant at p < 2.00 × 10⁻⁴, with the DTI model R² value exceeding all of the permuted R² values for all contrasts.

Functionally relevant DTI networks for reading

The matrices of significant beta coefficients, binarized coefficients, and k-cores for each contrast can be found on OSF (https://osf.io/3bq52/, 'Connectivity Matrices' folder), and the binarized coefficient and k-core matrices can be visualized online using the BioImage Suite Connectivity Viewer (https://bioimagesuiteweb.github.io/webapp/connviewer.html#). This allows readers to explore these extensive networks and examine the network architecture in a dynamic way. The k-core network for the whole-brain EW contrast identified 122 nodes with at least 22 connections; there were 116 nodes with at least 23 connections for the PH contrast, 87 nodes with at least 18 connections for the positive beta coefficients in the EW > PH contrast, and 96 nodes with at least 18 connections for the negative beta coefficients in the EW > PH contrast. The ROI connectivity profiles (i.e., patterns of connectivity using the ROIs as nodes) for each contrast are shown in Fig. 9.
Of note, both networks show extensive connectivity between left hemisphere language regions (particularly the anatomical VWFA and the left IFG) and right hemisphere attentional regions in both the parietal and frontal lobes. Further, connectivity for the PH contrast appears to rely more strongly on connectivity between dorsal stream regions than the EW contrast, which may suggest increased attentional involvement during sublexical reading. Results from our k-core analysis of ROI connectivity are shown in Fig. 10. This analysis identified a core for the significant positive beta coefficients for the EW contrast of 14 nodes with at least 4 connections. Of note, the right IFG (ROIs 16 and 19) showed connectivity to the right TPJ and putamen, and the left IFG and anatomical VWFA; the right SPL/IPS (ROI 43) showed connectivity to the right TPJ and the left IFG and anatomical VWFA; the right TPJ (ROIs 47 and 48) showed connectivity to the right and left IFG and the right SPL/IPS and precentral gyrus; the left IFG (ROIs 151 and 156) showed connectivity to the right IFG, SPL/IPS, TPJ, and putamen; and the anatomical VWFA (ROI 199) showed connectivity to the right IFG, precentral gyrus, SPL/IPS, and putamen. The k-core for the ROI PH contrast had a core of 26 nodes with at least 4 connections. Notably, the right IFG (ROIs 16 and 19) showed connectivity to the right SPL/IPS, TPJ, and putamen, and the left IFG and anatomical VWFA; the right SPL/IPS (ROI 43) showed connectivity to the right IFG, TPJ, and precentral gyrus, and the left IFG; the right TPJ (ROIs 47 and 48) showed connectivity to the right and left IFG, the right SPL/IPS and precentral gyrus, and the left anatomical VWFA; the left IFG (ROIs 151 and 156) showed connectivity to the right IFG, precentral gyrus, SPL/IPS, TPJ, and putamen; and the anatomical VWFA (ROI 199) showed connectivity to the right IFG, precentral gyrus, and TPJ. The k-core for the ROI EW > PH contrast of positive beta weights had a core of 14 nodes with at least 4 connections. Notably, the right IFG (ROIs 16 and 19) showed connectivity to the right TPJ, left IFG, and left anatomical VWFA; the right TPJ (ROIs 47 and 48) showed connectivity to the bilateral IFG; the left IFG (ROIs 151 and 156) showed connectivity to the right IFG and TPJ; and the left anatomical VWFA showed connectivity to the right IFG. The k-core for the ROI EW > PH contrast of negative beta weights (i.e., PH > EW) had a core of 11 nodes with at least 4 connections. The right IFG (ROIs 16 and 19) showed connectivity to the right precentral gyrus and TPJ, and the left anatomical VWFA; the right TPJ (ROIs 47 and 48) showed connectivity to the right precentral gyrus and IFG; the left IFG (ROIs 151 and 156) showed connectivity to the right precentral gyrus; and the left anatomical VWFA showed connectivity to the right precentral gyrus and IFG.

Table note. SD = standard deviation. *Significant at p < 0.05. **Significant at a Bonferroni-corrected threshold of p < 0.05/(number of ROIs) = 0.05/14 = 3.57 × 10⁻³.

Predicted neural responses from DTI modeling across all grey-matter voxels

Together, our findings suggest that anatomical connectivity predicts fMRI activation during the cognitive process of reading. When examining MAEs across all cortical voxels, we found that predictions from the DTI connectivity model were significantly more accurate than predictions from the group fMRI activation model across all contrasts.
This suggests that voxel-wise fMRI activation of an individual across the cortex can be predicted using only their structural connectivity. This corroborates the findings of Osher et al. (2016) suggesting that structural connectivity fingerprints, in part, dictate functional activation, and extends them into the domain of skilled word reading. These results were found not only for LOOCV participants in Group 1, but also for an independent group of subjects (Group 2). Results from each of our contrasts show that models created from DTI connectivity outperformed those created from group-average fMRI activation across the entire cortex and thus were better able to predict voxel-wise fMRI activity during both lexical and sublexical reading. A particularly exciting finding was that our connectivity model was also sensitive to differences between lexical and sublexical processing, suggesting that this technique is sensitive to detecting the differential structural networks that underlie lexical and sublexical processing. Connectivity-based predictions of neural responses within ROIs We also examined the performance of our connectivity-based model at predicting neural responses in ROIs. First, we examined ROIs in brain areas known to be involved in reading and language, which included the left fusiform gyrus (i.e., VWFA), IFG, and TPJ and showed that, in general, our model was able to accurately predict fMRI activation in these regions to a similar degree, or better, than a group average fMRI model. As the integral role of visuospatial attention in word reading has recently been stressed in the research literature (e.g., Ekstrand et al., 2019aEkstrand et al., , 2019bFranceschini et al., 2012Franceschini et al., , 2017, we also chose to examine whether connectivity models could better predict reading task-based fMRI activation than group activation models in ROIs associated with spatial attentional processing in the right dorsal (SPL/IPS and FEF) and ventral (TPJ and IFG) spatial attention streams. Our results showed that reading related activation could be accurately predicted in the majority of these spatial attentional ROIs, thus highlighting the importance of spatial attention in reading, as activation in these regions can be accurately predicted from models of lexical and sublexical reading. Together, these results show that structural connectivity fingerprints to ROIs associated with reading and language play an important role in dictating subsequent fMRI activation during lexical and sublexical reading. Results from permutation testing showed that estimates of fit (i.e., R 2 ) from the DTI model were robust, exceeding those found when the fMRI voxelwise data was randomized. This is an important finding, as one critique of ours and Osher et al.'s (2016) method as a whole is that the connectivity model has a much larger number of parameters compared to the fMRI model, and thus this discrepancy may be contributing to increased flexibility of the DTI model to fit the data in comparison to the fMRI model. Although shuffling the voxel-wise fMRI activation may potentially destroy hidden structure of the data, it still provides valuable insight into the importance of voxel-wise correspondence between activation and connectivity. 
Specifically, if participants have similar activation patterns in the ROIs that is driving model performance with this high number of parameters (and not individual differences), we would not expect the DTI connectivity model to outperform the permuted models (because fMRI activation would be similar across participants). However, this does not appear to be the case, suggesting that specific voxel-wise correspondence between activation and connectivity is driving the performance of our model, particularly in these small, localized ROIs. Structural connectivity networks of skilled reading In order to identify which connections are important for predicting fMRI activation from DTI connectivity, we examined the significant betaweights from the connectivity model. Results from the whole-brain analysis uncovered a complex network of DTI connectivity that underlies skilled reading that spans both hemispheres of the brain, including the anatomical VWFA, speech production regions, and voluntary and reflexive attentional orienting areas. When examining patterns of connectivity associated with our ROIs, several interesting patterns emerged. First, PH reading appears to recruit more dorsal stream regions than EW reading, whereby the connectivity networks for the PH contrast showed greater interconnectivity between the right frontal and parietal cortices than the EW contrast. Second, both types of reading appear to rely on connectivity between the anatomical VWFA and right hemisphere attentional regions, suggesting that direct connectivity to the attentional network is an important facet of both lexical and sublexical reading. Third, when examining the core of the ROI network for each contrast, we found that the right putamen is an important part of lexical and sublexical reading. Connectivity from this region to speech production and spatial attentional areas was found to be a core feature of the EW and PH contrasts, supporting findings that purport that the putamen plays an important role in phonological output (e.g., Gould et al., 2017;Gould et al., 2018;Oberhuber et al., 2013;Seghier and Price, 2010). Implications for understanding networks of skilled reading These results extend the work of Saygin et al. (2012) that examined face processing in the fusiform gyrus, and Osher et al. (2016) that examined face, body, scene, and object processing across the cortex, into the processing domain of skilled reading. They are also in concordance with the findings of Saygin et al. (2016), which showed that connectivity to the VWFA (even prior to reading development) can predict subsequent fMRI activation in that area. We extend these findings not only to the entire cortex (i.e., by modeling 268 different brain regions that span cortical grey matter), but importantly, to skilled, adult readers. In addition, the atlas used in our experiment (i.e., the Shen et al., 2013 268 node parcellation) provides a higher resolution parcellation than the Destrieux 148 node atlas (Destrieux et al., 2010) used by Osher et al. (2016), thus providing a more fine-grained examination of structure and function. Results from our ROI analysis of the left fusiform gyrus show that distinct connectivity patterns exist in adulthood that account for a significant amount of variance in reading fMRI activation in these regions. As this region is critically important for reading, this suggests that distinct structural connectivity patterns to this region underlies word processing. 
Further, we also show that other regions integral to language and reading (i.e., the left IFG and TPJ) have specific connectivity profiles that allow for accurate prediction from DTI connectivity. A particularly exciting finding from this study is the ability of our model to predict relatively subtle differences in lexical and sublexical processing (i.e., the EW vs. PH contrast). Although there is some specialization for lexical and sublexical reading processes, there is also a large amount of overlap in the reading and language networks (for example, in phonological representation areas, as well as visual and semantic processing regions), particularly between real words and PHs. Comparatively, Osher et al. (2016) examined four different object categories that each have distinct specialization across the cortex and, in particular, the fusiform gyrus (i.e., faces, objects, scenes, and bodies). Further, processing these different object categories occurs naturally and does not require explicit instruction. Thus, the presence of unique structural connectivity patterns underlying these different types of object processing may be more intuitive from an evolutionary perspective. In contrast, the distinction between lexical and sublexical reading is much more subtle, yet our connectivity model was able to identify different structural connectivity patterns that underlie this distinction. Thus, there appears to be unique underlying architecture that subserves lexical versus sublexical processing. Based on our connectivity analyses, the core of this network appears to rely on connectivity between the left anatomical VWFA and right hemisphere attentional regions (particularly the rTPJ and rIFG), as well as between attentional regions and phonological output areas (i.e., the left IFG). It is possible that this lexical versus sublexical system may be based on the structural development of other cognitive networks (consistent with Dehaene and Cohen's (2007) cortical recycling hypothesis, whereby new cognitive processes overtake evolutionarily older brain circuits), including those for spatial attention. Our findings support the idea that reading is reliant on adequate development of underlying structural connectivity to the regions that make up the language and reading networks. This conclusion is supported by the work of Vanderauwera et al. (2018) and Wang et al. (2016), who found that pre-reading tract integrity is an important predictor of subsequent reading outcomes, as well as Saygin et al. (2013), who found that white matter tract volume in key language pathways played an important role in reading development. Further, our model provides exciting insights into the nature of reading impairments by uncovering patterns in structural connectivity associated with skilled reading in adult readers that can serve as biomarkers for identifying reading deficits. Thus, our model may have the potential to help identify those at risk for reading impairments based on their early structural connectivity fingerprints (similar to Saygin et al., 2012), which may have implications for targeting remediation strategies (e.g., through spatial attentional training; see Franceschini et al., 2015, 2017).

Fig. 10. Core of the ROI connectivity network as determined by k-core decomposition. Lines represent the binarized significant beta coefficients from the DTI model.
Our connectivity model was also able to successfully predict activation in known attention areas in the dorsal and ventral stream, lending support to the idea that spatial attention is an integral component of reading. This is in concordance with the findings of Ekstrand et al. (2019aEkstrand et al. ( , 2019b showing that spatial attention is differentially associated with reading strategy (i.e., lexical versus sublexical). Our results provide evidence that the underlying structural network architecture to attention related regions predicts the involvement of attentional orienting regions (as indexed by fMRI activation in reading tasks) during lexical and sublexical reading. This supports research that has found that white matter connectivity in the right hemisphere plays an important role in reading (Catani and Mesulam, 2008;Horowitz-Kraus et al., 2014), as well as research using spatial attentional training as an effective reading intervention (Franceschini et al., 2015(Franceschini et al., , 2017. Further, based on our model coefficients, connectivity to right hemisphere attentional regions in the right frontal and parietal cortices was a core feature of the skilled reading networks, particularly their connectivity to the anatomical VWFA. Thus, future research should continue to examine the role that right hemisphere connectivity and the attentional system play in skilled reading. It is important to note the purpose of this work is not to argue that connectivity models should replace group average fMRI analyses. Rather, we propose that modeling the relationship between connectivity and function provides valuable insight into the structural network architecture associated with specific cognitive activation patterns, and thus is highly complementary to group level functional analyses. We believe that analyses such as this will further advance exciting and novel lines of research focused on inferring task relevant structural network connectivity using network analysis approaches. Further, while absolute errors provide one window into the efficacy of DTI connectivity models for predicting brain function, future research should assess the efficacy of DTI connectivity models for predicting functional activation using other measures. In addition, this research only scratched the surface of the myriad of information that can be gleaned from complex task-related structural networks, and future research should incorporate additional forms of network and graph theoretical techniques to further characterize these networks. Importantly, these connectivity-based models may better account for individual variability in fMRI activation than group models, which has implications for both basic research and clinical application. This technique of modeling DTI connectivity with task-based fMRI activation may help to uncover characteristic structural connectivity associated with specific cognitive functions that accommodates individual differences. For example, recent research has shown that skilled readers (who do not differ in reading ability) can be clustered into separable groups based on their brain's response to written stimuli (Fischer-Baum et al., 2018). Thus, even within neurotypical populations there is individual variability in processing, which is possibly accounted for by consistent differences in structural connectivity networks. Therefore, future research should examine whether differences in DTI connectivity can systematically account for individual differences in fMRI activation. 
This technique could also be used to develop universal models relating brain structure to function using large databases such as the Human Connectome Project (https://www.humanconnectome.org) to characterize consistent patterns of structural connectivity that underlie specific neural responses. Further, it may provide a valuable clinical tool for uncovering language and reading networks in patients for whom functional imaging cannot be performed (e.g., patients who require sedation in the MRI, who are unresponsive/comatose, or are unable to perform the tasks required for functional scanning) from their DTI connectivity. Future research should assess the efficacy of these models for predicting functional brain responses in patient groups, including those who may have irregular or compromised network connectivity. In conclusion, we show that brain activation during both lexical and sublexical reading in skilled readers can be accurately predicted using DTI connectivity. This finding extends to known reading and language areas including the left IFG (i.e., Broca's area), left TPJ, and the anatomical VWFA, as well as important spatial attentional areas including the right TPJ, IFG, IPS/SPL, and FEF. Further, we identified the structural connectivity networks associated with different aspects of skilled reading using connectivity analyses, showing that interconnectivity between left hemisphere language areas and right hemisphere attentional areas underlies both lexical and sublexical reading. This research broadens our understanding of the structural connectivity fingerprint that underlies skilled reading and has important implications for understanding reading impairment. It may also have clinical implications for aiding localization of language and reading function in patients where functional neuroimaging is not possible. Thus, this research shows that there is a relationship between skilled reading and extrinsic brain connectivity, suggesting that functional organization of reading and language can be determined (at least in part) by structural connectivity patterns. We hope this work will serve as an impetus to examining the structural biomarkers of skilled reading to help broaden our understanding of this essential cognitive process.
2020-05-31T20:09:52.734Z
2020-05-30T00:00:00.000
{ "year": 2020, "sha1": "84a52f77220ecf5d324ad7d790832a3431924973", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.neuroimage.2020.117008", "oa_status": "GOLD", "pdf_src": "Elsevier", "pdf_hash": "f800247f6191699365e75341b3b51caea89a8001", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
256277032
pes2o/s2orc
v3-fos-license
In vitro and in vivo antiviral activity of monolaurin against Seneca Valley virus Introduction Surveillance of the Seneca Valley virus (SVV) shows a disproportionately higher incidence on Chinese pig farms. Currently, there are no vaccines or drugs to treat SVV infection effectively and effective treatment options are urgently needed. Methods In this study, we evaluated the antiviral activity of the following medium-chain fatty acids (MCFAs) or triglycerides (MCTs) against SVV: caprylic acid, caprylic monoglyceride, capric monoglyceride, and monolaurin. Results In vitro experiments showed that monolaurin inhibited viral replication by up to 80%, while in vivo studies showed that monolaurin reduced clinical manifestations, viral load, and organ damage in SVV-infected piglets. Monolaurin significantly reduced the release of inflammatory cytokines and promoted the release of interferon-γ, which enhanced the viral clearance activity of this type of MCFA. Discussion Therefore, monolaurin is a potentially effective candidate for the treatment of SVV infection in pigs. Introduction Seneca Valley virus (SVV) belongs to the genus Senecavirus in the family Picornaviridae. Phylogenetic analysis of the whole-genome sequence of Senecavirus A shows that it is closely related to members of the genus Cardiovirus (1). In 2015, Brazilian scientists isolated the complete genome of SVV from vesicular fluid and serum of pigs with vesicular disease and elucidated that SVV infection was associated with idiopathic vesicular disease in pigs (2). Subsequently, many other countries have also reported cases of pigs infected with SVV, where newborn piglets are more vulnerable to SVV infection. The main clinical manifestations of SVV infection in pigs are blisters and ulcers on the hoofs and snout (3,4). Clinical symptoms can be similar to foot-and-mouth disease, swine vesicular disease, and vesicular stomatitis, with a potential impact on the immune system of pigs (5). The virus is shed through the oral cavity, nasal secretions, and feces with a viral shedding duration of ∼28 days after infection (3). Presently, SVV is sporadically and locally prevalent, but its transmission mechanism is not completely clear. Currently, there is no vaccine or specific drug available for the prevention and treatment of SVV infection in pigs. Therefore, the control of SVV in pigs depends on the hygiene measures implemented on pig farms. Replacing antibiotics in animal feed with biologically active substances has become a hot topic in China. Medium-chain fatty acids (MCFAs) are a class of saturated fatty acids containing 6-12 carbon atoms. Even-numbered carbon MCFAs, such as caproic acid (C6), caprylic acid (C8), capric acid (C10), and lauric acid (C12), are found in natural foods, such as coconut oil, palm kernel oil, and milk. MCFAs undergo esterification with glycerol to form triglycerides, . /fvets. . known as medium-chain fatty acid triglycerides (MCTs). In addition to being a source of energy, MCFAs can also improve intestinal morphological structure and growth, prevent infection, regulate immunity, and act as an alternative to antibiotics (6,7). Both MCFAs and MCTs exhibit strong bacteriostatic activity against a variety of pathogens, including gram-positive and gram-negative bacteria, viruses, fungi, algae, and protozoa (8,9). The antimicrobial properties of fatty acids have been reported extensively in the literature (10). 
Previously, studies have confirmed the antiviral activity of free MCFAs such as capric, lauric, myristic, and long-chain unsaturated oleic, linoleic and linolenic acids against vesicular stomatitis virus (VSV), herpes simplex virus (HSV) and visna virus (11). Other studies reported similar antiviral activity of MCFAs, together with their alcohol and monoglyceride derivatives, against HSV 1 and 2 (12). Research also showed that MCFAs (caprylic, capric, and lauric acids) and monolaurin can inhibit ASFV in liquid conditions and reduce Asfivirus (ASFV) infectivity, which may help to prevent disease progression and virus transmission (13). On the one hand, SVV is a small, non-enveloped picornavirus, unknown until 2002 when it was discovered incidentally as a cell culture contaminant, and the family Picornaviridae also contains foot-and-mouth disease virus (FMDV) and swine vesicular disease virus (SVDV). On the other hand, since the vesicular lesions caused by SVV infection are clinically similar from those caused by other vesicular disease viruses, such as FMDV, SVDV, VSV and vesicular exanthema of swine virus (VESV). Thus, we evaluated the antiviral activity of MCFAs or MCTs against SVV. In this study, the anti-SVV activity of selected MCFAs and MCTs was evaluated in vitro, and the most effective compound was selected and tested in vivo. The clinical symptoms, viral load, and proinflammatory cytokines were recorded and analyzed to evaluate the anti-SVV activity of monolaurin. Our results provide a reliable basis for the potential clinical use of monolaurin for the treatment of SVV infection in pigs. Materials and methods Samples and reagents BHK-21 cells and the Seneca virus A strain SVV-SC-MS (complete genome GenBank: MN700930.1) were obtained from the Animal Biotechnology Center (ABTC) at Sichuan Agricultural University School of Veterinary Medicine. Fetal bovine serum, cell culture medium (DMEM), trypsin, and PBS buffer were purchased from Solarbio (Beijing Solarbio Science and Technology Co., Ltd., Beijing, China); DMSO was purchased from Sigma (USA); the CCK8 kit (code: Beyotime. C0038) used in this study was purchased from Beyotime Biotechnology Co., Ltd. Caprylic, caprylic monoglyceride, capric monoglyceride, and monolaurin were prepared by Guangdong Nuacid Biotechnology Co., Ltd. The PrimeScript TM RT reagent Kit (Perfect Real Time), DNA/RNA extraction kit, and TB Green R Premix Ex Taq TM (Tli RNaseH Plus) were purchased from Takara (Dalian) Engineering Co., Ltd. Maximum nontoxic dose BHK-21 cells were cultured in a 96-well plate at 37 • C under 5% CO 2 for 24-36 h until the cells grew into a monolayer. Caprylic, caprylic monoglyceride, capric monoglyceride, and monolaurin were dissolved in DMSO independently to prepare a 10 mg/mL stock solution. A cell maintenance solution of 100 µg/mL was prepared from the stock solution for these four test MCFA, whereafter a total of 11 concentration gradients were prepared from the cell maintenance solution following 2-fold gradient dilution. The concentrations of the cell maintenance solution for these four test MCFA were 100, 50, 25, 12.5, 6.25, 3.125, 1.563, 0.781, 0.391, 0.195, 0.098, and 0.049 µg/mL. Supernatant from all wells of a 96-well-plate with monolayer BHK-21 cells was discarded, and the MCFA sample solution (100 µL/well) was added. Two percent DMEM and one precebt DMSO controls were also set at the same time. The cells were cultured in an incubator for 48 h at 37 • C and 5% CO 2 . 
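As a small, hedged illustration of the dilution series and MNTD rule described in this subsection (together with the >90% viability criterion stated in the next paragraph), the Python sketch below shows one way the calculation could be organized. The CCK-8 viability formula and all names here are assumptions, since the study simply followed the kit instructions.

import numpy as np

def two_fold_series(top_ug_ml=100.0, n=12):
    # Working solution plus successive two-fold dilutions, in µg/mL
    # (100, 50, 25, ..., ~0.049 for n = 12, matching the listed series).
    return top_ug_ml / 2.0 ** np.arange(n)

def cck8_viability(od_sample, od_untreated, od_blank):
    # Conventional CCK-8-style calculation (an assumed formulation):
    # viability of treated wells relative to untreated control wells.
    return (od_sample - od_blank) / (od_untreated - od_blank)

def mntd(concentrations, viabilities, threshold=0.90):
    # Maximum non-toxic dose: the highest tested concentration whose
    # viability exceeds the threshold (>90% in this study).
    conc = np.asarray(concentrations, float)
    viab = np.asarray(viabilities, float)
    passing = conc[viab > threshold]
    return passing.max() if passing.size else None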
A cytotoxicity assay was performed according to the instructions of the CCK8 kit, and the cell viability was calculated. The concentration corresponding to a cell viability >90% was recorded as the maximum non-toxic dose (MNTD), which was used as the working dose for subsequent experiments.

In vitro calculation of viral inhibition rate

A mixture of 1 MNTD and 100 TCID50 of viral suspension was prepared by mixing the virus solution with the MCFA solution. BHK-21 cells were seeded in a 96-well plate and grown to monolayers at 37 °C in a 5% CO2 incubator. The supernatants were discarded, and an equal volume of the viral suspension was added to the wells (100 µL/well) of the experimental group. The 2% DMEM (A) and 1% DMSO (B) controls were also set. The plates were incubated for 1 h at 37 °C with 5% CO2 in a cell incubator. After incubation, the supernatant was discarded, and 100 µL of the sample solution at a concentration of 1 MNTD or 100 µL of the maintenance solution was added to the corresponding wells and then incubated at 37 °C in a 5% CO2 incubator. Cell infection was stopped when complete CPE had developed in the virus-only control wells (~36-48 h post-infection). The supernatants were collected and measured with the CCK8 method, and the virus inhibition rate was calculated (a hedged sketch of one common way to compute this rate is given at the end of this section).

In vivo evaluation of anti-SVV activity

A total of 25 weaned piglets at 21 days old were obtained from a pig farm (Sichuan gistar group) in Sichuan Province, China. All the piglets were negative for SVV antigen and antibody by PCR or ELISA kits (detection methods were established by ABTC). Before the experiment, the animal laboratory was sterilized with formaldehyde and a pasteurizer. All the pigs were cared for according to the original farm procedures to prevent stress and bacterial infection. The piglets were fed a common complete feed. Piglets were first observed for 3 days and then subjected to viral challenge and drug administration. The 25 weaned piglets were divided into 5 groups (n = 5), as shown in Table 1.

Clinical symptoms

The clinical symptoms of piglets in each group were observed daily, and scores were assigned as follows: fever, 2 points; lethargy, 2 points; decreased feed intake, 1 point; anorexia, 2 points; blisters or ulcers, 2 points; and death, 5 points. Additionally, for every piglet in each group, morning feces and 0.5 mL of jugular blood were collected daily for the determination of SVV load. Whenever blisters and crusts appeared or the piglets were on the verge of death, they were immediately sacrificed and necropsied. On day 14, all the remaining piglets were sacrificed, and the lungs, spleens, kidneys, and livers were aseptically collected and fixed in 4% paraformaldehyde.

Quantitative detection of SVV load in stool and blood by RT-qPCR

The feces and blood samples were thoroughly mixed with 3.0 mL of PBS and then centrifuged at 12,000 r/min for 3 min. Supernatants were collected for RNA extraction. Extracted RNA was reverse transcribed to obtain cDNA, which was added to the PCR master mix as detailed in Table 2 and loaded into a fluorescent quantitative PCR machine to detect the SVV viral load under the following thermocycling conditions: initial denaturation at 95 °C for 30 s, followed by 40 cycles of denaturation at 95 °C for 5 s and annealing/elongation at 58 °C for 30 s. At the end of the amplification, melting curve analysis was performed from 65 to 95 °C at 0.5 °C per second.
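The text above states that the virus inhibition rate was calculated from the CCK8 readings but does not give the formula; the sketch below shows one common formulation, offered purely as an assumed illustration rather than the authors' actual calculation.

def inhibition_rate(od_treated, od_virus_ctrl, od_cell_ctrl):
    # Assumed CCK-8-based formulation: the fraction of the virus-induced
    # loss of cell viability that is prevented by the test compound,
    # expressed as a percentage.
    return 100.0 * (od_treated - od_virus_ctrl) / (od_cell_ctrl - od_virus_ctrl)

# Example with made-up OD values: inhibition_rate(1.10, 0.40, 1.25) gives ~82%,
# on the order of the ~80% maximum inhibition reported for monolaurin below.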
Histopathology

Briefly, lung, spleen, liver, and kidney tissues were fixed in 4% paraformaldehyde for 36 h and then embedded in paraffin. According to standard methods, tissue sections (4 µm) were stained with hematoxylin and eosin (H&E) for histopathological examination. Finally, histological lesions were recorded with a light microscope (OLYMPUS, Japan) at 400× magnification.

In vivo detection of inflammatory cytokines

Piglet venous blood was collected at 0 and 3 dpi. The expression of IL-6, IL-8, IL-10, IL-1β, IFN-γ, and TNF-α in the supernatant was detected using ELISA kits according to the manufacturer's instructions (Multisciences (Lianke) Biotech Co., Ltd., Hangzhou, China). The absorbance was measured using a microplate reader at 450 nm. In brief, the samples were added to the wells, the antigen in the samples bound to the capture antibody, the microplate was washed, the detection antibody was added, and the microplate was washed again. After the substrate was added, the microplate reader detected the colored reaction products and calculated optical density (OD) values, which were used to calculate and analyze the amount of antigen in each sample.

Statistical analysis

Statistical results were expressed as means and standard deviations (SD). Significant differences were determined with one-way analysis of variance (ANOVA), followed by Duncan's multiple range test, in SPSS 20.0 (IBM Corp., Armonk, NY, USA). Significance was set at P < 0.05.

Maximum non-toxic concentration determination

After adding different concentrations of the compounds and culturing for 96 h, the OD450 value and cell viability were determined using a CCK-8 kit following the manufacturer's instructions. If the cell viability was >90%, the dose was recorded as the MNTD. BHK-21 cells showed good tolerance to glycerol caprylate and monoglyceryl laurate, with an MNTD of 50 µg/mL. Caprylic and capric monoglycerides showed little toxicity to these cells, with an MNTD of 25 µg/mL (Figure 1).

Monolaurin inhibits virus proliferation

Although all four treatments had an inhibitory effect on SVV, the inhibition rate of monolaurin was the highest, with values up to 80% (Figure 2). The anti-SVV activity of caprylic monoglyceride was better than that of capric monoglyceride. Monolaurin was selected for the subsequent in vivo tests.

Clinical symptoms and scores

After the piglets were challenged with the virus and treated with monolaurin, the development and progression of their clinical symptoms were observed and monitored continuously for 14 days (Figure 3). One piglet from the low-dose group died at 2 dpi; there were no other piglet deaths recorded in this group on subsequent days. One piglet from the model group died at 3 dpi. No piglets died in the middle-dose, high-dose, or control groups. The piglets in the low-dose and model groups showed decreased feed intake and symptoms such as anorexia, lethargy, and fever after the virus challenge. These piglets had blisters and ulcers on their snouts and hooves at 7 dpi. In addition, erosions, ulcerations, and vesicular lesions of the snout, oral mucosa, and distal limbs, especially around the coronary band, as well as more general signs of illness such as fever, lethargy, and anorexia, could be observed in the model group. Hoof sloughing and lameness can also occur. By contrast, the clinical symptoms, mental status, and feed intake of the monolaurin-treated piglets were better than those of piglets in the model group.
The piglets in the middle-dose and high-dose groups had similar clinical symptoms, with lower post-challenge scores than the model group.

Post-challenge effects of monolaurin on SVV load

Piglet viral load peaked at 3 dpi and then continued to decline (Figure 3). Monolaurin reduced the viral load in the feces and blood of piglets infected with SVV in a dose-dependent manner (Figure 4). Treatment with high-dose monolaurin was the most effective at reducing the viral load in the feces and blood of SVV-infected piglets. Compared with the fecal viral load of SVV-infected piglets, the blood viral load decreased more markedly after 3 dpi (Figure 4B).

Histopathological examination

Blisters and ulcers manifested on the snout and hoofs at 7 dpi. Pigs in the model and low-dose groups exhibited the following pathological changes: parts of the lung were atrophied, the alveolar septa were thickened, and the remaining lung tissue showed compensatory emphysema. In the model group, spleens showed diffuse hemorrhage, severe swelling of hepatocytes, partial cell necrosis, glomerular atrophy, and partial shedding of the renal tubular epithelium. There were no significant changes in the spleens, livers, and kidneys of pigs in the low-dose group. The pigs in the middle-dose and high-dose groups had alveoli without obvious lesions.

In vivo detection of inflammatory cytokines

Proinflammatory cytokines were significantly increased by SVV infection, and treatment with monolaurin showed some degree of anti-inflammatory activity (Figure 6). High doses of monolaurin significantly decreased the levels of IL-1β, IL-10, and TNF-α (p < 0.05), to levels that did not differ significantly from those of the corresponding control groups (p > 0.05). Moreover, the rise in IFN-γ became more pronounced as the monolaurin dose increased; high doses of monolaurin significantly increased the amount of IFN-γ in a dose-dependent manner.

Discussion

Monolaurin, a monoglyceride formed from a 12-carbon saturated fatty acid and glycerol, is naturally found in coconut oil, palm oil, and breast milk, and is a safe and highly effective monoglyceride with bacteriostatic activity (14). Monolaurin has broad antibacterial activities, including inhibiting bacterial growth, reducing the production of exotoxins, and limiting the formation of biofilms (15). As a lipid, monolaurin can bind to the phospholipid bilayer of bacteria and disrupt the normal physiological processes of the bacteria, thereby inducing a bacteriostatic effect. Monolaurin has also been reported to have a strong inhibitory effect on the growth and reproduction of gram-positive bacteria such as Staphylococcus aureus, Listeria monocytogenes, Helicobacter pylori, Bacillus, and Campylobacter jejuni, among others (16). Furthermore, monolaurin can block the release of gram-positive bacterial exotoxins (such as enterotoxins and streptococcal pyrogenic exotoxins) (17). Monolaurin can also bind to the lipid bilayer membrane of enveloped viruses and inhibit viral activity by compromising viral integrity and infectivity. Monolaurin has shown a good inhibitory effect on some enveloped viruses, such as HSV, influenza virus, PRRS virus, and porcine epidemic diarrhea virus (18, 19). In this study, we show that monolaurin has a strong inhibitory effect on SVV even though it is a non-enveloped virus. However, the anti-SVV mechanism of monolaurin requires further research.
Even though SVV causes blisters and ulcers on the snout and hooves of pigs, there are only a few reports on the necropsy findings and microscopic pathology of this viral infection. The pathological examinations in this study showed lesions in the lungs, livers, spleens, and kidneys of infected piglets. Preventing SVV from destroying the integrity of the intestinal barrier helps to reduce the damage caused by the virus. Monolaurin has great potential for application in animal health, as it promotes growth and gut health. For example, some studies have found that monolaurin can significantly improve the growth performance of weaned piglets (20, 21). Additionally, monolaurin has been found, in a dose-related manner, to improve body weight, the regulation of the gut microbiota, and systemic inflammation in mice fed a low-fat diet (22). These studies show a significant positive correlation between monolaurin and the increased abundance of probiotics, such as Lactobacillus reuteri and Ruminococcus gnavus (22). A normal intestinal flora is necessary for the integrity of the tight junctions of the intestinal tract. Here, monolaurin significantly improved the health of the intestinal tract, which reduces the chance of viruses invading the intestinal epithelial cells and the bloodstream. Our findings corroborate these previous studies, as we found that monolaurin-treated pigs had significantly reduced viral loads in their blood and feces as well as reduced clinical symptoms associated with SVV infection. During the trial, one piglet from the low-dose group died at 2 dpi and one piglet from the model group died at 3 dpi; no piglets died in the middle-dose, high-dose, or control groups. Many viruses can induce inflammatory responses and even cause an inflammatory factor storm (23). The mechanism is the excessive activation of immune cells through increased intracellular inflammatory factors, including interleukins, TNF-α, and complement protein molecules (24). The storm-like suicide attack induced by pathogenic microorganisms in infected cells can cause bystander damage to other tissues by increasing vascular permeability and causing circulatory disorders, which can even result in multiple organ functional failure (MOF) (25). Usually, inflammation is a protective immune response that is conducive to clearing pathogenic microorganisms. However, uncontrolled excessive inflammation can cause autoimmune damage (26). In this study, we observed that SVV infection induced the release of many inflammatory cytokines, including IL-1β, IL-6, IL-8, IL-10, and TNF-α, triggering an inflammatory cytokine storm. Previous studies have found that monolaurin affects the lipid dynamics of human T cells and regulates T-cell signaling and the release of functional factors. It also inhibits the immune response that is overactivated by the virus, thereby reducing the amount of SVV-induced inflammatory cells (27). Seneca Valley virus infection can reduce the level of IFN-γ in the serum, hence reducing the antiviral activity of the body. On the other hand, monolaurin can increase the level of IFN-γ, possibly explaining one of the antiviral mechanisms of MCFAs. The results of this study support the efficacy of monolaurin against SVV; the data suggest that monolaurin blocks virus proliferation. Data availability statement The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.
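As a small reproducibility aid, the quantitative readouts described in the Methods (CCK-8 viability for the MNTD determination and significance testing of cytokine levels) can be sketched in a few lines. The viability formula, the column values, and the group numbers below are illustrative assumptions rather than data from this study; the paper itself relied on the kit instructions and on SPSS 20.0 (one-way ANOVA followed by Duncan's test), and the Duncan post-hoc step is not reproduced here.

```python
import numpy as np
from scipy import stats

def cck8_viability(od_treated, od_control, od_blank):
    # Assumed CCK-8 formula: viability (%) relative to the untreated control.
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

# Hypothetical OD450 triplicates for one drug concentration; >90% would keep
# the concentration at or below the MNTD.
print(cck8_viability(np.array([1.21, 1.18, 1.25]), 1.30, 0.10).mean())

# One-way ANOVA across groups for one cytokine (hypothetical pg/mL values).
control = [52, 49, 55, 50]
model = [180, 172, 190, 185]
high_dose = [95, 101, 88, 97]
f_stat, p_val = stats.f_oneway(control, model, high_dose)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_val:.4f} (significant if p < 0.05)")
```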
2023-01-27T14:42:46.334Z
2023-01-27T00:00:00.000
{ "year": 2023, "sha1": "30691cbc8359ae5488aa654a5eb95d6a98c9ea89", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "30691cbc8359ae5488aa654a5eb95d6a98c9ea89", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14228884
pes2o/s2orc
v3-fos-license
Using Tau Polarization to Sharpen up the SUGRA Signal at Tevatron The most promising source of SUGRA signal at the Tevatron collider is the pair-production of electroweak gauginos, followed by their leptonic decay. In the parameter range corresponding to dominant leptonic decay of these gauginos, one or more of the leptons are expected to be $\tau$ with $P_\tau \simeq +1$. This polarization can be effectively used to distinguish the signal from the background in the 1-prong hadronic decay channel of $\tau$ by looking at the fractional $\tau$-jet momentum carried by the charged prong. The LEP limit of chargino mass > 90 GeV corresponds to a gluino mass > 300 GeV in the minimal SUGRA model [1], which puts them beyond the discovery reach of the Tevatron collider. Thus the most promising source of SUGRA signal at Tevatron seems to be the pair production of electroweak gauginos, $\widetilde W_1^+ \widetilde W_1^-$ and $\widetilde W_1^\pm \widetilde Z_2$. The leptonic decays of $\widetilde W_1$ and $\widetilde Z_2$ into the LSP ($\widetilde Z_1$) result in clean dilepton and trilepton final states with a significant missing-$E_T$ ($\not\!E_T$) and very little hadronic jet activity. Recently there has been a good deal of interest in these processes as the main signatures of the SUGRA model at Tevatron [2]-[5]. The parameter space of particular interest to this signature is one where the lighter (right-handed) sleptons lie below the $\widetilde W_1$ and $\widetilde Z_2$ masses, resulting in very large leptonic branching fractions of these gauginos. This corresponds to two regions of the SUGRA parameter space, i.e. I) $m_0$ significantly less than $m_{1/2}$ ($m_0 \sim \tfrac{1}{2} m_{1/2}$), implying $m_{\tilde\ell_R,\tilde\tau_R} \lesssim m_{\widetilde W_1,\widetilde Z_2}$ at any value of $\tan\beta$, and II) $m_0 \sim m_{1/2}$ at large $\tan\beta$, implying $m_{\tilde\tau_R} \lesssim m_{\widetilde W_1,\widetilde Z_2}$, where $\ell$ denotes electron and muon. In the 1st case one expects an $\ell^+\ell^-\tau$ signature from $\widetilde Z_2 \widetilde W_1$ decay, since $\widetilde W_1 \to \tau\nu_\tau \widetilde Z_1$ via the larger L-R mixing in the $\tilde\tau$ sector due to the larger $\tau$ mass. In the 2nd case one expects $\tau\tau$ and $\tau\tau\tau$ signatures from $\widetilde W_1 \widetilde W_1$ and $\widetilde Z_2 \widetilde W_1$ decays respectively. The presence of one or more $\tau$ leptons in the final state means that the $\tau$ channel is expected to play a very important role in superparticle searches at Tevatron, particularly in the minimal SUGRA model [2]-[5]. The minimal SUGRA model predicts the polarization of the $\tau$ resulting from the above $\tilde\tau$ decay to be $P_\tau = +1$ to a good approximation, as we shall see below. The purpose of this note is to use this $\tau$ polarization ($P_\tau = +1$) to sharpen the distinction between the SUSY signal and the SM background. It has been shown in the context of charged Higgs boson searches in the $H^\pm \to \tau\nu$ channel that in the 1-prong hadronic $\tau$-jet the $P_\tau = +1$ signal from $H^\pm$ decay can be effectively distinguished from the $P_\tau = -1$ background from $W^\pm$ via the sharing of the jet energy between the charged pion and the accompanying neutrals [6]-[7]. This has now been confirmed by detailed simulation studies for both Tevatron and LHC. We shall use a similar strategy here to distinguish the SUSY signal from the SM background in the 1-prong hadronic $\tau$-jet channels. In particular we shall see that the $P_\tau = +1$ signal can be effectively separated from the $P_\tau = -1$ background, as well as from the fake-$\tau$ background from QCD jets, by requiring the charged track to carry > 80% of the jet energy-momentum. We shall concentrate on the 1-prong hadronic decay channel of $\tau$, which is best suited for $\tau$ identification. It accounts for 80% of hadronic $\tau$ decay and 50% of its total decay width.
The main contributors to the 1-prong hadronic decay are τ ± → π ± ν(12.5%), ρ ± ν(26%), a ± 1 ν(7.5%), where the branching fractions for π and ρ include the small K and K ⋆ contributions respectively [1], which have identical polarization effects. Together they account for 90% of the 1-prong hadronic decay. The CM angular distribution of τ decay into π or a vector meson v (= ρ, a 1 ) is simply given in terms of its polarization as where L, T denote the longitudinal and transverse polarization states of the vector meson. The fraction x of the τ lab. momentum carried by its decay meson is related to the angle θ via where we have neglected the τ mass relative to its lab. momentum (collinear approximation). The only measurable τ momentum is the product xp τ = p τ −jet , i.e. the visible momentum of the τ -jet. It is clear from eqs. (2) -(4) that the hard part of the τ -jet, which is responsible for τ identification, is dominated by π, ρ L , a 1L for the P τ = +1 signal, while it is dominated by ρ T , a 1T for the P τ = −1 background. The two can be distinguished by exploiting the fact that the transverse ρ and a 1 decays favour even sharing of momentum among the decay pions, while the longitudinal ρ and a 1 decays favour uneven sharing, where the charged pion carries either very little or most of the momentum. It is easy to derive this quantitatively for ρ decay. But one has to assume a dynamical model for a 1 decay to get a quantitative result. We shall assume the model of ref. [8], based on conserved axial vector current approximation, which provides a good description to the a 1 → 3π data. A detailed account of the ρ and a 1 decay formalisms including finite width effects can be found in [6], [9]. A simple FORTRAN code for 1-prong hadronic decay of Polarized τ based on these formalisms can be obtained from one of the authors (D.P. Roy). It gives the distribution of the τ momentum among the decay pions in the 1-prong hadronic decay mode into one charged and any numbers of neutral pions, in terms of the π, ρ and a 1 contributions of eq.(1). This is a 2-step process. First it gives the fraction of the τ momentum imparted to the visible τ -jet (i.e. π, ρ L , ρ T , a 1L or a 1T ) via eqs. (2) -(4). Then it determines how this visible τ -jet momentum is shared between the decay pions using the ρ L,T and a 1L,T decay formalism of refs. [6], [9]. As we shall see below the two polarization states predict distinctive distributions in R = p π ± /p τ −jet , i.e. the fraction of the visible τ -jet momentum carried by the charged prong. This can be obtained by combining the charged prong momentum measurement in the tracker with the calorimetric energy deposit of the τ -jet. As specific examples of the two regions of interest in the SUGRA parameter space mentioned above, we have chosen two points representing the cases I and II, and evaluated the corresponding SUSY spectra using the ISAS-UGRA code -version 7.48 [10]. The resultingW 1 ,Z 2 ,Z 1 and the slepton masses are shown in the two rows of Table 1 along with theτ mixing angle, whereτ It may be noted here that the Polarization of the τ resulting from theτ 1 → τZ 1 decay is sinceZ 1 ≃B in the minimal SUGRA model and τ R has twice as large a hypercharge as τ L [11]. For the mixing angles of Table 1, cos θ τ = 0.19(0.53), we get P τ = 0.98(0.85). Hence the τ polarization is ≃ +1 to a good approximation over a wide range of the relevant SUGRA parameters, notably tan β. 
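To make the polarization effect concrete, the sketch below works out the $\pi^\pm\nu$ channel alone, using the standard two-body decay kinematics in the collinear approximation described above, for which the normalized energy-fraction distribution is 1 + P_tau (2x - 1). This is only an illustrative calculation, not the FORTRAN decay code referred to in the text, and it leaves out the rho and a1 channels whose longitudinal/transverse decomposition drives the full R distributions of Figs. 1 and 2; x here is the pion's fraction of the tau momentum rather than of the visible jet momentum.

```python
import numpy as np

# Collinear-limit energy fraction x = E_pi / E_tau for tau -> pi nu.
# For tau polarization P the normalized distribution is
#   dGamma/dx = 1 + P * (2x - 1),  0 <= x <= 1,
# so hard pions are favoured for P = +1 and disfavoured for P = -1.
def dgamma_dx(x, P):
    return 1.0 + P * (2.0 * x - 1.0)

x = np.linspace(0.0, 1.0, 2001)
for P in (+1.0, -1.0):
    w = dgamma_dx(x, P)
    hard = np.trapz(w[x > 0.8], x[x > 0.8]) / np.trapz(w, x)
    print(f"P_tau = {P:+.0f}: fraction of pions with x > 0.8 = {hard:.2f}")
# prints ~0.36 for P = +1 and ~0.04 for P = -1
```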
The E / T is evaluated from the vector sum of the lepton and jet p T after resolution smearing. The main results for the two cases are presented below. I. m 0 ∼ 1 2 m 1/2 (ℓ + ℓ − τ signal): The sparticle spectrum of the top row of Table 1 imply that the dominant decay modes ofZ 2 andW 1 arẽ with branching fractions ≃ 2/3 and 1 respectively. Thus one expects a distinctive ℓ + ℓ − τ signal accompanied by a significant E / T fromW 1Z2 production. Moreover this signal is expected to hold over a wide range of tan β, since the production cross-section as well as the above decay branching fractions are insensitive to this parameter. Note that the right-handed slepton masses of Table 1 are fairly close to theW 1 ,Z 2 masses due to the LEP limit on ml ,τ [1]. Hence the lepton from theZ 2 → ℓl R decay is expected to be relatively soft. We have therefore imposed a modest but realistic p T cut on the softer lepton. The cuts are |η ℓ 1 ,ℓ 2 ,τ −jet | < 2.5, φ ℓ 1 ℓ 2 < 150 • , M ℓ 1 ℓ 2 > 10 GeV and = M Z ±20 GeV. (10) Table 2 summarises the signal and background cross-sections after these cuts, where we have included a τ identification efficiency of 50% along with a 0.5% probability of mistagging a normal hadron jet as τ [12]. The latter is a conservative assumption, since the probability of a normal hadron jet faking a 1-prong τ -jet with p T ∼ 20 GeV has been estimated to be about 0.3% for the CDF experiment in Run-1, going up to 0.8% for the (1+3)-prongs τ -jet [13]. Table 2. The signal and background cross-sections (in fb) in the ℓℓτ channel after the cuts of eq. (10). It includes a 50% efficiency factor for τ identification along with a 0.5% probability of mistagging a normal hadron jet as τ . Signal Background Thanks to the E / T and the dilepton mass and opening angle cuts, the potentially large (Z ⋆ /γ ⋆ )j background is reduced to ∼ 0.1% of the signal. We have estimated this background using a simple analytic formula for the matrix element neglecting the vector coupling of Z to ℓl. The matrix element for (Z ⋆ /γ ⋆ )W has been evaluated using MADGRAPH [14]. Fig. 1 shows the P τ = +1 signal as a function of the fractional τ -jet momentum (R) carried by the charged-prong. For comparison it also shows the corresponding distribution assuming the signal to have P τ = −1. This could be the case e.g. in some alternative SUSY model with a higgsino LSP. The complimentary shape of the two distributions, as discussed earlier, is clearly visible in this figure. The P τ = +1 signal shows the peaks at the two ends from the ρ L , a 1L along with the pion contribution (added to the last bin), while the P τ = −1 distribution shows the central peak due to the ρ T , a 1T along with a reduced pion contribution [6], [9]. The expected luminosity of 2 fb −1 per experiment in Run-2 corresponds to ∼ 54 signal events in the ℓ + ℓ − τ channel for each experiment without any serious SM background. Thus one can use this distribution in this case as a confirmatory test of the minimal SUGRA model. II. m 0 ∼ m 1/2 and large tan β (τ τ signal): The sparticle spectrum of the bottom row of Table 1 imply that in this case the dominant decay modes ofZ 2 andW 1 arẽ Thus one expects a τ τ signal fromW 1Z2 ,W 1W1 andττ production with P τ ≃ 1 each. The 1st process contains a 3rd τ fromZ 2 → ττ 1 , whose polarization depends on the model parameters. 
The contribution from the dominant (W ) component ofZ 2 coupling to the subdominant (τ L ) component ofτ 1 has P τ = −1, while that from the subdominant (B) component ofZ 2 coupling to the dominant (τ R ) component ofτ 1 has P τ = +1. And it is the other way around for the higgsino component ofZ 2 . But in any case the τ resulting from this decay is relatively soft for the reason mentioned above and rarely survives the τ -identification cut of p τ −jet T > 15 GeV. Therefore we shall require the identification of two τ jets with P τ = +1, while there may be occasionally a 3rd τ jet with any polarization (inclusive τ τ channel). We shall neglect the contribution from this 3rd τ to the signal cross-section for simplicity, which means a marginal underestimation of the signal. The raw cross-sections forW 1Z2 ,W 1W1 andττ production processes are 770, 850 and 40 fb respectively. We impose the following cuts: (13) where we have reconstructed the invariant mass of the τ -pair for the signal and background events after resolving the E / T into their respective directions. The reconstructed M τ τ represents the physical invariant mass of the τ -pair for the (Z ⋆ /γ ⋆ )j background; and it plays a very effective role in suppressing this background. Of course it does not represent the physical τ τ invariant mass for the signal and other background processes, which have additional sources of E / T ; and the corresponding cut does not have any significant effect on these contributions. The resulting signal and background cross-sections are listed in Table 3. We see from the 1st row of Table 3 that the W j background, with the jet faking as a τ , is about 5 times larger than the τ τ signal. In view of the importance of this background we have estimated it via the on-shell W j as well as the 3-body production processes q ′q (g) → τ νg(q) using the matrix elements from [15]. The two estimates agree to within 5%. 1 Table 3. The signal and background cross-sections (in fb) in the τ τ channel after the cuts of eq. (13), including a 50% efficiency factor for each τ along with a 0.5% probability for mistagging a normal hadron jet as τ . The last row shows the total signal and background cross-sections after the R > 0.8 cut on the two τ -jets. Signal Background 2 compares the P τ = +1 signal and this P τ = −1 background as functions of the τ -jet momentum fraction R carried by the charged prong. It clearly shows the complimentary shapes of the two distributions, similar to those of Fig. 1. It means that the difference comes mainly from the opposite polarizations of τ rather than kinematic difference between the signal and the background. Requiring the charged track to carry > 80% of the τ -jet energy-momentum (R > 0.8) retains 45% of the signal as against only 20% of the background. Moreover the R > 0.8 cut is also known to reduce the fake background from normal hadron jets by at least a factor of 5 [16]. Thus demanding both the τ -jets to contain hard charged tracks, carrying > 80% of their momenta, would reduce the signal by a factor of 5 while reducing the dominant background by at least a factor of 25. The same is true for the QCD dijet background not considered here. The τ τ background from W W and tt are also reduced by a factor of 25 each. On the other hand the background from Z ⋆ /γ ⋆ → τ τ has P τ = 0, and the corresponding distribution lies midway between those of P τ = ±1. The resulting suppression factor is ≃ (1/3) 2 . 
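The arithmetic behind the quoted suppression factors can be checked directly; the short sketch below simply squares the per-jet efficiencies stated above (about 45% for the P_tau = +1 signal, 20% for the P_tau = -1 backgrounds, and roughly 1/3 for the unpolarized Drell-Yan tau pairs) when the R > 0.8 requirement is applied to both tau-jets.

```python
# Per-jet efficiencies of the R > 0.8 requirement, as quoted in the text.
eff = {
    "signal (P_tau = +1)": 0.45,
    "Wj / WW / ttbar background (P_tau = -1)": 0.20,
    "Z*/gamma* -> tau tau (P_tau = 0)": 1.0 / 3.0,
}
for name, e in eff.items():
    print(f"{name}: per jet {e:.2f}, both jets {e**2:.3f} (about 1/{1.0 / e**2:.0f})")
# Signal keeps ~1/5, the P_tau = -1 backgrounds keep 1/25, Drell-Yan keeps (1/3)^2,
# which is what turns a 1:5 signal-to-background ratio into roughly 1:1.
```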
Figure 2 (caption): The normalised SUSY signal ($P_\tau = +1$) and $Wj$ background ($P_\tau = -1$) cross-sections in the 1-prong hadronic $\tau$-jet channel, shown as functions of the $\tau$-jet momentum fraction ($R$) carried by the charged prong. As we see from the bottom row of Table 3, the $R > 0.8$ cut on both the $\tau$-jets reduces the total background to the signal level, i.e. about 3.5 fb each. With the expected Run-2 luminosity of 2 fb$^{-1}$ per experiment, one expects a combined yield of $\sim 14$ signal events against a similar background from CDF and DØ. Note that the corresponding significance level is $S/\sqrt{B} \simeq 4$ with or without the $R > 0.8$ cut. Nonetheless it is no mean gain that this cut can enhance the signal-to-background ratio from 1/5 to at least 1. This means that the $\tau\tau$ channel can offer a viable SUGRA signature along with the $\ell^+\ell^-\tau$ channel at the Tevatron upgrades, starting with Run-2. It may be noted from Table 3 that requiring the $\tau$ pair to have opposite sign (same sign) will retain a little over 3/4 (under 1/4) of the signal while retaining 1/2 of the dominant background. Thus with sufficient luminosity it may be possible to improve the signal-to-background ratio by requiring the $\tau$ pair to have opposite sign. Finally it should be noted that while we have focussed the current analysis on the SUGRA model, the same polarization strategy can be used to distinguish the SUSY signal from the SM background in the gauge-mediated SUSY breaking model [17], where one expects $P_\tau = +1$ from the $\tilde\tau_R \to \tau\widetilde G$ decay.
2014-10-01T00:00:00.000Z
2001-09-11T00:00:00.000
{ "year": 2001, "sha1": "61c1e1639184b3c88cecb147a0e4c381d0a68377", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/0109096", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "61c1e1639184b3c88cecb147a0e4c381d0a68377", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
260611028
pes2o/s2orc
v3-fos-license
Delete: Deep Lead Optimization Enveloped in Protein Pocket through Unified Deleting Strategies and a Structure-aware Network Drug discovery is a highly complicated process, and it is unfeasible to fully commit it to the recently developed molecular generation methods. Deep learning-based lead optimization takes expert knowledge as a starting point, learning from numerous historical cases about how to modify the structure for better drug-forming properties. However, compared with the more established de novo generation schemes, lead optimization is still an area that requires further exploration. Previously developed models are often limited to resolving one (or few) certain subtask(s) of lead optimization, and most of them can only generate the two-dimensional structures of molecules while disregarding the vital protein-ligand interactions based on the three-dimensional binding poses. To address these challenges, we present a novel tool for lead optimization, named Delete (Deep lead optimization enveloped in protein pocket). Our model can handle all subtasks of lead optimization involving fragment growing, linking, and replacement through a unified deleting (masking) strategy, and is aware of the intricate pocket-ligand interactions through the geometric design of networks. Statistical evaluations and case studies conducted on individual subtasks demonstrate that Delete has a significant ability to produce molecules with superior binding affinities to protein targets and reasonable drug-likeness from given fragments or atoms. This feature may assist medicinal chemists in developing not only me-too/me-better products from existing drugs but also hit-to-lead for first-in-class drugs in a highly efficient manner. Introduction (Docking Affinity and Scoring Affinity) calculated by AutoDock-Vina are presented 29 . Docking Affinity is obtained based on the docking conformations, which reflects the true binding affinity more closely; while Scoring Affinity is calculated based on the directly generated conformations from DL models, which demonstrates the capability of these models to capture protein-ligand binding geometries. On the BindingMoad set, the ranking in terms of binding affinity is Delete-CM > Delete-M > BindingMoad > DiffLinker, indicating that DiffLinker cannot always optimize the molecule from lead fragments that can bind more tightly than its natural reference. The success rate of the molecules generated by Delete is higher than 90%, demonstrating that Delete is a powerful tool in linker design. Other drug-forming properties, such as QED, SA, Rule of Five (Lipinski) and LogP, are non-trivial prediction problems in computational chemistry 30,31 . Hence, we argue that these drug-forming properties of the generated molecules only need to fall within reasonable intervals, which Delete satisfies. To further illustrate the potential of Delete in real-world drug discovery, we present a practical example from fragment-based drug discovery campaigns in the context of tuberculosis. Targeting inosine 5'-monophosphate dehydrogenase (IMPDH) presents a compelling strategy in drug design given its essential role as a key enzyme in the de novo biosynthesis of guanine nucleotides. Trapero et al. 32 have successfully discovered potent inhibitors for IMPDH through a comprehensive approach of fragment-based screening and structure-based design. During the fragment library screening, the authors identified a phenylimidazole derivative displaying low affinity towards IMPDH. 
To gain insight into the binding mechanism of this compound, X-ray diffraction analysis was performed, revealing that two molecules of the phenylimidazole derivative were concurrently engaged with the NAD pocket of IMPDH at a close molecular distance (Figure 2A). Through linking the two phenylimidazole molecules, the researchers succeeded in generating a novel compound ( Figure 2B) that exhibits a remarkable improvement of over 1000-fold in IMPDH affinity compared with the initial fragment hit. To follow their footsteps, we sought to simulate the real design process of the most potent compound by deleting its linker atoms based on the structure of the initial fragment hit and employing Delete to generate the linker segment of the molecule (Figure 2C-D). Remarkably, Delete successfully hit the origin compound by only generating a total of 76 different compounds. As shown in Table 2, despite an average 2D similarity of only 0.5010 when compared with the original crystal compound, the generated compounds display a high 3D shape similarity of 0.9285, indicating that Delete can preserve the key geometrical features of the original compound while producing diverse molecules. The average SA of the generated compounds is notably high at 0.717, indicating the rationality of the generated compounds. When comparing the generated molecules with the original compound using Vina scoring, 65.8% of the generated molecules exhibit better scores ( Figure 2F). The results of Figure 2G show that 60.5% of the generated samples gain higher estimated energy after conformational search, i.e., docking. Figure 2E exhibits the overlays of the four generated compounds and the corresponding 2D structures of the linker fragment. It demonstrates that the length of the linker generated by Delete is consistent with that of the linker in the original structure, and the generated compounds can accurately reconstruct the 3D structure of the original compound. For instance, as shown in the first example, the generated structure of the original compound is almost identical to that suggested by the crystal structure (RMSD). Furthermore, the other three examples indicate that Delete can provide more linkers with superior affinity and proper drug-likeness properties. To sum up, Delete possesses the capability to target reasonable and effective linkers with excellent efficiency, and generate molecules with improved scores, which will significantly expedite the drug linker design process. PROTAC Design As a specific task of linker design, proteolysis-targeting chimeras (PROTAC) design has garnered widespread interest. Recent research has explored the use of DL to accelerate PTOTAC design 33 , but this approach primarily explores the 2D chemical space rather than rational design based on the interactions induced by protein pockets. So here, we employed the linker-trained version of Delete to design a series of PROTACs targeting the famous SMARCA2 case. SMARCA2 is an ATPase subunit of the BAF (SWI/SNF) chromatin remodeling complexes, which controls genes involved in various biological processes such as DNA damage repair, DNA replication, and cell growth, division, and maturation 34 . Farnaby et al. 35 developed PROTAC degraders of SMARCA2 by employing a bromodomain ligand and recruitment of von Hippel-Lindau (VHL) E3 ubiquitin ligase. By isothermal titration calorimetry (ITC) experiments, the authors found that the binding of SMARCA2 and VHL by a poly(ethylene glycol)-based linker (PROTAC1) displayed 4.8 fold greater affinity. 
In order to not only mimic the binding conformation of PROTAC1 but also exhibit superior molecular recognition, they designed a linker by incorporating a benzylic group to form a π-stacking with the receptor Y98 residue (PROTAC2). The 2D structures and the overlay of two PROTACs are shown in Figure 3A and Figure 3B. In order to evaluate the ability of Delete for the challenging task of PROTAC linker design, we excluded the known linker atoms of the PROTACs and retained solely the VHL and SMARCA2 binding fragments as the input for the Delete model ( Figure 3C). Through this approach, we aim to investigate whether Delete could effectively generate promising linker structures in this such complex and demanding design task. We directly generated 106 linker structures using Delete without any further optimization and filtering. After scoring these structures with AutoDock-Vina and comparing them with PROTAC2, we observed that 88.7% of the generated compounds had better scores ( Figure 3E). The docking results surprisingly show that 98.1% of the compounds have better scores ( Figure 3F). Furthermore, compared with the crystal structure of PROTAC2, the average 3D shape similarity of the generated molecules is as high as 0.8931, and their average SA is 0.5446, demonstrating that Delete can generate drug-like molecules with reasonable structures and higher activities for the PROTAC linker design task. Here, we present three generated PROTAC compounds that can well mimic the length and binding conformation of the linker in the crystal structure of PROTAC2 ( Figure 3D). Interaction pattern analysis shows that these compounds demonstrate the ability to maintain the crucial CH−π interaction with Y98 observed in the crystal structure of PROTAC2. Summarily, Delete possesses the capability to PROTAC linker design, and the diversity of the generated compounds offer promising avenues for the design of novel linkers. Scaffold Hopping Scaffold hopping, a famous lead optimization strategy and also a well-known approach to develop me-too drugs, was first proposed by Gisbert 36 in 1999. Since then, a variety of computational methods have been developed 37-39 to spread this strategy. Most traditional methods rely on similarity comparison, such as bioisostere substitution or pharmacophore search 40 . In this study, we evaluated the performance of Delete on scaffold hopping under the Bemis-Murcko scaffold definition 41 . The results shown in Table 1 indicate that not only the docking energies of the generated molecules are lower than those generated by CrossDock and BindingMoad but also the scoring energies are also lower, demonstrating the powerful generative capability of Delete to build favorable scaffolds inside the protein pockets. It is interesting to find that most of the drugforming properties have a slight improvement in this task, though the BM scaffold removes the largest part of molecules as illustrated in Figure 4D. It seems that even only based several atoms scattered inside the pocket, Delete could still suggest potential structures for chemists. The impressive results of Delete on the most challenging scaffold hopping scenario implies that it would work well in the cases that removing fewer part of molecule, which is easier but closer to real-world applications. We investigated the performance of Delete on scaffold hopping for kinesin Eg5, which is a crucial target for cancer chemotherapy drug development 42 . Ulaganathan et al. 
42 identified and characterized an allosteric pocket of Eg5 along with an inhibitor BI8 with a nanomolar K d , using X-ray methods ( Figure 4A-C). To accomplish the scaffold hopping task, we employed the Bemis-Murcko scheme to select the core scaffold with its corresponding side chains and then deleted the scaffold of the BI8 compound, retaining only some tiny fragments (TFs) for generation ( Figure 4D). The objective of this study is to investigate whether, using only TFs, Delete can construct reasonable scaffolds, generate compounds resembling the original structure, and create compounds that retain the geometrical similarity but with a novel skeleton. We successfully generated a total of 131 structures, and surprisingly, our results demonstrate that Delete can produce compounds that closely resemble the original compound. The average 2D similarity of the generated structures is only 0.4700, implying that the generated scaffolds have changed to different structures compared to the original compound (previous work take 0.6 as the threshold 19 ), while the 3D shape similarity was as high as 0.7878, indicating that the generated molecules retained the spatial characteristics of the original compound. By directly scoring the generated compounds using AutoDock-Vina and comparing them with the original compound, we found that the computed biological activity of the generated compounds was equivalent to or even higher than that of the original compound ( Figure 4F). Additionally, we redocked the generated and original compounds and compared the docking scores. The results show that 40.5% of the generated compounds achieve better scores than the originals ( Figure 4G). Figure Side-chain Decoration Side-chain decoration is another important lead optimization perspective, in which the foundation lies on growing side chains on privileged scaffolds 43 to explore possible interactions with residues while keeping the scaffold unchanged. Generating a series of scaffold-constraint compounds reduces the difficulty of synthesis and could retain the dominant structures of molecules that contribute to biological activity. Table 1 (Figure 5A), which revealed the presence of two sub-pockets positioned above and below the orthosteric site of the DRD2 receptor ( Figure 5B). Specifically, these sub-pockets are respectively bound by the tetrahydropyridopyrimidinone and benzisoxazole moieties of risperidone. Furthermore, several significant residues such as W100, F110, W386, F390, Y408 and T412 were identified within this binding site (Figure 5C). In this study, we utilized the piperidine structure of risperidone as a starting fragment for sidechain decoration to design novel drugs targeting DRD2 (Figure 5D). Through this approach, we generated 110 novel structures with a high degree of conformational similarity and molecular diversity compared to the X-ray crystallized compound. The average 2D and 3D similarities of the generated molecules are 0.4582 and 0.7422, respectively. Furthermore, the average SA and QED values of the generated molecules are 0.7215 and 0.6531, respectively, indicating the high synthesizable and druggable potentials of the generated compounds. Using AutoDock-Vina to directly score the generated structures, 23.6% of the molecules outperformed the original compound ( Figure 5F). 
Furthermore, after docking and comparing the Top1 docking scores, 43.6% of the compounds exhibit higher binding capability (Figure 5G), suggesting the efficacy of our side-chains decoration approach for designing drugs targeting DRD2. Figure 5E Fragment elaboration Fragment elaboration and side-chain decoration are two methods used in lead optimization to improve the pharmacological properties of drug candidates. While there is some overlap between the two techniques, fragment elaboration focuses on expanding a larger functional group that contains more pharmacophores to fill the empty sub-pockets, whereas side-chain decoration involves the optimization of 4-5 sites of the given scaffold. The results of fragment elaboration in Table 1 keep similar to the other three tasks, the binding affinities are advanced and the drugforming properties locate in a slight variance interval compared with the test set. In this task, recovery of lead optimization on the β1-adrenergic receptor (Adrb1) is the real-world case to demonstrate the potential of Delete in fragment elaboration. Adrb1 is a classic drug target with a well-established structure that has been extensively studied over the years 45 . Adrb1 antagonists are frequently used in cardiovascular medicine, as well as in other therapeutic areas, including migraine and anxiety 46 . Notable examples of these antagonists include cyanopindolol and carazolol (Figure 6A), which possess comparable ethanolamine backbones and distinct aromatic or heteroaromatic moieties. To investigate whether Delete could successfully reproduce the structure of cyanopindolol and whether it could further hit the heteroaromatic skeleton of other Adrb1 inhibitors, we started with the crystal structure of cyanopindolol ( Figure 6B) and removed the heteroaromatic moiety, retaining solely the ethanolamine backbone as the foundation for fragment elaboration (Figure 6C). We generated a total of 117 structures and successfully obtained the exact structure of cyanopindolol while even exploring the structure of carazolol, another inhibitor of Adrb1. It is shown that the molecules generated by Delete not only gain a high 3D conformational similarity to the original compound but also maintain a high diversity, according to the average 2D similarity of the generated molecules being 0.3471 and the 3D shape similarity being 0.8138. Furthermore, the average SA and QED of the generated molecules are 0.7118 and 0.7669, respectively, demonstrating their high synthesizable and druggable potential. By directly scoring the generated compounds using AutoDock-Vina and comparing them with the original compound, we find that the activity of the generated compounds is equivalent to or even higher than that of the original compound ( Figure 6F). Additionally, we re-docked the generated and original compounds to compare the docking scores. The results show that 66.7% of the generated compounds achieve better binding capability than the originals (Figure 6G), which further provides the effectiveness of the generated compounds. As shown in Figure 6B, the heteroaromatic moiety of cyanopindolol predominantly engages in hydrophobic and hydrophilic interactions with V122, S211, F307 and N310 of Adrb1. Four of the generated compounds are shown as examples in Figure 6E, among which the first example completely replicates the structure of the original compound, exhibiting excellent 3D structural superposition with the original compound and accurately reproducing all interaction features. 
The second example successfully hits another strong and potent Adrb1 antagonist compound, carazolol, and gains more interactions than the original structure as well as a higher Vina score. The wet experiment also reported that the activity of carazolol was about times higher than that of cyanopindolol Analysis of Generated Conformations Since Delete possesses the 3D generation capability, its generation of molecules is accompanied by the prediction of the binding conformation within the pocket. Previous experiments have focused on testing the chemical properties of the generated molecules, but this experiment emphasizes the plausibility of the geometries of the generated samples. Since it is impossible to get all the X-ray crystallized complex structures for our generated molecules, a feasible approach is to obtain the near-natural conformations by the docking method. Molecular docking was performed on the generated compounds and original ligands, and two RMSDs were computed: the RMSD of the crystallized conformations with the re-docked conformations for the original ligands (BindingMoad dataset); the RMSD of the docked conformations with the directly generated conformations for the model-generated molecules. The results shown in Table 3 demonstrate that the two types of RMSD are quite similar, with the mean values within 2Å , which is a typical threshold for successful docking in virtual screening 47 . The generation of the conformations similar to the crystal structure confirms that Delete has learned physically meaningful protein-ligand interactions from a large amount of structure data to match the atoms to the appropriate potential low within the pocket. Limitations Each model has its own limitations, which often rely on the underlying assumptions made in designing the model. All models are wrong, but some are useful, coined by the statistician, George Box 48 . In Delete, the basic assumption is the rigid generation assumption, i.e., the addition of atoms to the lead compound does not affect its position. This rigid constraint, although somewhat seemingly strong, still has its plausibility. The molecule obtained by rigid addition ensures that each step of atomic expansion fills a corresponding part of the pocket, resulting in an improvement of binding affinities with targets. If the structures change with the molecular growing process, the atoms that are suitable at the beginning will leave their proper positions during the change, and it cannot be guaranteed that the completed molecule fits the pocket perfectly. The discussion about which model based on which assumption is better needs to be further explored by future researchers. Conclusions Delete is an all-in-one solution for lead optimization in drug discovery, made possible by the introduction of the 3D molecular generation framework and the unified deleting strategies. This framework enables a single model to perform multiple tasks of lead optimization with excellent efficiency. Additionally, the embedding of geometric neural networks allows for the simultaneous prediction of near-natural conformations of complete molecules, unifying molecular generation and conformation generation. Comprehensive in-silico experiments conducted under four different lead optimization scenarios have confirmed the capabilities of Delete. 
It can be employed not only to optimize lead compounds for first-in-class drugs but also to assist medicinal chemists in developing me-too/me-better products by structural modification/substitution from existing drugs. It is expected to see that Delete would be experimentally verified through feature real-world drug discovery campaigns. Method Dataset Construction Delete has been trained on two datasets, one is CrossDock 24 , and the other is BindingMoad 27 . CrossDock is a recognized benchmark for existing pocket-aware 3D de novo design models 26,49,50 , which is curated from molecular cross docking results. Although docked conformations do not share exactly the same patterns with X-ray crystallized structures in principle, it is acceptable to enrich the dataset using physical tools since crystal data is far from saturated. In contrast, the BindingMoad dataset, which is the evaluation choice for DiffLinker 23 , consists of all crystal structures. After data processing, the CrossDock dataset has 100,000 training pocket-ligand pairs, while the BindingMoad has 35,516 training pairs. The pocket-ligand pairs are needed to be further processed to become training data for lead optimization. We comprehensively investigate the previous lead optimization methods and select the corresponding representative approach to obtain sub-task data of lead optimization. In the linker design task, matched molecular pair analysis 51 (MMP) is performed to cut acyclic single bonds twice; in the fragment elaboration, functional fragments are obtained from one cut of acyclic single bonds; in the scaffold hopping, the scaffold is derived as Bemis-Murcko scaffold 41 ; in the side-chain decoration, the side chains are all terminal acyclic groups. Besides, we also provide some other decomposition methods in the program, such as using the BRICS 52 rule to obtain fragments and using ScaffoldNetwork 53 to obtain intermediate scaffolds. Unified Deleting Framework Self-supervised learning has made significant progress in deep learning, substantially boosting the model's performance on many tasks 54 . As a part of self-supervised learning, the masking strategy aims to mask and recover whole, parts of, or merely some features of its original input [55][56][57] . In natural language processing, researchers mask the context and recover the marked words or phrases in the pre-training phase, so the resulting model can be used either as a pre-training encoder or directly for text generation 58,59 . In computer vision, researchers randomly mask pixels in images and recover them with Encoder-Decoder architecture 55 . In summary, pre-trained with masking strategies, models can automatically learn the intrinsic structure in the data itself on large amounts of unlabeled staff, which in turn improves their performance on downstream tasks. In the context of molecular data pre-training, there have also been some developments in masking Attachment Point Enumeration The first step in generating a new compound is to predict potential attachments on the existing lead compound. To do this, we enumerate all the atoms within the lead and predict their attachment scores, denoted as . A high score indicates a higher possibility of attachment. The overview of attachment prediction goes as follows: where 1,2,3,4 are linker transform matrixes, is the sigmoid function, is the active function, default is Leaky Relu. 
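The attachment-scoring equations above did not survive text extraction intact (the weight-matrix and activation symbols are missing), so the following PyTorch-style sketch only illustrates the structure that the prose describes: a small per-atom network built from four linear transformations with LeakyReLU activations and a final sigmoid, mapping each lead-atom embedding to an attachment score in (0, 1). The class name, layer widths, and input shapes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttachmentHead(nn.Module):
    """Per-atom attachment scoring: four linear maps with LeakyReLU in between
    and a sigmoid on the output, so every score lies in (0, 1)."""

    def __init__(self, emb_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, atom_emb: torch.Tensor) -> torch.Tensor:
        # atom_emb: (n_lead_atoms, emb_dim) embeddings from the structure-aware encoder.
        return torch.sigmoid(self.mlp(atom_emb)).squeeze(-1)

scores = AttachmentHead()(torch.randn(17, 128))  # one score per enumerated lead atom
print(scores.shape, float(scores.max()))         # the highest-scoring atom is grown next
```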
Atom Placement To place atom within pockets, the coordinate → , atomic symbol , and bonding relationship with existing atoms should be generated. First, we generate the coordinate and relative position of atom using the following equations: where 1 , 2 , 3 , 4 are parameterized matrixes, is the active function, is the coefficient of k-th component of relative coordinates, is the coordinate of the attachment point. The coordinate of atom is determined by the coordinate of the attachment point and the relative position. Such a design reduces the error compared with the direct prediction of Cartesian coordinate 67 , and avoids predicting 1-2, 13, 1-4 atoms in the local coordinate system 68 . Next, we predict the atomic symbol based on the interaction features of the pocket-ligand graph using the following equations: where Message is the message passing module, 1 is the parameterized matrix. Finally, we predict the bonding relationship with the existing atoms j using the following equations: Unlike previous work 68,69 that determines the bonding relationship with chemical rules (usually by OpenBabel), we predict it directly, which backward more information to the embedding module and reduces the potential error introduced by external bonding determination. Generation Complete There are two criteria for stopping generation, one is the given max number of atoms, and the other is automatically stop when the attachment probability of each atom is lower than a given threshold (e.g., 0.5). Shape similarity is quantified using the shape Tanimoto distance 72 , while electrostatic similarity is determined by the Carbo similarity 73 of the Coulomb potential overlap integral, which employs Gasteiger charges 74 . Supporting Information Part S1. Details of the design of the Delete modules.
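For readers who want to reproduce the kind of 2D/3D similarity numbers quoted throughout the Results, the sketch below computes a Morgan-fingerprint Tanimoto similarity and an RDKit shape-Tanimoto score for a pair of molecules. This is one plausible way to obtain such metrics, not the exact protocol of the paper: the placeholder SMILES, fingerprint settings, and O3A alignment step are assumptions (in the paper the generated poses already sit in the pocket frame, so no re-alignment would be needed there), and the Carbo electrostatic similarity is not reproduced here.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, rdMolAlign, rdShapeHelpers

# Placeholder ligands (propranolol-like SMILES chosen only for illustration).
ref = Chem.AddHs(Chem.MolFromSmiles("CC(C)NCC(O)COc1ccccc1"))
gen = Chem.AddHs(Chem.MolFromSmiles("CC(C)NCC(O)COc1ccc(C)cc1"))

# 2D similarity: Tanimoto on radius-2 Morgan fingerprints.
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in (ref, gen)]
print("2D Tanimoto:", DataStructs.TanimotoSimilarity(*fps))

# 3D shape similarity: embed, align with O3A, then 1 - shape-Tanimoto distance.
for m in (ref, gen):
    AllChem.EmbedMolecule(m, randomSeed=7)
rdMolAlign.GetO3A(gen, ref).Align()
print("3D shape similarity:", 1.0 - rdShapeHelpers.ShapeTanimotoDist(gen, ref))
```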
2023-08-07T06:42:35.488Z
2023-08-04T00:00:00.000
{ "year": 2023, "sha1": "d0252663975b385a25abb7b224ee7e3d09784107", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d0252663975b385a25abb7b224ee7e3d09784107", "s2fieldsofstudy": [ "Computer Science", "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
268582840
pes2o/s2orc
v3-fos-license
Characteristics and Geological Significance of Lacustrine Hydrothermal Sedimentary Rocks in the Yingejing Sag in Bayingebi Basin, Inner Mongolia, Northwestern China In order to promote site screening for high-level radioactive waste (HLW) disposal purposes, the characteristics of argillaceous rock (potential host rock) from the Yingejing Sag of the Bayingebi Basin, Northwest China have been well discussed. Results show that (1) Unlike argillaceous host rocks in foreign countries, the argillaceous rock mainly consists of analcite, dolomite, and albite; the contents of clay minerals are only about 10%. Five typical structures could be categorized, dominating by the massive structure. (2) Geochemical characteristics have the characteristics of abundance in deep source gas and liquid trace elements, a gentle right dip in the distribution pattern of rare-earth elements. (3) Petrological and geochemical characteristics determine the argillaceous rock as the genesis of lacustrine hydrothermal sedimentary rock. The hydrothermal sedimentary model also has been constructed, mainly controlled by tectonic activity of the Altyn Tagh fault from 100 to 120 Ma. The massive argillites with analcite and dolomite would lay the foundation for confirming the site for HLW disposal purposes in China. INTRODUCTION Many nuclear countries have conducted detailed research on argillaceous rock as the host rock for high-level radioactive waste (HLW) repository as a result of minimal porosity, self-sealing ability, and almost no systematic exchange of materials with the environment.−4 Currently, the Yingejing Sag (Tamusu area) in the Bayingebi Basin, Northwest China has been selected as a potential area for argillaceous rock repository purposes (Figure 1a).The target argillaceous rock formation is the lower Cretaceous Bayingebi formation with more than 500 m thickness, and the upper Bayingebi formation (K 1 b 2 ) is the target formation.However, preliminary research found that the mineral compositions and typical structures are distinct from the typical argillaceous rock overseas, which has been defined as argillaceous rock containing analcite and dolomite.The argillaceous rock containing analcite and dolomite (target formation) cannot be explained through normal lacustrine sedimentation.Meanwhile, elementary issues such as the mineral assemblage and genesis are still pending, which have delayed the site-screening process. Existing research studies near the Bayingebi Basin such as the Jiuxi Basin, 5 Santanghu Basin, 6 and Erlian Basin, 7 as well as continental lake basins such as the Junggar Basin 8 and the Hari Sag in the Yine Basin 9 (Figure 1b), pointed out that the lacustrine hydrothermal sedimentation may be the genesis of typical mineral compositions and structures. 10It could be inferred that lacustrine hydrothermal sedimentation provides a path to address characteristics and the genesis of argillaceous rock containing analcite and dolomite.It is known that hydrothermal sedimentation is generally defined as the deposition that occurs when a hot water system circulating in marine or lacustrine basement rocks emerges near the interface. 
11During the hydrothermal sedimentation process, some typical structures are formed such as lamunar, speckled, and contemporaneous deformations.Meanwhile, their mineral compositions and elemental geochemical characteristics are different from those of the normal sedimentary rock.Therefore, focused on argillaceous rock containing analcite and dolomite (target formation), this study tries to find out (1) the petrological characteristics and mineralogical characteristics, (2) the elemental geochemical characteristics, and (3) the sedimentary patterns and control factors.This study is helpful in promoting the confirmation of site screening for HLW disposal purposes in China. 1.1.Geological setting.The Altyn Tagh fault (ATF) is a major, active intracontinental strike-slip fault system that separates the northern Tibetan Plateau from the Tarim block (Figure 1cd, 13,15 ).The fault extends from the western Kunlun to the northeast side of the Qilian Shan with a total length greater than 2,000 km. 13,14East of the northeastern most surface trace of the ATF at approximately 97°15′E longitude, an array of faults that have a regional horsetail splay geometry in the southern Alxa block may kinematically link to the ATF and constitute an eastward extension of the fault system. 9The ATF has been extending northeastward into the Mongolian region and is mainly characterized by a strike slip. 16,17The Bayingebi Basin is located in the Southern Alxa block.The tectonic setting of the basin is the junction of the Tarim, Kazakhstan, Siberian, and North China plates.The basin spans four different geotectonic units 18,19 and is oriented in a near-EW direction.The Yingejing Sag is located at the southern end of the Bayingebi Basin and is oriented in a roughly NE direction, covering an area of approximately 9000 km 2 (Figure 1a).The sag has stable sedimentation, a small phase transition, and large depth variation, with deep burial in the south, slightly less burial in the north, and a simple basement structure.The overall trend of faults, the secondary faults of ATF, in the area is NE or near-EW.The near-EW direction represents earlier faults than the NE direction. 20These faults control the interfacial structure of the basin as well as the type and spatial distribution of the sedimentary system. The basement of the Yingejing Sag is composed of Archean, Proterozoic, and Paleozoic metamorphic rocks.The caprock is mainly developed in the Jurassic, Cretaceous, and Quaternary strata.The Jurassic sedimentary strata are mainly composed of coal-bearing, coarse, and clastic rocks.The lithology is dominated by variegated conglomerate, sandstone, and fine sandstone; conglomerate and argillites are developed at the bottom, with dark gray and black tuff containing volcanic breccia at the top.Cretaceous strata comprise the sedimentary body of the caprock with a sediment thickness of more than 2200 m.The Lower Cretaceous sediments form the upper and lower portions of the Bayingebi Formation, namely, the K 1 b 1 and K 1 b 2 .The main lithology of the lower portion of the Bayingebi Formation (K 1 b 1 ) is composed of a purplish-red conglomerate and sandstone, occasionally mixed with siltstone and mudstone. The upper portion of the Bayingebi Formation (K 1 b 2 ) is composed of mainly gray-green and dark gray argillites and forms the target layer for the site of an HLW repository. 
1.2.Petrological Characteristics of Argillites.Core observations and thin-section identification indicate that the upper portion of the Bayingebi Formation (target formation) contains reticular, speckled, massive, laminar, and contemporaneous deformations and shows an obvious regularity in the longitudinal direction.The lower portion is mainly composed of net veins, the middle and lower portions are speckled, the middle portion is homogeneously blocky, the middle and upper portions are mainly layered, and a number of gypsum layers are developed in the upper portion.A single layer of gypsum is generally 5−10 cm thick.Detailed analysis of the argillite structure was conducted by using a full core of the TZK-2 borehole. 1.2.1.Reticular Dolomite.The reticular structure is mainly developed at a depth of 730−800 m in the well, and the main vein-filling minerals are dolomite, ferric dolomite, analcite, and pyrite.The fractures range from wide to narrow, with reticular or irregular shapes (Figure 2a,b).Most of them are high-angle fractures interlaced with horizontal fractures.In the core, the width of the fracture in the filling veins is generally 0.5 mm−4 mm. 1.2.2.Speckled Analcite Dolomite.Speckled structures are mainly developed in the well at a depth of 550−730 m, and the particle composition of these speckled structures includes coarse-grained dolomite, calcite, pyrite, and analcite; the particle size is generally 2−6 mm.The core sample shows that these "snowflakes" are distributed within the matrix material (Figure 2c), which is composed of mainly argillaceous dolomite or argillaceous sediments and is rich in organic matter; horizontal and small deformation beddings are locally developed. 1.2.3.Massive Dolomitic Analcite.The 260−550 m section of the well exhibits a massive structure, with massive argillite forming the main rock type in the upper portion of the Bayingebi Formation.The rocks are mainly gray to dark gray argillites with a small amount of grayish-white silty argillites (Figure 2d).The massive argillite has excellent uniformity, a high degree of consolidation, and a slippery feel; the cut surface is smooth and exhibits a conchoidal fracture.Under a polarized light microscope, the texture of the massive argillite is uniform and is usually associated with dolomite, calcite, a small amount of organic matter, and pyrite.The argillite is situated in the middle of the upper portion of the Bayingebi Formation, which is the main target stratum for the site of an HLW repository. 1.2.4.Laminar Dolomitic Analcite.The laminar structures are mainly developed at 31−260 m in the well, although they were also intermittently observed in other layers (Figure 2e,f).This is the most common sedimentary structure in the argillites of the TZK-2 well, and the white laminar structures are composed of various mineral combinations with thicknesses of several micrometers to several millimeters.Depending on the different types of minerals in the stratum, the common mineral combinations are as follows: (1) single dolomite layer, (2) dolomite analcite combination layer, and (3) dolomite and organic layer.When the thickness of the upper layer is increased to a few millimeters or even centimeters, it is termed a banded structure.In the banded structures, crystals are more euhedral and generally larger than those in the laminar structures. 
1.2.5.Syngenetic Deformed Dolomitic Analcite.This structure is characterized by soft deformation and irregular crumpled bedding to form laminations or strips.The scale of deformation is generally several centimeters (Figure 2g).This type of deformation structure may be caused by unconsolidated or weakly consolidated laminations of a white mineral being locally disturbed by mud and gravel (Figure 2h). SAMPLING AND ANALYSIS METHODS All of the collected boreholes from the Yingejing Sag were carried out with water by single-core barreling techniques.The research samples were taken from the target formation (K 1 b 2 ) were vacuum-sealed to maintain freshness.Based on the detailed observation and description of the argillite cores of the TZK-1 and TZK-2 wells, 150 thin sections were made to observe the microstructural characteristics of the argillite.Thirty samples were selected for analysis with scanning electron microscopy to further observe the crystal structure and micromorphological characteristics of the main minerals in the argillite.A total of 116 samples were selected for full-rock X-ray diffraction analysis to determine the main mineral species and the composition, in the vertical and horizontal directions, of the Bayingebi Formation argillites.In addition, 30 samples were tested for trace and rareearth elements.All the samples were researched in an orderly manner according to the corresponding experimental standards.The samples were representative, and the data were informative and reliable.Scanning electron microscopy, X-ray diffraction analysis, and the electron probe microanalysis (EPMA) tests (JXA-8530F Plus, Japan) were conducted at the State Key Laboratory of Nuclear Resources and Environment of East China University of Technology.The analytical instruments included a Nova Nano SEM450 emission scanning electron microscope from FEI Czech Co., Ltd.(with X-Max20 energy), a spectrometer, and a German Bruker D8 ADVANCE polycrystalline X-ray diffractometer.The EPMA analytical conditions were as follows: an acceleration voltage of 15 kV, a current of 20 nA, and a beam spot diameter of <2 μm, with a 10 s counting time for the major elements and 20 or 40 s for the minor elements. Analysis of trace and rare-earth elements was completed at Aussie Analytical Testing (Guangzhou) Co., Ltd.The experimental instruments included an inductively coupled plasma atomic emission spectrometer, produced by Agilent, USA, and an inductively coupled plasma mass spectrometer (model PerkinElmer Elan 9000, USA).The experiments were conducted with the system settings, accuracy, and precision of the detection method (relative deviation and relative error) controlled at <10 ± 5%. Mineralogical Characteristics of Argillites. 
The target mudstone formation is mainly composed of three mineral species (Figure 3): carbonates (dolomite and ankerite), albite, and analcite.In addition, small amounts of evenly distributed clay minerals (mainly illite and kaolinite), quartz, calcite, hematite, and pyrite are also present.Carbonate minerals are abundant in all samples.In particular, the mass fractions of dolomite and ankerite ranged from 3 to 50% and from 4 to 25%, respectively, with average mass fractions of 28 and 13%, respectively.The mass fractions of analcite and calcite ranged from 1 to 43% and from 0.2 to 23%, respectively, with averages of 17 and 3%, respectively.Both analcite and calcite decreased rapidly with depth.The total mass fractions of clay minerals are less than 10%, dominated by illite and kaolinite with average )] calculated based on the North American shale mass fractions of 3 and 2%, respectively.The typical analcite and dolomite are described in detail. 3.1.1.Analcite.The ideal chemical formula for analcite is NaAlSi 2 O 6 •H 2 O.It is colorless and transparent under a single polarizing microscope and has a low−low protrusion.Under orthogonal polarized light, analcite exhibited a first-order gray interference color with full or weak extinction.The crystals had a high degree of self-formation, which were self-shaped and semiself-shaped, and were easily distinguishable from the other surrounding minerals.The single crystal of analcite examined with a scanning electron microscope was a typical tetragonal octahedron (Figure 4a,g).Due to the unique ion-exchange performance of the analcite, it has a strong adsorption capacity for nuclides.Therefore, the existence of this type of analcite significantly improved the physical properties of the host rocks in the Bayingebi Formation. 3.1.2.Dolomite.Dolomite is the main rock-forming mineral of the argillites.Based on the crystal size, shape, and distribution, three types of dolomites were developed, namely, mud-crystal dolomite (Figure 4b,e), powdered crystal dolomite, and coarsegrained dolomite (Figure 4c,d).Dolomite is often distributed in strips within the argillite.Under a microscope, mud-crystal dolomites have a small particle size and are miscible with terrigenous clastic minerals to distinguish their crystal forms.Under scanning electron microscopy, the crystal size of the mudcrystal dolomite was uniform and the particle size was 2−10 μm, while the grain size of the powdered dolomite was generally 30− 200 μm.Compared to the mud-crystal dolomite, the crystal of silty dolomite has a high degree of automorphism (semiautomorphic to automorphic), and the hole is relatively clean.Scanning electron microscopy revealed that rhombohedral dolomite often coexists with analcite with a high degree of self-formation.The coarse-grained dolomite is rhombohedral, and its grain size can reach 0.5 mm. Characteristics of Trace and Rare-Earth Elements. 
The average trace-element contents of the argillites of the Bayingebi Formation (Table 1) were divided by the average contents of the corresponding upper continental crust (UCC) elements to obtain concentration coefficients (Figure 5a). The argillite is clearly enriched in Sr, U, and V, with concentration coefficients >2. Uranium in particular is anomalously enriched in the research area, reaching approximately 0.01−0.06% and thus meeting an industrial-grade requirement. The uranium deposit in this area had previously been explored as a sandstone-type deposit; however, the discovery of several layers of industrial ore bodies in the argillites of the deep-lake facies is difficult to explain with the interlayer oxidation zone model, which has left the metallogenic mechanism of uranium in this area unclear. The contents of Co, Cu, and Ni are close to the upper-crust averages, displaying a slight enrichment, whereas Ba, Rb, Sc, Th, and Zr are lower than the upper-crust averages, showing a relative depletion. The Sr content (a large-ion lithophile element) is much higher than the upper-crust average, up to 2740 × 10−6 (i.e., 2740 μg/g), while Zr and similar elements are mainly hosted in coarse minerals and are relatively depleted in the fine-grained argillites. Se and S have similar geochemical properties, and their content in the upper portion of the Bayingebi Formation is 1 × 10−6 (μg/g), which is higher than the UCC abundance (8.83 × 10−8).21 In combination, they form an independent selenium-bearing mineral.

Rare-earth element concentrations vary widely in the argillites, with totals of 59.29 to 283.80 μg/g and an average of 139 μg/g, slightly lower than the UCC average (146.37 μg/g). The ratio of light to heavy rare-earth elements (LREE/HREE) varies from 8 to 31.10, with an average of 12.52. The argillites are enriched in LREEs and depleted in HREEs, and the LREE part of the distribution curve (La−Eu segment) has a steep slope, giving a distinct right-dipping pattern. The δEu values are approximately 0.545−0.703, with an average of 0.616, a moderately strong negative anomaly; the δCe values are approximately 0.908−1.059, with an average of 1.01. The average rare-earth distribution pattern of the argillites and that of the UCC (Figure 5b) show a consistent variation.

4.1. Hydrothermal Sedimentation Process. During hydrothermal venting at the bottom of a sea (or lake), hot fluid fills and circulates in the hydrothermal channel below the vent, while the sea (or lake) floor above the vent remains in contact with cold water. The material carried by the hot fluid interacts with the cold water and precipitates at the bottom of the sea (or lake), so that two diagenetic or metallogenic systems form during this hydrothermal flow. Saddle-like and zebra-like structures have been shown to have a hydrothermal metasomatic genesis, associated with tectonic faults, in marine hydrothermal dolomites.9,22 In lacustrine hydrothermal sediments, layered and patchy structures are considered important features formed by lacustrine hydrothermal fluids, reflecting pulsating hydrothermal eruptions and zoned deposition.
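Both normalizations used in this subsection are simple element-wise ratios. A minimal sketch of how the concentration coefficients and the REE parameters quoted above could be computed is given below; the element values and the δEu and δCe conventions (Eu_N/√(Sm_N·Gd_N) and Ce_N/√(La_N·Pr_N)) are placeholders and assumptions, not data or definitions taken from Table 1.

```python
# Sketch: UCC-normalized concentration coefficients and REE anomaly parameters.
# All numeric values are illustrative placeholders, not data from Table 1.
import numpy as np

sample = {"Sr": 2740.0, "U": 300.0, "V": 180.0, "Ba": 350.0}   # μg/g, hypothetical
ucc    = {"Sr": 320.0,  "U": 2.7,   "V": 97.0,  "Ba": 624.0}   # μg/g, assumed UCC averages

conc_coeff = {el: round(sample[el] / ucc[el], 2) for el in sample}  # >2 marks clear enrichment
print(conc_coeff)

# REE anomalies: normalize to a shale/UCC reference, then compare with neighbouring elements.
ree_sample = {"La": 30.0, "Ce": 60.0, "Pr": 7.0, "Sm": 5.0, "Eu": 0.9, "Gd": 4.5}  # μg/g
ree_ref    = {"La": 31.0, "Ce": 63.0, "Pr": 7.9, "Sm": 5.7, "Eu": 1.0, "Gd": 4.7}  # assumed reference

n = {el: ree_sample[el] / ree_ref[el] for el in ree_sample}
delta_eu = n["Eu"] / np.sqrt(n["Sm"] * n["Gd"])   # common convention for the Eu anomaly
delta_ce = n["Ce"] / np.sqrt(n["La"] * n["Pr"])   # common convention for the Ce anomaly
print(round(delta_eu, 3), round(delta_ce, 3))
```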
5,7,8From the bottom to the top of the upper portion of the Bayingebi Formation, the argillites exhibit a network of veins, a patchy structure, a massive structure, and a layered structure.The upper portion of the argillites has a large number of thin gypsum layers.The vertical structure of the argillites is very similar to that of hot water. Based on the temperature characteristics and mineral colors, hydrothermal deposition is divided into a "white chimney" type and a "black chimney" type.The first type forms light-colored minerals, such as carbonates, silicates, and sulfates, at lower temperatures, and the second type forms mainly dark minerals, such as sulfides and oxides, at higher temperatures.Most lakefacies hydrothermal deposits discovered to date have been characterized as the "white chimney" type, and the main minerals formed are aluminosilicates (such as albite, analcite, and tourmaline), sulfates (such as barite) and carbonates (such as dolomite and siderite). 7Analcite, dolomite, ferruginous dolomite, fluorite, galena, sphalerite, uranium minerals, and selenium minerals occur in the upper portion of the Cretaceous Bayingebi Formation in Yingejing Sag.To determine if the analcite in the study area is a metamorphic or primary mineral, the authors examined samples with a microscope and found that some samples contain montmorillonite, with contents less than 5%; montmorillonite is a layered silicate.Medium-water minerals are easily dehydrated and converted to illite during low-grade metamorphism.No low-grade metamorphic minerals, such as turbidite and grapevine, were found in the samples analyzed.When the metamorphic temperature reached approximately 200 °C, the following reactions occurred in analcite and quartz: Albite was formed; however, in the mineral profile, the proportions of analcite, quartz, and albite did not have a reciprocal relationship.Under the microscope, there was no evidence of a reaction between analcite and quartz to form albite; hence, the mineral composition of the argillites cannot be explained by metamorphism.The formation of Na − -, Ca − -, and Mg 2+ -rich minerals may be attributable to the alkaline hydrothermal fluids rich in Na + , Ca 2+ , and CO 3 2− gushing out of the lake bottom, mixing with the lake water, and directly crystallizing or forming hydrothermal sedimentary minerals in the early stage of metasomatism.This indicates that there may have been long-term white chimney-type hydrothermal activity in the Early Cretaceous, and the hydrothermal fluids had different properties during different periods.Different types of hydrothermal fluids mixed with cold lake water to form different types of minerals. The geochemical characteristics of hydrothermal sedimentary rocks are quite different from those of normal sedimentary rocks.Geochemical analysis can be used to effectively distinguish the sediment types.Iron−manganese oxide is often used as an important indicator of hydrothermal deposition. 11,23For instance, high Fe and Mg contents are considered typical of hydrothermally formed dolomite. 24The main minerals of the argillite samples in the upper portion of the Bayingebi Formation showed obvious Fe and Mg enrichment.The average proportions of TFe 2 O 3 and MnO were 5.47 and 0.11%, which were higher than the corresponding average contents of the UCC (4.93 and 0.07%, respectively). 
25U and Se elements were abnormally enriched, with U as high as 0.01−0.06%locally and Se contents of 1 × 10 −6 (μg), 8.83 × 10 −8 times higher than its abundance in the crust. 21Se and S have similar properties and are much more enriched in hydrothermal fluids than in lake water.Ternary diagrams of Fe vs Mn vs (Cu + Co + Ni) × 10 11,26 (Figure 6a) and Ni vs Co vs Zn (Figure 6b) 27 are widely used for the discrimination of hydrothermal sediments.On the ternary diagram, all argillite samples in the study area are plotted within the hydrothermal deposition zone, indicating long-term hydrothermal activity during the Early Cretaceous sedimentation of the Yingejing Sag.The collected trace element data from a typical hydrothermal sedimentary area such as Shahejie Formation and Xiagou Formation show a consistent trend of change, contributing to the genesis of hydrothermal sedimentary (Figure 6c). The Si/Al ratios of analcite are characterized by high silica (2.316−2.866,avg.2.545) (Table 2), which is obviously higher than the sedimentary diagenetic type of analcite formed by sedimentary diagenesis and low-grade metamorphism and higher than that of low silicon analcite that is directly crystallized from highly alkaline water.This type has Si/Al ratios that are closer to those of high silicon analcite, which is commonly affected by siliceous volcanic glass and alkaline hydrothermal solution. 12Therefore, analcite may have formed by the direct crystallization of alkaline hydrothermal fluids. 4.2.Tectonic Setting of Hydrothermal Sedimentary Rocks.The Bayingebi Basin entered a stage of intraplate structural tectonic influence during the Mesozoic period and, during the Triassic−Jurassic period, developed a "rift and pull" sub-basin. 28In the Early Cretaceous, the basin was an extensional tectonic setting and a large number of normal faults developed at the edge of the basin, which caused the Yingejing Sag to form a double-breaking ground-type lake basin with sedimentary faults on both sides.At the edge of the basin, alluvial fan deposits dominated by red variegated conglomerates were formed in the lower portion of the Bayingebi Formation.Subsequently, the lake basin expanded and the water flow increased.The climate became warm and humid, forming the upper fan delta to the deep-lake sedimentary facies.In the late Early Cretaceous, influenced by the strike-slip activity of the ATF, large-scale magmatism occurred in Engelwusu and Yingen, and prevalent volcanic activity resulted in a large number of basalts in the Suhongtu Formation in the Bayingebi Basin.The regional multiperiod eruptive rocks are superimposed with sedimentary rocks, and a tectonic inversion occurred in the Late Cretaceous and Paleogene−Quaternary.The basin experienced Triassic-to-Jurassic rifting and the formation of the Lai Basin followed by the comprehensive development stage of the Early Cretaceous Lai Basin, the Late Cretaceous comprehensive sag stage, the Tertiary-to-Quaternary extrusion and uplift, settlement, and local deposition.In short, the distribution pattern of uplift and sag in the basin is strictly controlled by the ATF and its branch faults. The ATF has undergone multiple periods of activity, including a substantial active period in the late Early Cretaceous, from 100 to 120 Ma. 
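Plotting a sample on the discrimination diagrams used above only requires renormalizing the three chosen components. The short sketch below does this for the Fe-Mn-(Cu+Co+Ni)×10 ternary with hypothetical concentrations; it does not reproduce the published field boundaries of such diagrams.

```python
# Sketch: coordinates for the Fe-Mn-(Cu+Co+Ni)x10 discrimination (Bostrom-type) diagram.
# Concentrations are placeholders; real input would come from the trace-element analyses.
def ternary_coords(fe_wt, mn_wt, cu_ppm, co_ppm, ni_ppm):
    """Return fractions (summing to 1) of Fe, Mn and (Cu+Co+Ni)*10, all expressed in wt%."""
    ccn = 10.0 * (cu_ppm + co_ppm + ni_ppm) * 1e-4     # ppm -> wt%, then the x10 of the diagram
    total = fe_wt + mn_wt + ccn
    return fe_wt / total, mn_wt / total, ccn / total

fe, mn, ccn = ternary_coords(fe_wt=3.8, mn_wt=0.09, cu_ppm=25.0, co_ppm=12.0, ni_ppm=30.0)
print(f"Fe {fe:.2f}  Mn {mn:.2f}  (Cu+Co+Ni)x10 {ccn:.2f}")
```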
29 There are many hydrothermal sedimentary rocks distributed along the fault zone (Figure 1b), such as dolomite of the Lower Cretaceous Xiagou Formation in the Jiuxi Basin, 5,8 the Lower Cretaceous Suhongtu Formation and Bayingebi Formation Mudstone in the Hari Sag in the Yinji Basin; 9 analcite dolomitic argillites of the Lower Cretaceous Tengger Formation in the Baiyinchagan Sag; 7 and the Early Cretaceous Bayingebi Formation argillites of the Bayingebi Basin.Across different basins, spanning nearly 1000 km, Early Cretaceous strata exhibit very similar mineral combinations, characterized by dolomite, analcite, and albite.Therefore, it is likely that their genesis is connected via the ATF.Late Cretaceous tectonic activity is the main controlling factor for this type of rock. Sedimentary Model of Hydrothermal Rocks. Based on the analysis of petrology, geochemistry, and tectonic evolution, the deposition of the Bayingebi Formation is controlled by multiphase activity along the ATF.A set of regular hydrothermal sedimentary rocks was formed through the interaction of volcanic eruptions and lake-bottom hydrothermal fluids (Figure 7). The Early Cretaceous Bayingebi Basin is situated in an extensional tectonic setting.Lake water infiltrated along the fault under the influence of gravity and overburden pressure.During infiltration, the fluid undergoes a long-term water−rock reaction to extract Na + , Al 3+ , Si 4+ , Mg 2+ , Fe 2+ , Ca 2+ , and other ions from the basement rock and contributes hot water to the magma chamber.Then, driven by thermal energy and fluid potential energy, hydrothermal fluid is discharged to the lake basin along the fracture.This continuous circulation of underwater infiltration and convection of hydrothermal fluid provides the lake basin with a heat source and ore-forming ions.When the upwelling hydrothermal fluid moved to the volcanic eruption, the difference in physical and chemical conditions near the vent controlled the precipitation of hydrothermal minerals and formed different mineral assemblages.Dolomite, ferric dolomite, and analcite precipitated in the spout to form reticular dolomite; when the hydrothermal fluid carried a large amount of Si 2− , Al 3+ , and Na + into the lake basin and mixed with the lake water near the spout, speckled analcite dolomite formed.Due to the difference in colloidal and ionic properties, different deposits were formed in areas relatively far from the spout.This allowed the sodium-aluminosilicate hydrothermal fluid to form massive dolomitic analcite. 5Ca 2+ , Mg 2+ , Fe 2+ , and other ions in hydrothermal fluid are relatively active and can be transported over long distances to areas relatively far from the spout.Owing to mixing with lake water and the addition of lacustrine sediments, striated dolomitic analcite developed in the areas far from the spout.After the hydrothermal fluid was fully mixed with lake water, the lowtemperature hydrothermal minerals gradually underwent chemical precipitation and evaporation, forming a gypsum layer in the upper argillites. 4.4.Analysis of Host Rocks as a Repository.For argillaceous rocks to be a suitable host for an HLW disposal reservoir, it should meet certain geological conditions.The area of the site should not be less than 10 km 2 , the continuous distribution of the argillaceous rock geological body should not be less than 100 m, the underground extension width of the clay rock should not be less than 2 km, and the depth of the geological body should be in the range 300−1000 m. 
2,30 The dimensions of the upper argillites of the Bayingebi Formation exceed these international selection guidelines.Moreover, the fault activities in the Tamusu area mainly occurred in the Cretaceous and before the Cretaceous, indicating that there has been almost no obvious fault activity since the Quaternary. 20ertainly, intensive investigations and long-term stability monitoring should be carried out in the Tamusu preselected area.The argillites have a mineral composition that is quite different from that of the French Callovo−Oxfordian clay rock, and the relative percentages of analcite and dolomite are significantly higher than in the Callovo−Oxfordian clay rock.Analcite has a strong "molecular sieve" and ion-exchange functions.Na + , K + , and Ca 2+ in the analcite crystal lattice are not tightly bound to the lattice atom, and cations are easily exchanged with the surrounding environment.Because of its unique geological conditions and mineral composition, the upper portion of the Bayingebi Formation argillites has certain advantages as an HLW repository.The analcite may play a role with respect to radionuclide retention, as discussed in the case of the potential Yucca Mountain host rock 31 and for low-and intermediate-level short-lived radioactive waste in Belgium. 32In view of functions of analcime channels and molecular sieves, mudstones bearing analcite have advantages over typical clay rocks (host rocks) in terms of radionuclide adsorption, water content, thermal stability, and permeability. 33The primary pores of mudstone filled with dolomite and analcime cement also highlight the advantages of the uniaxial compressive strength of mudstone, which contribute to engineering construction more than typical clay rocks. 34In recent studies, it was found that aqueous Se (IV) can be reduced to Se (0) 35 and that aqueous U (VI) could be partially reduced to U (IV) and/or U (V)-containing precipitates (U 3 O 8 , U 4 O 9 , etc.) by these Tamusu claystones. 36Additionally, the advantages and disadvantages of the Bayingebi Formation argillites compared to other potential host rocks remain to be assessed carefully in the future. CONCLUSIONS The argillites in the upper portion of the Bayingebi Formation exhibit five typical kinds of structures containing higher contents of dolomite and analcite, which shows enormous potential of the host rock for HLW disposal purpose. The trace elements in the Bayingebi Formation argillites exhibit a multicomponent combination.The abundance of Mo, Sb, Zn, As, Sr, and Se in the deep-source gas−liquid trace element combination is relatively high; ΣLREE > ΣHREE, indicating a gentle right-angled shape.All argillite samples in the study area are located within the hydrothermal deposition zone on the ternary diagram, indicating the long-term hydrothermal activity of the Yingejing Sag during Early Cretaceous sedimentation. The tectonic activity of the ATF, from 100 to 120 Ma, is the main controlling factor for the formation of lacustrine hydrothermal sedimentary rocks in the Yingejing Sag in the Bayingebi Basin. Figure 1 . Figure 1.Regional geological map in the Yingejing Sag of Bayingebi Basin and ATF.Adapted with permission from ref 12 and 13.(a) Regional geological map of Yingejing Sag in the Bayingebi Basin.(b) Existing research areas of lacustrine hydrothermal sedimentation.(c, d) Tectonic setting of ATF.The blue dashed box of panel (d) denotes the location of panel (a).Copyright 2020 Elsevier.Copyright 2023 American Geophysical Union. Figure 5 . Figure 5. 
Characteristics of trace and rare-earth elements: (a) concentration coefficients of trace elements and (b) rare-earth element distribution patterns.

Figure 6. Element features for hydrothermal sediments: (a) Ni vs Co vs Zn ternary diagram and (b) Fe vs Mn vs (Cu+Co+Ni) × 10 ternary diagram. (c) Trace element comparisons from typical hydrothermal sedimentary areas; HD, hot water deposits; HN, water deposits; RH, Red Sea hydrothermal deposits; ED, Eastern Pacific hydrothermal deposits of metallic minerals; FHC, Franciscan hydrothermally deposited siliceous rocks.

Figure 7. Sedimentary model of lacustrine hydrothermal rocks in the Yingejing Sag, Bayingebi Basin. Adapted with permission from ref 19. Copyright 2023 Springer.

Table 1. Analysis Results and Characteristic Parameters of Geochemical Elements of Argillites in the Upper Bayingebi Formation a

Table 2. EPMA Analysis of Analcite and Dolomite in Hydrothermal-Sedimentary Rock a
a Note: EPMA tests completed in ECUT by JXA-8230.

Long Xiang − State Key Laboratory of Nuclear Resources and Environment, East China University of Technology, Nanchang 330013, China; School of Earth Sciences, East China University of Technology, Nanchang 330013, China; orcid.org/0000-0003-1097-598X; Email: xl_son00126@foxmail.com
2024-03-22T15:37:14.199Z
2024-03-19T00:00:00.000
{ "year": 2024, "sha1": "1ffd306f4bd1b40b16ff8abdc5d3a4a4ea164ad2", "oa_license": "CCBYNCND", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.3c09486", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ff71b3d9853c54e3454b135ae01aa9a6fe413e04", "s2fieldsofstudy": [ "Geology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
8066331
pes2o/s2orc
v3-fos-license
A Minimal tight-binding model for ferromagnetic canted bilayer manganites Half-metallicity in materials has been a subject of extensive research due to its potential for applications in spintronics. Ferromagnetic manganites have been seen as a good candidate, and aside from a small minority-spin pocket observed in La2−2xSr1+2xMn2O7 (x = 0.38), transport measurements show that ferromagnetic manganites essentially behave like half metals. Here we develop robust tight-binding models to describe the electronic band structure of the majority as well as minority spin states of ferromagnetic, spin-canted antiferromagnetic, and fully antiferromagnetic bilayer manganites. Both the bilayer coupling between the MnO2 planes and the mixing of the |x2 − y2 > and |3z2 − r2 > Mn 3d orbitals play an important role in the subtle behavior of the bilayer splitting. Effects of kz dispersion are included. The metallic conductivity in the FM phase can be explained within the double-exchange (DE) mechanism 15 , where e g electrons hop between the Mn sites through hybridization with the oxygen 2p orbitals. While the DE mechanism appears to capture the tendency towards ferromagnetism, the oxygen orbitals must be explicitly included to explain correctly the metal insulator transition at the Curie temperature 16 . Since the DFT band structure is found to be that of a nearly halfmetallic ferromagnet with a small minority-spin FS (Fermi surface), most studies in the literature focus only on the majority bands described within simple tight-binding (TB) models 17 , neglecting the minority bands. Here, we present a more realistic yet transparent TB model which incorporates the bonding and anti-bonding jx 2 2 y 2 . as well as the j3z 2 2 r 2 . orbitals, including the minority states as observed via ARPES in the FM 13 and AFM 11 states. Recall that in the cuprates there is strong copper-oxygen hybridization, but if one is mainly interested in the antibonding band near the Fermi level, one can study an effective, copper-only model. In this spirit, we develop an effective Mn-only model here, which includes the minority bands in order to provide a precise description of the minority electrons in determining the spin polarization at the Fermi level, a key ingredient needed for the design of spintronics devices. We delineate how our model Hamiltonian gives insight into the delicate interplay between the effects of orbital mixing and nesting features, which impact the static susceptibility and drive exotic phase transitions 18 . Our approach can also allow a precise determination of the occupancy of the minority t 2g electrons through an analysis of the experimental FSs. Results Band character near E F . In the DFT-based band structure, E F cuts through the majority jx 2 2 y 2 . and j3z 2 2 r 2 . bands, while there are only small electron pockets in the minority jxy . bands. Coupling between the two MnO layers in the FM state produces bonding and antibonding bands, which are directly observed in experiments 19 . Accordingly, our fitting procedure is based on a combination of four majority and two minority bands in order to accurately capture the near-E F physics of the system. For the majority e g bands, the strength of bilayer coupling for jx 2 2 y 2 . orbitals is much weaker than that for j3z 2 2 r 2 . orbitals because the lobes of jx 2 2 y 2 . orbitals lie in-plane, while those of j3z 2 2 r 2 . orbitals point out-of-the-plane. 
The bilayer coupling of various orbitals without hybridization can be seen along the C(0, 0)-X(p, p) line in Figure 1, where the two jx 2 2 y 2 . bands are nearly degenerate and the two j3z 2 2 r 2 . bands are split with a separation of <1.1 eV. Away from the nodal direction, the jx 2 2 y 2 . and j3z 2 2 r 2 . orbitals hybridize, and the splitting of the related bands becomes more complex. Near the M(p, 0) point, the two lowest bands are primarily of jx 2 2 y 2 . character. The mixing with j3z 2 2 r 2 . increases the splitting to <250 meV. Regarding the t 2g minority bands, since the lobes of jxy . orbitals lie in-plane, strength of the bilayer coupling is small. Unlike jx 2 2 y 2 ., the lobes of jxy . are rotated 45u from the MnO direction, so that the hybridization with other bands and the resulting splittings reach their maximum value at the X-point. Tight-binding model: majority spin. Since there is a large exchange splitting, we discuss the majority and minority bands separately. This section presents the TB model for the majority spins, obtained by fitting to the first principles band structure. The four bands near E F are predominantly associated with the eg orbitals of Mn 3d, jx 2 2 y 2 . and j3z 2 2 r 2 ., so that the minimal TB model involves four orbitals per primitive unit cell. In this connection, it is useful to proceed in steps, and accordingly, we first discuss a 2-dimensional (2D) model with bilayer splitting, followed by the inclusion of effects of k z -dispersion. For the 2D model, the relevant symmetric (1) and antisymmetric (2) combinations of the orbitals decouple, and the 4 3 4 Hamiltonian reduces to two 2 3 2 Hamiltonians, H 6 , where the basis functions are y 16 and y 26 with the subscripts 1 and 2 referring to the jx 2 2 y 2 . and j3z 2 2 r 2 . orbitals, respectively. The Hamiltonian matrices are c i (aa) 5 cos(k i aa), i 5 x, y, and a is an integer. t ij are the hopping parameters where t 11 is the hopping between the jx 2 2 y 2 . orbitals, t 22 for the j3z 2 2 r 2 . orbitals, and t 12 between the jx 2 2 y 2 . and j3z 2 2 r 2 . orbitals. Here the nearest neighbor hopping is denoted by t ij , the next nearest hopping by t' ij , and the higher order hoppings are denoted by a larger number of primes as superscripts. Note that the two matrices in Eq. 1 are identical except for the last term on the main diagonal, differing only in the sign of the bilayer hopping terms t bi1 and H bi2~tbi2 zt' bi2 c x a ð Þzc y a ð Þ À Á 2. The chemical potential m is obtained via a least squares fit to the first-principles GGA bands. If the hopping parameters are deduced within the Slater-Koster model 20 , one would obtain t 11 5 t 22 5 t 12 5 t bi2 , and t' 11~t ' 22 . However, we found an improved fit by letting the parameters deviate from these constraints. A number of additional hopping terms were tested, but found to give negligible improvements and discarded. A least squares minimization program was used to obtain the optimized TB parameters, which are listed in Table 1 (2D model). Values of TB parameters in Table 1 are consistent with previous results on cubic manganites 17 . It is reasonable that the four nearest neighbor parameters (t 11 , t 22 , t 12 , and t bi2 ) are the largest in absolute magnitude and are the most important fitting parameters. Sign differences between t' 11 , t' 22 and t' 12 control the presence of a closed FS related to j3z 2 2 r 2 . bands and an open FS from jx 2 2 y 2 . bands, consistent with earlier studies 18 . 
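Since the explicit 2 × 2 matrices of Eq. (1) are not reproduced above, the sketch below assembles a generic bilayer two-orbital (|x^2−y^2>, |3z^2−r^2>) tight-binding block of the Slater-Koster type purely for illustration. The dispersion forms, the way the bilayer terms enter, and all parameter values are assumptions; they would need to be replaced by the fitted H± of Table 1 to reproduce Figure 2.

```python
# Sketch of a bilayer two-orbital e_g tight-binding model, basis (|x^2-y^2>, |3z^2-r^2>).
# Dispersion forms and parameter values are illustrative assumptions, not the paper's Eq. (1).
import numpy as np

t11, t22, t12 = 0.50, 0.15, 0.25     # eV: in-plane hoppings (hypothetical)
tbi1, Hbi2    = 0.02, 0.55           # eV: bilayer couplings of the two orbitals (hypothetical)
Ez, mu        = -0.06, 0.60          # eV: crystal-field splitting and chemical potential

def h_block(kx, ky, sign):
    """2x2 block for the bonding (+1) / antibonding (-1) combination of the two MnO2 layers."""
    cx, cy = np.cos(kx), np.cos(ky)
    e1  = -t11 * (cx + cy)                   # |x^2-y^2> dispersion (assumed form)
    e2  = -t22 * (cx + cy) + Ez              # |3z^2-r^2> dispersion (assumed form)
    h12 =  t12 * (cx - cy)                   # vanishes along Gamma-X (kx = ky), maximal at M
    return np.array([[e1 - sign * tbi1, h12],
                     [h12,              e2 - sign * Hbi2]]) - mu * np.eye(2)

def bands(kx, ky):
    """The four majority-spin bands at (kx, ky)."""
    return np.sort(np.concatenate([np.linalg.eigvalsh(h_block(kx, ky, +1)),
                                   np.linalg.eigvalsh(h_block(kx, ky, -1))]))

for label, (kx, ky) in {"Gamma": (0.0, 0.0), "X": (np.pi, np.pi), "M": (np.pi, 0.0)}.items():
    print(label, np.round(bands(kx, ky), 3))
```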
TB parameters with small magnitudes (t0, and t90) involve overlap between more distant neighbors. We emphasize that even though t0 and t90 are small, they contribute significantly to the overall goodness of the fit. A small value of t bi1 reflects weak intra-layer interactions between the jx 2 2 y 2 . orbitals due to the orientation of these orbitals. Since the magnitude of the crystal field splitting parameter E z is smaller than that of t 12 , the hybridization of jx 2 2 y 2 . and j3z 2 2 r 2 . is significant when H 12 is nonzero. Figure 2 compares the model TB bands (open circles) with the corresponding DFT results (solid dots). While the full 2D model is considered in Figure 2a, we also show in Figure 2(b), results of a much simpler TB model that employs only two parameters (E z and t) with t 11 5 t 22 5 t 12 5 t bi2 . For the simple model of Figure 2b, the parameter values (t 5 20.431 eV, E z 5 20.057 eV, and m 5 0.616 eV) were obtained via an optimal fit to the first-principles bands. It is obvious that the 2D TB model results shown in Figure 2a provide a vastly improved fit compared to the simple two parameter model in Figure 2b. The agreement in Figure 2a between the TB model and the first principles calculations is overall very good and the TB model correctly reproduces salient features of the band structure. At C, the two lowest energy bands are found to be nearly degenerate in both the TB model and the first principles calculations, with a splitting of 22t bi1 5 0.044 eV in the TB model. Following these two bands along C 2 X, one finds that the two larger dispersing bands with jx 2 2 y 2 . character have small bilayer splitting due to the small value of t bi1 . The two other bands in the same direction are of j3z 2 2 r 2 . character, and exhibit a larger bilayer splitting of 22H bi2 5 1.09 eV. Because H bi2 contains the next-nearest-neighbor hopping terms, the bilayer splitting of j3z 2 2 r 2 . bands develops an in-plane k-dependence. As a result, dispersion of the antibonding band is larger than that of the bonding band. Along the C 2 M and X 2 M directions, H 12 is non-zero, leading to the mixing of jx 2 2 y 2 . and j3z 2 2 r 2 . bands. At the M-point, H 12 reaches its maximum value, yielding a complex bilayer splitting of the Van Hove singularities. In other words, the bare bilayer splitting of jx 2 2 y 2 . is <50 meV, but hybridization with j3z 2 2 r 2 . enhances this splitting to <290 meV near M in the TB model as follows: Figure 3 compares the 2D-TB (open circles) and first-principles (dots) FSs. Agreement is seen to be quite good. The three pieces of FS are labeled by '1', '2' and '3'. The larger squarish pocket '1' centered at X is a mix of jx 2 2 y 2 . and j3z 2 2 r 2 ., the smaller squarish pocket '3' around the C-point is primarily of j3z 2 2 r 2 . character, and the rounded FS '2' lying between '1' and '3' centered at X is mostly of jx 2 2 y 2 . character. For comparison Figure 3b shows the FS from the simple two parameter TB model of Figure 2b, and we see again that this simple model gives a poor representation of the actual FS. Recall that in the cuprates, there is a small but finite k z -dispersion [21][22][23][24] , which is also the case in the manganites. Since the j3z 2 2 r 2 . orbitals have lobes pointing out of the plane, the interlayer hoppings are associated with j3z 2 2 r 2 . bands. In the 3D model, the 4 3 4 Hamiltonian now cannot be reduced to two 2 3 2 Hamiltonians because of the body-centered crystal structure. The basis functions are jx 2 2 y 2 . 
and j3z 2 2 r 2 . for the upper and lower MnO 2 layers. By including interlayer hopping t z between j3z 2 2 r 2 . orbitals and the intra-layer hopping t' z for j3z 2 2 r 2 . orbitals, we obtain the Hamiltonian matrix: where c z (c) 5 cos(k z c) and c is the lattice constant in the z-direction, which is approximately 5 times larger than the in-plane lattice constant a. The parameters obtained by fitting to the DFT bands are listed in Table 1 (3D model). Compared to the 2D model, the bilayer hopping parameters t bi1 , t bi2 and t' bi2 are significantly modified. t 22 and E z change by about 30 meV while other terms undergo only slight modifications. Plausible values of parameters are retained in the 3D model. The effect of k z -dispersion in the 3D model can be seen by comparing the FSs at k z c 5 0 and k z c 5 2p as shown in Figure 4. While FS '2' with mostly jx 2 2 y 2 . character remains unchanged, the FS piece '3' with primarily j3z 2 2 r 2 . character changes significantly. '3' is squarish at k z c 5 0 but becomes smaller and rounded at k z c 5 2p ('39'). Although '1' contains a significant j3z 2 2 r 2 . contribution, the effect of k z -dispersion on this FS piece is much smaller than on '3'. '1' and '19' match when k x a 5 p or k y a 5 p because the interlayer hopping terms t z and t' z have zero contribution due to the and '19' almost match when k x a 5 k y a because t bi1 is almost zero. Thus '1' and '19' can differ only away from the high symmetry kpoints and this piece of the FS is cylinder-like in 3D. Tight-binding model: minority spin. Due to the large exchange splitting, we only need to consider two bands in the case of minority spins, which are associated with the t 2g jxy . orbitals of the upper and lower MnO 2 layers. The 2 3 2 model Hamiltonian given below is diagonal with a bilayer splitting of D between the upper and lower jxy . bands. where Table 2 lists the parameters obtained from fitting first-principles band structure. Figure 5 compares the parameterized TB bands (open circles) with the first-principles GGA bands (solid dots). The minority spin FSs are overlayed in Figure 3 as triangles, and form two small pockets around C, as observed also in the ARPES experiments 13 . Doping and magnetic structure. We now turn to discuss how the low-energy electronic structure of the rich variety of magnetic phases displayed by LSMO is captured by our 2D and 3D TB models. Kubota et al. 10 10 . In the 2D and 3D models discussed above, for doping greater than x 5 0.38, the value of E F was found by assuming a rigid band type approximation 25 where the total number of occupied electrons N is given by N 5 2(1 2 x) at doping x. Over this doping range the exchange splitting from GGA was taken to be constant since the spins are ferromagnetically aligned in planes and the in-plane lattice parameters are not sensitive to doping 10 . We then invoke the argument of Anderson et al. 26 that the transfer integral between any two ions depends on cos(h/2) where h is the angle between their spins on neighboring layers as the magnetic state changes from FM to AFM. We thus replaced the bilayer TB parameters H bi2 , t bi1 and D by cos(h cant /2)H bi2 , cos(h cant /2)t bi1 and cos(h cant / 2)D, and for the 3D model t z was also replaced with cos(h cant /2)t z , using the experimental values of h cant at the corresponding dopings given by Kubota et al. 10 . Table 3 atom for the doping range 0.38-0.59, as obtained within our 2D and 3D models. 
Table 4 provides the same quantities over this doping range only in the FM state appropriate for saturating magnetic fields. [The doping range used for the calculations in Tables 3 and 4 does not include the experimentally observed anomalous FS behavior 27.] The magnetic moment m_B per Mn atom, including the contribution of the three occupied t_2g orbitals, is given by m_B = (1 − x) − 2Δn + 3, and its values are consistent with magnetic Compton experiments 28,29. The number of minority electrons, Δn, found in recent ARPES experiments 13 is also in good agreement with the corresponding values in Table 3. We find that, in comparison to the GGA, the LSDA underestimates the exchange splitting by 20% and thus overestimates the number of minority electrons. On the other hand, the TB parameters based on the LSDA and GGA band structures differ by less than 1%.

Figure 6a compares the experimental FS for x = 0.38 (FM) 13 with the corresponding 2D TB model predictions. Good agreement is seen between theory and experiment for the FS pieces related to the d_{3z^2−r^2} band (red line), the anti-bonding d_{x^2−y^2} band (green line), and the minority pockets (pink and black lines). The bonding hole-pocket (blue) is invisible at this photon energy due to matrix element effects 13,19,21,22. In order to account for the coexistence of metallic and nonmetallic regions for x ≤ 0.38, which has been interpreted as arising from a phase separation into hole-rich and hole-poor regions 27, we found it necessary to adjust the doping of the theoretical FS at x = 0.38 to an effective doping of x = 0.43. Figure 6b shows the x = 0.59 experimental AFM FS 11, along with the corresponding 2D TB model results. Here also we find good agreement for the bonding and anti-bonding d_{x^2−y^2} bands (blue and green lines). The same level of agreement between theory and experiment is also found for the 3D model, which is to be expected since the values in Tables 3 and 4 for the 2D and 3D models are very similar.

Discussion

The double-layered manganites, La2−2xSr1+2xMn2O7, have attracted much attention in recent years as model systems that present a wide range of transport and magnetic properties as a function of temperature, doping and magnetic field. In the FM phase at x = 0.38, the majority t_2g electrons of Mn lie well below the Fermi level and are thus quite inert. Therefore, key to the understanding of the manganites is the behavior of the Mn magnetic electrons with e_g character (|x^2−y^2> and |3z^2−r^2>). The results of magnetic Compton experiments 28 reveal that the FM order weakens when the occupation of the |3z^2−r^2> majority state decreases. For spintronics applications, it is important to note that the Fermi level in the FM state … In order to understand this interesting phenomenology, we have developed a TB model encompassing both the FM and AFM phases, which correctly captures the low-energy electronic structure of LSMO using a minimal basis set. The complex bilayer splitting of the majority spins is well reproduced. In particular, the mixing of the |x^2−y^2> and |3z^2−r^2> orbital degrees of freedom is found to be strong and momentum dependent. With the inclusion of k_z dispersion, the 3D FS, including its various pieces, is reproduced in substantial detail. Moreover, our model accurately describes the delicate minority t_2g FS pocket.
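The doping and magnetic-structure dependence described above enters the model through elementary arithmetic: an Anderson-type cos(θ_cant/2) rescaling of the inter-plane couplings, a rigid-band filling of N = 2(1 − x) electrons, and the moment bookkeeping m_B = (1 − x) − 2Δn + 3. A minimal sketch follows; the parameter values and canting angles are placeholders, not the experimental values of Kubota et al.

```python
# Sketch: cos(theta/2) rescaling of inter-plane hoppings, rigid-band filling, and Mn moment.
# All numerical values are illustrative placeholders.
import numpy as np

def rescale_interplane(params, theta_cant_deg):
    """Multiply every inter-plane coupling (and the minority bilayer splitting) by cos(theta/2)."""
    c = np.cos(np.radians(theta_cant_deg) / 2.0)
    keys = ("t_bi1", "H_bi2", "t_z", "Delta")
    return {k: (c * v if k in keys else v) for k, v in params.items()}

params = {"t_bi1": 0.02, "H_bi2": 0.55, "t_z": 0.05, "Delta": 0.04, "t_11": 0.50}  # eV, hypothetical
for theta in (0.0, 60.0, 180.0):          # FM, canted, and fully AFM stacking of the two planes
    print(theta, {k: round(v, 3) for k, v in rescale_interplane(params, theta).items()})

def fermi_level(band_energies, x):
    """Rigid-band E_F from a flat array of majority e_g eigenvalues sampled over the zone.

    N = 2(1 - x) electrons per bilayer are placed in the four majority e_g bands;
    the small minority occupancy is neglected in this sketch."""
    e = np.sort(np.asarray(band_energies).ravel())
    n_occ = int(round(2.0 * (1.0 - x) / 4.0 * e.size))
    return e[max(n_occ - 1, 0)]

def moment_per_mn(x, dn):
    """m_B = (1 - x) - 2*dn + 3, with dn the number of minority electrons per Mn."""
    return (1.0 - x) - 2.0 * dn + 3.0

print(moment_per_mn(0.38, 0.03))          # hypothetical dn; the t_2g core contributes the "+3"
```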
Since the e_g mixing has a pronounced effect on the shape of the FS, an accurate model allowing a precise parameterization of the band structure is crucially important for modeling transport properties. Such a model would also provide a springboard for further theoretical work on strongly correlated electron systems, including Monte Carlo simulations to uncover the exciting many-body physics of the manganites 30,31. Moreover, a precise description of the minority t_2g band is needed for the design of efficient spintronics devices. In this way, the TB models discussed in this study would also help develop the applications potential of the manganites.

Methods

The first-principles calculations were done using the WIEN2K code 32,33. The electronic structure was calculated within the framework of density-functional theory 34,35 using the linearized augmented plane-wave (LAPW) basis 36. Exchange-correlation effects were treated using the generalized gradient approximation (GGA) 37. A rigid-band model was invoked for treating doping effects on the electronic structure along the lines of Ref. 25, but we expect our results to be insensitive to a more realistic treatment of doping effects using various approaches [38][39][40][41]. We used muffin-tin radii (R_MT) of 1.80 Bohr for both O and Mn, and 2.5 Bohr for Sr and La. The integrals over the Brillouin zone were performed using a tetrahedron method with a uniform 14 × … k-mesh.
2014-12-15T19:20:19.000Z
2014-12-15T00:00:00.000
{ "year": 2014, "sha1": "f2ef3a2a06631218f04e087b4c508fa4f7ed7d3a", "oa_license": "CCBYNCSA", "oa_url": "https://www.nature.com/articles/srep07512.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "432fbb6fbecf172b291fc3f30f88a7a21e29f786", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science", "Medicine" ] }
119143179
pes2o/s2orc
v3-fos-license
Partial Dehn twists of free groups relative to local Dehn twists - a dichotomy A criterion for quadratic or higher growth of group automorphisms is established which are represented by graph-of-groups automorphisms with certain well specified properties. As a consequence, it is derived (using results of a previous paper of the author) that every partial Dehn twist automorphism of $\FN$ relative to local Dehn twist automorphisms is either an honest Dehn twist automorphism, or else has quadratic growth. Introduction Dehn twist are well known from surface homeomorphisms: Any set C of pairwise disjoint essential closed curves c i on a surface S, together with a set of twist exponents n i ∈ Z, defines a homeomorphism h : S → S through "twisting" S along each c i precisely n i times. The set C defines canonically a dual graph-of-groups G with isomorphism π 1 G ∼ = π 1 S, where the vertex groups of G are the fundamental groups of the components of S C and the edge groups are isomorphic to Z. The automorphism h * : π 1 S → π 1 S induced by the multi-Dehn-twist h can be described algebraically through a graph-of-groups isomorphism H : G → G. This natural correspondence between geometric and algebraic data has given rise to a more general definition of a Dehn twist automorphism ϕ of a group G, via a graph-of-groups G equipped with an isomorphism π 1 G ∼ = G and a graph-of-groups isomorphism H : G → G, which satisfy extra conditions that mimic the above described surface situation, so that one gets H * = ϕ (up to inner automorphisms). A special case, which is useful in many circumstances, is given by requiring that H acts trivially on the underlying graph Γ(G) and on each of the vertex groups G v of G, and that furthermore all edge groups of G are trivial. In this case the automorphism H * : π 1 G → π 1 G is determined by the family of correction terms δ e ∈ G τ (e) for any edge e of G, where τ (e) denotes the terminal vertex of e. In the paper [10] the more general case of a partial Dehn twist H : G → G relative to a subset V of vertices of G has been investigated, which differs from the above described situation in that for any v ∈ V the induced vertex group automorphism H v : G v → G v may not be the identity. If for each v ∈ V the map H v : G v → G v is itself a Dehn twist automorphism, then H : G → G is called a partial Dehn twist relative to a family of local Dehn twists. It is shown in [10] that in this case H can be blown-up to a refined graph-of-groups isomorphism which is a Dehn twist that incorporates both, H and the family of all H v , provided that the following criterion is satisfied: Criterion: For every edge e of H with endpoint v ∈ V the correction term δ e is H v -zero. Here for any graph-of-groups automorphism H : G → G, the associated path group Π(G), and any vertex v of G an element g ∈ π 1 (G, v) ⊂ Π(G) is H-zero if and only if there exists an element h ∈ Π(G) such that h −1 gH * v (h) has G-length 0. The main result of this paper is to show that this sufficient criterion is also necessary. In fact, we show: Theorem 1.1. Let H : G → G be a partial Dehn twist relative to a subset V of vertices of G. Assume for some v ∈ V that the vertex group G v is free, and that H v : G v → G v is a Dehn twist automorphisms. If there is an edge e of G with correction term δ e ∈ G v that is not H v -zero, then the automorphism H * : π 1 G → π 1 G has at least quadratic growth. 
Since Dehn twist automorphisms are known to have linear growth, this shows that H * is not conjugate to any Dehn twist automorphism. In particular, H can indeed not be blown up via the local Dehn twists H v to obtain a global Dehn twist of π 1 G. By combining this theorem with the main result of [10], we obtain: Corollary 1.2. Let ϕ ∈ Out(F n ) be represented by a partial Dehn twist relative to a family of local Dehn twists. Then either ϕ is itself a Dehn twist automorphism, or else ϕ has at least quadratic growth. The proof of this corollary is algorithmic, i.e. it can be effectively decided which alternative of the stated dichotomy holds. This is a crucial ingredient in the author's work [11], where an algorithm is given that decides whether a polynomial growth automorphism of a free group F n is, up to passing to a power, induced by a surface homeomorphism. It is also the starting point of a more detailed analysis of the growth of conjugacy classes for polynomially growing automorphisms of F n , see [11]. Acknowledgements. This paper is part of my PhD thesis, and I would like to thank sincerely my advisors, Arnaud Hilion and Martin Lustig, for their advice and encouragement. Graphs-of-groups and their isomorphsms The purpose of this and the following section is to briefly recall some basic knowledge and to establish some preliminary lemmas about graph-of-groups, Dehn twists on graph-of-groups, efficient Dehn twist as well as the notion of H-conjucation, which is introduced in [10]. Most of our notations are taken from [3]; we refer the readers to [8], [7] and [1] for more detailed informations and discussions. Throughout this paper, we refer to a graph as a finite, non-empty, connected graph in the sense of Serre (cf. [8]). For a graph Γ, we denote by V (Γ), E(Γ) its vertex set and edge set respectively. For an edge e ∈ E(Γ), we deonote by τ (e) its terminal vertex and τ (e) its initial vertex. The inverse of an edge e is denoted by e. Notice in particular that our graph Γ is non-oriented while one can always choose an orientation where: (1) Γ is a graph, called the underlying graph; (2) each G v is a group, called the vertex group of v; (3) each G e is a group, called the edge group of e, and we require G e = G e for every e ∈ E(Γ); (4) for each e ∈ E(Γ), the map f e : G e → G τ (e) is an injective edge homomorphism. Unless otherwise stated, in this paper we will always assume that all vertex and all edge groups of any graph-of-groups G are finitely generated. Given a graph-of-groups G, we usually denote by Γ(G) the graph underlying it. The vertex set of Γ(G) is denoted by V (G) while the edge set is denoted by E(G). Definition 2.2. The word group W (G) of a graph-of-groups G is the free product of vertex groups and the free group generated by stable letters The path group (sometimes also called Bass group) of G is defined by Π(G) = W (G)/R, where R is the normal subgroup subjects to the following relations: ⋄ t e = t −1 e , for every e ∈ E(Γ); ⋄ f e (g) = t e f e (g)t −1 e , for every e ∈ E(Γ) and every g ∈ G e . Remark 2.3. A word w ∈ W (G) can always be written in the form w = r 0 t 1 r 1 ...r q−1 t q r q (q ≥ 0), where each t i ∈ F ({t e ; e ∈ E(Γ)}) stands for the stable letter of the edge e i and each r i ∈ * (G v ) v∈V (Γ) . The sequence (t 1 , t 2 , ..., t q ) is called the path type of w, the number q is called the path length of w. In this case, we say that e 1 e 2 ...e q is the path underlying w. 
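Purely as an illustration of the bookkeeping in Remark 2.3 (and not as an implementation of the Bass-Serre machinery), a word w = r_0 t_1 r_1 ... t_q r_q can be stored as an alternating list of opaque vertex-group letters and stable letters; the graph, labels and words below are hypothetical, and edge-group data is not modelled.

```python
# Sketch: storing a word w = r0 t_{e1} r1 ... t_{eq} rq and reading off its path type / length.
# Vertex-group elements are opaque (vertex, label) pairs; edge groups are not modelled.
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    name: str      # the stable letter t_e is identified with the edge name
    init: str      # initial vertex of e
    term: str      # terminal vertex tau(e)

# A word is stored as (r0, [(e1, r1), (e2, r2), ..., (eq, rq)]).
def path_type(word):
    _, steps = word
    return tuple(edge.name for edge, _ in steps)

def path_length(word):
    _, steps = word
    return len(steps)

e = Edge("t_e", "v", "w")
f = Edge("t_f", "w", "v")
w = (("v", "a"), [(e, ("w", "b")), (f, ("v", "c"))])   # r0 = a, then t_e, b, t_f, c

print(path_type(w))     # ('t_e', 't_f')
print(path_length(w))   # 2  (= number of stable letters, i.e. edges crossed)
```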
Two path types (t 1 , t 2 , ..., t q ) and (t ′ 1 , t ′ 2 , ..., t ′ s ) are said to be same if and only if q = s and t i = t ′ i for each 1 ≤ i ≤ q. Definition 2.4. Let w ∈ W (G) be a word of the form w = r 0 t 1 r 1 ...r q−1 t q r q . The word w is said to be connected if r 0 ∈ G τ (e 1 ) , r q ∈ G τ (eq) , and τ (e i ) = τ (e i+1 ), r i ∈ G τ (e i ) , for i = 1, 2, ..., q − 1. Moreover, if w is connected and τ (e q ) = τ (e 1 ), we say that w is a closed connected word issued at the vertex τ (e q ). Moreover the word w is said to be cyclically reduced if it is reduced and if q > 0 and t 1 = t −1 q , then r q r 0 ∈ f eq (G eq ). We recall the following facts. Proposition 2.6. For any graph-of-groups G, the following holds: (1) Every non-trivial element of Π(G) can be represented as a reduced word. (2) Every reduced word is a non-trivial element in Π(G). (3) If w 1 , w 2 ∈ W (G) are two reduced words representing the same element in Π(G), then w 1 and w 2 are of the same path type. In particular, w 2 is connected if and only if w 1 is connected. Definition 2.7. (fundamental groups) 1. Fundamental groups based at v 0 For any v 0 ∈ V (Γ), the fundamental group based at v 0 , denoted by π 1 (G, v 0 ), consists of the elements in Π(G) that are closed connected words issued at v 0 . For a vertex w 0 ∈ V (Γ) different from v 0 , we have π 1 (G, v 0 ) ∼ = π 1 (G, w 0 ). In fact, let W ∈ Π(G) be a connected word with underlying path from v 0 to w 0 . The restriction of ad W : Π(G) → Π(G) to π 1 (G, w 0 ) induces an isomorphism from π 1 (G, w 0 ) to π 1 (G, v 0 ). Sometimes we write π 1 (G) when the choice of basepoint doesn't make a difference. 2. Fundamental groups at a maximal tree T 0 The fundamental group at T 0 , denoted by π 1 (G, T 0 ), is generated by the groups G v , for all v ∈ V (Γ), and the elements t e , for all e ∈ E(Γ), subjects to the relations: ⋄ t −1 e = t e , t e f e (g)t −1 e = f e (g), for e ∈ E(Γ), g ∈ G e ; ⋄ t e = 1, for e ∈ E(T 0 ). By defintion we have immediately It's shown in the book of Serre [8] that the above two definitions of fundamental groups are equivalent. It follows immediately that, for a graph-of-groups G with trivial edge groups, the product * (G v ) v∈V (G) is free and forms a free factor of π 1 (G), moreover the disjoint union of basis of each vertex group v∈V (G) B v is a subset of the basis of π 1 (G). Definition 2.9. [Graph-of-groups Isomorphisms] Let G 1 , G 2 be two graphs of groups. Denote Γ 1 = Γ(G 1 ) and Γ 2 = Γ(G 2 ). An isomorphism H : G 1 → G 2 is a tuple of the form is a group isomorphism, for any e ∈ E(Γ 1 ); (4) for every e ∈ E(Γ 1 ), the correction term δ(e) ∈ G τ (H Γ(e) ) is an element such that Remark 2.10. A graph-of-groups isomorphism H : G 1 → G 2 induces an isomorphism H * : Π(G 1 ) → Π(G 2 ) defined on the generators by: It's easy to verify by computation that H * preserves the relations t e t e = 1 for any e ∈ E(G) and f e (g) = t e f e (g)t −1 e , for any e ∈ E(G) and g ∈ G e . Furthermore, the restriction of Remark 2.11. As in [3], we define the outer isomorphism induced by a group isomorphism f : G 1 → G 2 as the equivalence class Hence H * v induces an outer isomorphism H * v : π 1 (G 1 , v) → π 1 (G, H Γ (v)). 
Observe that when choosing a different vertex v 1 as basepoint, we may choose a word W ∈ Π(G 1 ) with underlying path from v 1 to v to obtain the following commutative diagram: By Lemma 2.2 and Lemma 3.10 in [3], H * v determines an outer isomorphism H * v 1 : In this sense, we observe that H : G 1 → G 2 induces an outer isomorphism H : π 1 (G 1 ) → π 1 (G 2 ) which doesn't depend on the choice of basepoint. H-conjugation. We recall in this subsection some basic definitions and properties about the notion of H-conjugation. Contrary to the previous subsection, which only contained standard definitions and notation, the content of this subsection has been defined in [10] and to our knowledge didn't exist previously. Definition 2.12. For a graph-of-groups automorphism H : G → G two reduced words w 1 , w 2 ∈ Π(G) are said to be H-conjugate to each other if there exists a reduced word w ∈ Π(G) such that w 1 = ww 2 H * (w) −1 . It's easy to show that H-conjugation is a well-defined equivalence relation on Π(G). Denote by [w] H the set which consists of all elements in Π(G) that are H-conjugate to w. We call [w] H the H-conjugacy class of w. Recall that the path length of a word w ∈ Π(G) equals to the number of edges the path underlying w crosses. We denote the path length of w by w G . Definition 2.13. A reduced word w ∈ Π(G) is said to be H-minimal if it has the shortest path length among its H-conjugates. More specifically, if w is H-minimal, then for every Since w G is a natural number, one has that every reduced word w ∈ Π(G) has a H-conjugate which is H-minimal. Therefore there exists a well defined H-length: Definition 2.14. A reduced word w ∈ Π(G) is called H-zero if and only if its H-length equals to zero, i.e. w G,H = 0. It also can be shown that w ∈ Π(G) is H-minimal if and only if it is H-reduced, as defined below: Definition 2.15. Let w ∈ Π(G) be a reduced word in the form of w = r 0 t 1 r 1 ...r q−1 t q r q , w is said to be H-reduced if its cannot be shortened by the are also H-reduced. Moreover, since H * preserves the path lengths of reduced words, we also Proof. In general, for every reduced word W ∈ Π(G), there exists γ ∈ Π(G) such that γ −1 W H(γ) is H-reduced. In the case where H acts trivially on the graph Γ, the reduced words γ and H(γ) underly exactly the same edge path. Hence the word γ −1 W H(γ) is a closed word issued at the ternimal vertex of γ. Moreover, it derives from Section 2.2 that we can find such an H-reduced word with path type that is a subsequence of the path type of W by applying the elementary operation defined in Definition 2.15. ⊔ ⊓ (4) for each G e , there is an element γ e ∈ Z(G e ) such that the correction term satisfies δ(e) = f e (γ e ), where Z(G e ) denotes the center of G e . We denote a Dehn twist defined as above by D = D(G, (γ e ) e∈E(G) ) Remark 3.2 (Twistor). Given a Dehn twist D = D(G, (γ e ) e∈E(G) ), we define the twistor of an edge e ∈ E(Γ) by setting z e = γ e γ −1 e . Then for any edge e we have z e ∈ Z(G e ) and z e = γ e γ −1 e = z −1 e . Remark 3.3. The induced automorphism D * : Π(G) → Π(G) is defined on generators as follows: = t e f e (z e ), for every e ∈ E(Γ). In particular, the induced automorphism on the fundamental group, D * v : Definition 3.4. In general, a group automorphism ϕ : G → G is said to be a Dehn twist automorphism if it is represented by a graph-of-groups Dehn twist. 
More precisely, there exists a graph-of-groups G, a vertex v of Γ(G), a Dehn twist D : G → G, and an isomorphsim θ : In this case the induced outer automorphism ϕ : G → G is called a Dehn twist outer automorphism. Remark 3.5. The reader may notice the following subtlety in the above definitions: Because of the role of the base point v in Definition 3.4, it may well occur that two automorphisms ϕ 1 and ϕ 2 of a group G define the same outer automorphism ϕ 1 = ϕ 2 which is a Dehn twist outer automorphism, but only ϕ 1 is a Dehn twist automorphism, while ϕ 2 isn't. Proposition 3.6 (Proposition 5.4 [3]). Suppose G is a graph-of-groups which satisfies that for every edge e there is an element r e ∈ G τ (e) with f e (G e ) ∩ r e f e (G e )r −1 e = {1}. Then two Dehn twists D = (G, (γ e ) e∈E(G) ), D ′ = (G, (γ ′ e ) e∈E(G) ) determine the same outer automorphism of π 1 (G) if and only if z e = z ′ e for all e ∈ E(Γ). This proposition shows that in many situations a Dehn twist on a given graph-of-groups is uniquely determined by its twistors. Thus sometimes we may define a Dehn twist by its twistors (z e ) e∈E(Γ) (for each e ∈ E(Γ), z e ∈ Z(G e ) and z e = z −1 e ). In this case, we may conversely define: General and partial Dehn twists. As discussed in [10], we can define a Dehn twist in a slightly more general context by replacing the last condition of Definition 3.1 by the following: (4*) the correction term δ(e) ∈ C(f e (G e )), where C(f e (G e )) denotes the centeralizer of f e (G e ) in G τ (e) , for all e ∈ E(Γ). It's shown in [10] that Dehn twists defined in either, the classical or the general version, are equivalent in the sense that: (i) every classical Dehn twist is a general Dehn twist; (ii) every general Dehn twist has a naturally corresponding classical Dehn twist which induces same outer automorphism. On other hand, if G is a graph-of-groups with trival edge groups, and H : G → G is an automorphism which acts trivially on the graph and vertex groups, then it follows immediately from the above definitions that H induces Dehn twist automorphisms, for any choice of the family of correction terms. Definition 3.7. (a) A partial Dehn twist relative to a subset of vertices That is to say, any of vertex group automorphism H v i with v i ∈ V may not be trivial. (b) More specifically, a partial Dehn twist relative to a family of local Dehn twists is a partial Dehn twist relative to a subset of vertices V of G, and at Dehn twist automorphism H v i , then ϕ is a partial Dehn twist relative to a family of local Dehn twists. This can be seen through replacing H by J • H, where the graph-ofgroups automorphism J : G → G is the identity on all edge groups and on all vertex groups for vertices outside V, and an inner automorphism on all Here the correction terms δ J e for edges e with terminal vertex τ (e) / ∈ V are trivial, while for edges e with τ (e) ∈ V they are properly chosen to "undo" the inner automorphism J τ (e) on G τ (e) , so that for any v / ∈ V the induced automorphism J * v : π 1 (G, v) → π 1 (G, v) is the identity map (and thus for any v ∈ V the automorphism J * v is an inner automorphism). See Section 2.4 in [10] for more details. Efficient Dehn Twist. Unless otherwise stated, in this subsection we always assume D : G → G is a Dehn twist defined in the classical meaning. We write D = D(G, (z e ) e∈E(G) ). 
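Before passing to efficient representatives, here is a toy numerical caricature of the action of D_* described in Remark 3.3, which also illustrates the linear growth of Dehn twist automorphisms mentioned earlier. The encoding keeps only free cancellation over the letters {a, b, t}, so it is an assumption-laden simplification rather than the actual path-group arithmetic.

```python
# Toy caricature of a Dehn twist D_* on words: vertex letters are fixed and every stable
# letter picks up (the image of) a twistor z.  Upper case encodes inverse letters; only free
# cancellation is performed, so this is not an implementation of the path-group relations.
def free_reduce(word):
    out = []
    for x in word:
        if out and out[-1] == x.swapcase():
            out.pop()
        else:
            out.append(x)
    return "".join(out)

def twist(word, z="a"):
    """D_*: a -> a, b -> b, t -> t z, T (= t^{-1}) -> z^{-1} T."""
    pieces = []
    for x in word:
        if x == "t":
            pieces.append("t" + z)
        elif x == "T":
            pieces.append(z.swapcase() + "T")
        else:
            pieces.append(x)
    return free_reduce("".join(pieces))

w = "tbT"                       # a closed word crossing the twisted edge twice
for n in range(5):
    print(n, len(w), w)         # the length grows linearly in n, as expected for a Dehn twist
    w = twist(w)
```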
Two edges e 1 and e 2 with common terminal vertex v are called ⋄ positively bonded, if there exist n 1 , n 2 ≥ 1 such that f e 1 (z n 1 e 1 ) and f e 2 (z n 2 e 2 ) are conjugate in G v . ⋄ negative bonded, if there exist n 1 ≥ 1, n 2 ≤ 1 such that f e 1 (z n 1 e 1 ) and f e 2 (z n 2 e 2 ) are conjugate in G v . For the rest of this subsection, we always assume for a graph-of-groups G its fundamental group π 1 (G) is free and of finite rank n ≥ 2. This implies, by definition of a classical Dehn twist, that any edge e with non-trivial twistor z e has edge group G e ∼ = Z. The graph-of-groups G satisfies The following has been shown in [3]: It's shown in [3] that every Dehn twist (classical or general) can be transformed algorithmically into an efficient Dehn twist, and furthermore, the latter is essentially unique: ) and an isomorphism between fundamental groups ρ : are properly chosen vertices, such that D ′ ρ = ρ D. ⊔ ⊓ We now return to the issue of H-conjugation as recalled in the last section, but with the specification that the graph-of-groups isomorphism H : G → G is a Dehn twist D, and we are interested in H-reduced or rather D-reduced elements as discussed in Definition 2.14 and Remark 2.15. This shows that even in case of an efficient Dehn twist D for non-Dreduced word in Π(G) being D-zero and having its cyclically reduced path length equals to zero are not equivalent. In view of the existence result for efficient Dehn twist representatives from Theorem 3.12 (1) we will always, when an outer Dehn twist automorphism ϕ ∈ Out(F n ) is given without specification of a Dehn twist representative, assume that it is represented by an efficient Dehn twist D : G → G. Similarly, a Dehn twist automorphism ϕ ∈ Out(F n ) without specification of a Dehn twist representative is always assumed to be represented by an efficient Dehn twist D : G → G. Recall from Remark 3.5 that the last convention may appear slightly restrictive, in particular when it comes to a partial Dehn twist D : G → G relative to a subset V of vertices v i of G for which the induced vertex group automorphism D v i is known to induce an outer Dehn twist automorphism D v i . However, if follows from Remark 3.8 that this restriction is immaterial; this class of automorphisms is precisely the same as the one given in Definition 3.7 (b), i.e. the class of partial Dehn twists relative to a family of local Dehn twists. In view of the uniqueness of D affirmed by part (2) of Theorem 3.12, the following notion is well defined: Definition 3.14. Let ϕ ∈ Aut(F n ) be a Dehn twist automorphism. Then any element w ∈ F n is called ϕ-reduced (or ϕ-zero) if it is D-reduced (or D-zero) with respect to some efficient Dehn twist representative of ϕ. Growth type. We first recall some standard notation and well know elementary facts: Definition 4.1. Let G be a finitely generated group, and let X = {x 1 , x 2 , ..., x n } denote its generating set. The length function with respect to the generating set X is defined by setting for any g ∈ G: .., n}, ε j ∈ {±1}}. The cyclic length of g ∈ G is defined by (1) For any g ∈ G we have |g| X ≥ 0, and |g| X = 0 holds if and only of g = 1. (2) For any g ∈ G the cyclic length g X is the minimum of all lengths of elements in the conjugacy class [g]. The elements h ∈ G and hgh −1 such that |g| X = |hgh −1 | X may not be unique. (3) If G = F n and X is a basis, we also have g X = |gg| X − |g| X . Furthermore g X = |g| X if and only if g ∈ G is cyclically reduced. 
(4) For any words g, h ∈ F n we always have |gh| ≤ |g| + |h|. Remark 4.3. For two sets of generators ., x ′ n } of G, the length fonctions | · | X and | · | X ′ are equivalent up to a constant. To be more precise, there exists a constant C > 0 such that for all g ∈ G: 1 C |g| X ≤ |g| X ′ ≤ C|g| X . Definition 4.4. Let ϕ ∈ Aut(G) be an automorphism and X be any generating system of G. For any element g ∈ G we introduce a function Gr(ϕ, g) to trace the length of g under the iteration of ϕ: Similarly, for the cyclic length we have: Notice that for ϕ ∈ Aut(G), g 1 , g 2 ∈ [g], and n ∈ N one has: Gr c (ϕ, g 1 )(n) = Gr c (ϕ, g 2 )(n) Also, for ϕ ∈ Out(G) and ϕ 1 , ϕ 2 ∈ ϕ we obtain Gr c (ϕ 1 , g)(n) = Gr c (ϕ 2 , g)(n) for all g ∈ G, n ∈ N. Thus it makes sense to consider the cyclic length of the conjugacy class [g] (or equivalently g ∈ [g]) under the iteration of the outer automorphism ϕ ∈ Out(G). Definition 4.5 (growth type). (a) We say that g ∈ G grows at most polynomially of degree d under iteration of ϕ ∈ Aut(G) if Gr(ϕ, g) is bounded above by a polynomial of degree d. The conjugacy class [g] grows at most polynomially of degree d under iteration of ϕ ∈ Out(G) (or equivalently, of ϕ ∈ ϕ) if Gr c ( ϕ, [g]) (or Gr c (ϕ, [g])) is bounded above by a polynomial of degree d. The automorphism ϕ ∈ Aut(G) has at most polynomial growth of degree d if any g ∈ G grows at most polynomially of degree d. Similarly, the outer automorphism ϕ ∈ Aut(G) has at most polynomial growth of degree d if any [g] ⊂ G grows at most polynomially of degree d. (b) Similarly we say that g ∈ G (or [g]) grows at least polynomially of degree d under iteration of ϕ ∈ Aut(G) (or of ϕ ∈ Out(G) respectively) if Gr(ϕ, g) (or Gr c ( ϕ, [g])) is bounded below by a polynomial of degree d with positive leading coefficient. The automorphism ϕ ∈ Aut(G) has at least polynomial growth of degree d if some g ∈ G grows at least polynomially of degree d. Similarly, the outer automorphism ϕ ∈ Aut(G) has at least polynomial growth of degree d if some [g] ⊂ G grows at least polynomially of degree d. (c) If g (or [g]) grows both at most and at least polynomially of degree d, then we say that it grows polynomially of degree d. (d) An automorphism ϕ ∈ Aut(G) (or an outer automorphism ϕ ∈ Out(G)) grows polynomially of degree d if every g ∈ G (or every [g] ⊂ G) grows at most polynomially of degree d and in particular there exists an element g 0 ∈ G (or a conjugacy class [g] ⊂ G) that grows polynomially of degree d. Definition 4.7. Let (w k ) +∞ k=1 a family of elements in G, we sometimes say that the sequence w k grows at least polynomially of degree d if there exists constant C 1 > 0 such that C 1 k d ≤ |w k |. Similarly, we say that w k grows at most polynomially of degree d if there exists C 2 > 0 such that |w k | ≤ C 2 k d and that w k grows polynomially of degree d if one can find C 1 , C 2 > 0 such that C 1 k d ≤ |w k | ≤ C 2 k d . Cancellation and iterated products. Let F n be a free group and denote by A a fixed basis of F n . As before, we denote the combinatorial length (with respect to A) of an element W by |W | = |W | A , and the cyclically reduced length of W by W = W A . Lemma 4.8. Let F n be a free group, let V ⊂ F n be a subgroup of rank n ≥ 2, and let (w i ) i∈N be any infinite family of elements w i ∈ F n . Then for any basis A of F n there exists an element v ∈ V and a constant C ≥ 0 such that for infinitely many indices i ∈ N the cancellation in the product Pick elements v 1 and v 2 in V which generate a subgroup of rank 2. 
Consider the products w_i v_1^m w_i^{-1}, for increasing integers m. We observe that one of the following must hold: (1) For some m ∈ N the cancellation in w_i v_1^m w_i^{-1} is bounded uniformly with respect to all i ∈ N. (2) For any m ∈ N there is an index j(m) ∈ N such that w_{j(m)} has the suffix v_1^{-m}. (3) For any m ∈ N there is an index j'(m) ∈ N such that w_{j'(m)}^{-1} has the prefix v_1^{-m}, or equivalently, w_{j'(m)} has the suffix v_1^m. In the case of alternative (1), the proof is complete. In case (2), we replace the family of all w_i by the subfamily of all w_j with j = j(m) for some m ∈ N; it follows from statement (2) that this subfamily is infinite. In case (3) we do the same, but with j = j'(m). We now repeat the above trichotomy, with w_i replaced by w_j, and with v_1 replaced by v_2. We observe that in this second trichotomy the alternatives (2) and (3) would lead to elements w_j whose suffix is simultaneously an arbitrarily large positive or negative power of v_1 and of v_2. But this is impossible, by our assumption that v_1 and v_2 generate a subgroup of rank 2. Thus alternative (1) must hold for the second trichotomy, which proves our claim. ⊔ ⊓ Cancellation in long products. Definition 4.9. We say that U and V admit a common root if there exists an element R ∈ F_n, R ≠ 1, such that U = R^{m_1} and V = R^{m_2} for suitable m_1, m_2 ∈ N; R is called a common root of U and V. We recall the following well known fact: Proposition 4.10 ([5]). For any elements U, V ∈ F_n, there is an algorithm which decides whether they admit a common root. The following is well known too: Lemma 4.11. For two elements U, V ∈ F_n we have U^{n_1} ≠ V^{n_2} for all n_1, n_2 ≥ 1 if and only if U, V do not admit a common root. Proof. On the one hand, if U, V admit a common root R such that U = R^{m_1}, V = R^{m_2} for some m_1, m_2 ≥ 1, then we have U^{m_2} = V^{m_1}. On the other hand, if there exist n_1, n_2 ≥ 1 such that U^{n_1} = V^{n_2}, then by comparing the suffixes and prefixes of U and V we can find a common root R ∈ F_n. ⊔ ⊓ Lemma 4.12. If U^{-1}, V ∈ F_n do not admit a common root, then the cancellation in the products U^{n_1} V^{n_2}, for any n_1, n_2 ∈ N, is uniformly bounded. As a consequence, there exists a constant K_0 = K(U^{-1}, V) such that for any n_1, n_2 ∈ N we have: Proof. If no constant K_0 as postulated exists, then by comparing the suffixes and prefixes of U and V we can find a common root R ∈ F_n for U^{-1} and V, which contradicts our hypothesis. Therefore by definition we can find B_1 ≥ 0 such that Hence, by taking K_0 = −B_0 we have for any n_1, n_2 ≥ 0. ⊔ ⊓ Remark 4.13. Furthermore, for U^{-1}, V ∈ F_n which do not admit a common root, we have at the same time that the cancellation in the products V^{n_2} U^{n_1}, for any n_1, n_2 ∈ N, is uniformly bounded by some constant B_2 ≥ 0. Taking Lemma 4.14. Let X, b, Y ∈ F_n be elements such that X^{-m_1} ≠ b Y^{m_2} b^{-1} for any m_1, m_2 ≥ 1. Then there exists a constant K = K(X, b, Y) such that for any n_1, n_2 ≥ 0 we have: Proof. For any n_1, n_2 ≥ 0, we may consider the word Taking U = X, V = b Y b^{-1}, we know from Lemma 4.11 that U^{-1}, V do not admit a common root. Hence it follows from Lemma 4.12 that there exists ⊔ ⊓ Remark 4.15. Notice that in the situation considered in Lemma 4.14 we may not have the following inequality: ‖X^{n_1} b Y^{n_2}‖ ≥ n_1 ‖X‖ + n_2 ‖Y‖ + K. Counter-example: consider F_2 = ⟨a, b⟩ and let X = a^{-1}, Y = a. Lemma 4.16. Let X, b, Y ∈ F_n be elements such that X^{-m_1} ≠ b Y^{m_2} b^{-1} for any m_1, m_2 ≥ 1.
Then there exist cyclically reduced conjugates X, Y of X, Y and n 0 ∈ N such that for all n 1 , n 2 ≥ n 0 , in the reduced product of X n 1 bY n 2 neither X nor Y is completely cancelled. Proof. Consider the cyclically reduced conjugates for X, Y , where w 1 , w 2 ∈ F n . We may then write the word We derive from the uniformly bounded property that there exists n 0 ∈ N such that when n 1 , n 2 ≥ n 0 , in the reduced product of X n 1 bY n 2 = w −1 1 X n 1 w 1 bw −1 2 Y n 2 w −1 2 , neither X nor Y is completely cancelled. More concretely, since we always have the inequality |X n 1 bY n 2 | ≤ n 1 X + n 2 Y + 2|w 1 | + 2|w 2 | + |b|, it follows from Lemma 4.14 that the cancellation in the products X n 1 bY n 2 is uniformly bounded by B = 2|w 1 | + 2|w 2 | + |b| − K, where K = K(X, b, Y ) is the constant obtained in Lemma 4.14. Then we may choose n 0 ∈ N that satisfies X n 0 = n 0 X , Y n 0 = n 0 X ≥ B. Main cancellation result. Let now F be a set which consists of finitely many triplets (X i , b j , X k ), where X i , b j , X k ∈ F n are elements which satisfy (1) X i > 0 and X k > 0, and (2) X −m 1 i = b j X m 2 k b −1 j , for any m 1 ≥ 1, m 2 ≥ 1. We consider below words w = w(n 1 , n 2 , ...n q ) = c 0 y n 1 1 c 1 y n 2 2 c 2 . . . c q−1 y nq q c q ∈ F n which have the property (y i , c i , y i+1 ) ∈ F, for 1 ≤ i ≤ q. We then derive the following proposition: Proposition 4.17. There exist constants N 0 ≥ 0 and K 0 such that for any w = w(n 1 , n 2 , ...n q ) ∈ F n as above and any n i ≥ N 0 (with 1 ≤ i ≤ q), we have Proof. It follows directly from Lemma 4.14 and Remark 4.16 that for each triplet (y i , c i , y i+1 ), 1 ≤ i ≤ q − 1, there exist constants K i = K(y i , c i , y i+1 ) and N i ≥ 0 such that : i+1 | ≥ n i y i + n i+1 y i+1 + K i , Moreover, if n i , n i+1 ≥ N i neither of the cyclically reduced conjugates y i = w i y i w −1 i , y i + 1 = w i+1 y i+1 w −1 i+1 is completely cancelled in the reduced product We now prove the proposition by induction. (1) The case for q = 1 is trivial while the case for q = 2 is shown in Lemma 4.14. (2) Suppose the inequality holds for q = s. In other words: we can find constants N ≥ N s−1 and K such that for n i ≥ N |c 0 y n 1 1 c 1 y n 2 2 c 2 ...c s−1 y ns s c s | ≥ s i=1 n i y i + (s − 1)K and y ns s is not completely cancelled in the reduced procedure. In particular, given that in each inductive step the constants N and K ′ depends only on the triplets (y i , c i , y i+1 ), for 1 ≤ i ≤ q − 1 one can in fact deduce the final cancellation bound (q − 1)K 0 based on just the family F. In other words, the cancellation bound K 0 doesn't depend on the exponents n i 's, once they are bigger than N 0 := N . ⊔ ⊓ Remark 4.18. In addition if (y q , c q c 0 , y 1 ) ∈ F, similarly to what is done in the last proof, we may apply the same technique to the triplet (y q , c q c 0 , y 1 ) and obtain the following estimate for cyclical length of w (again assuming n i ≥ N 0 for all exponents n i ): n i y i + qK 0 Cancellation bounds for T -products. Let T ⊂ F n {1} be a finite set. We say that a product if w i ∈ F n and y i ∈ T for all indices i, and we say that the product W is for any integers m, m ′ ≥ 0. For any T -word W as in (4.1) and any multi-exponent Proof. For the given family W = W (w i , y i ) we set Then our claim follows directly from Proposition 4.17. 
⊔ ⊓ Graph-of-groups with trivial edge groups: Growth bounds In this section we will suppress the base point v in the fundamental group π 1 G of a graph-of-groups G if it is immaterial, and write simply π 1 G. Lemma 5.1. Let G be a graph-of-groups with trivial edge groups. Let (W i ) +∞ i=1 ⊂ π 1 (G) be a family of cyclically reduced words on G, where W i = v 0 (i)t 1 v 1 (i)t 2 ...t q v q (i). If for some 1 ≤ k ≤ q, the length of v k (i) under some (hence any) finite generating system B k of the vertex group G v k , i.e. |v k (i)| B k , grows quadratically with respect to i, then W i grows at least quadratically with respect to i. Proof. As shown in Section 2.1 that the union of generating systems of each vertex group v∈V (G) B v forms a subset of a generating system B of π 1 (G), where each vertex group is a free factor. We obtain: It follows immediately from the conditions that at least one of |v k (i)| B k grows quadratically and that W i is cyclically reduced that the cyclically reduced length of W i grows at least quadratically. ⊔ ⊓ Recall that a graph-of-groups G is called minimal if it doesn't contain a proper subgraph G ′ such that the inclusion induces an isomorphism on the fundamental groups. For a finite graph-of-groups with trivial edge groups this amounts to requiring that any vertex of valence 1 has a non-trivial vertex group. Lemma 5.2. Let G be a minimal graph-of-groups with trivial edge groups, Then for any edge e with terminal vertex v = τ (e) that has non-trivial vertex group G v one can find a cyclically reduced word w ∈ π 1 (G) with underlying path that runs subsequently through the edge e and directly after through e. Proof. Because G is a minimal graph-of-groups, each connected component of the graph Γ ′ obtained from Γ(G) by removing the edge e must contain either a circuit ω or else a vertex v ′ with non-trivial vertex group G v ′ . Let γ be a path in Γ ′ which connects ι(e) either to the initial (= terminal) vertex of some such ω, or else to v ′ . Let v ∈ G v {1} and u ∈ G v ′ {1} (in the second case only). Then γ −1 * eveγ * ω * (in the first case) or γ −1 * eveγ * u (in the second one) are the words we are looking for, where γ * and ω * denote the sequence of stable letters t e i defined by the edges e i of γ and ω respectively. ⊔ ⊓ Proposition 5.3. Let G be a minimal graph-of-groups with trivial edge groups, and let H : G → G be a graph-of-groups automorphism which acts trivially on the underlying graph Γ = Γ(G). Let v be a vertex of Γ, with vertex group automorphism H v : G v → G v . For some edge e with terminal vertex τ (e) = v denote by δ e ∈ G v the correction term of e. Assume that for some g ∈ G v there exist a constant C > 0 and a strictly increasing sequence of numbers n i ∈ Z which satisfy: Then the induced outer automorphism H of π 1 G has at least quadratic growth. Proof. If follows immediately from Lemma 5.2 that one can find cyclically reduced word w ∈ π 1 G with underlying path that runs through the edge e and subsequently through e, and w contains the word t e gt −1 e as subword. As a consequence the iteration of H * : Π(G) → Π(G) on w will give words H k * (w) that contain Hence it follows from the assumed inequality and from Lemma 5.1 that the subsequence H n i * (w) grows at least quadratically. Therefore the conjugacy class of w grows at least quadratically under the iteration of H * , which implies that the induced outer automorphism H grow at least quadratically. 
⊔ ⊓ Dehn twists This section is dedicated to translate our cancellation propositions into graph-of-groups language. Through the whole section, we always assume that the free group F n is of rank n ≥ 2. We first prove the following Proposition. Proposition 6.1. Let F n be a free group with rank n ≥ 2, and let D ∈ Aut(F n ) be a Dehn twist automorphism which is represented by an efficient Dehn twist. Then we have: (1) There exists a finite set of "twistors" T = {z 1 , . . . z r } ⊂ F n {1}, such that for any element w ∈ F n there exists a (non-unique) "T -decomposition" of w as product (6.1) w = w 0 w 1 w 2 . . . w q−1 w q = w 0 y 0 1 w 1 y 0 2 w 2 . . . w q−1 y 0 q w q with w i ∈ F n and y i or y −1 i in T such that D n (w) = w 0 y n 1 w 1 y n 2 w 2 . . . w q−1 y n q w q , and y −m for any integers m, m ′ ≥ 0. (2) The rank of the subgroup of F n which consists of all elements fixed by D satisfies: rk(Fix(D)) ≥ 2 Proof. By definition D is represented by an efficient Dehn twist D : G → G on a graph-of-groups G with fundamental group π 1 G isomorphic to F n . We pick a vertex v 0 of G as base point and specify the above isomorphism to be θ : F n ∼ = −→ π 1 (G, v 0 ). The automorphism D fixes the θ −1 -image of the vertex group G v 0 of G elementwise, and since efficient Dehn twists have all vertex groups of rank ≥ 2 (see Proposition 3.10), this proves claim (2) of the proposition. In order to obtain claim (1), we chose a maximal tree Y in the graph Γ and identify in the usual fashion each vertex group G v canonically with a subgroup of π 1 (G, v 0 ) by connecting v 0 to v through a simple path in Y . Similarly, for any edge e the stable letter t e ∈ Π(G) gives rise to an element in π 1 (G, v 0 ) by connecting v 0 to the terminal vertices of e through simple paths in Y (which gives 1 ∈ π 1 (G, v 0 ) if and only if e belongs to Y ). The collection T is then given by the twistors z e of the edges e ∈ E + (G) (for some orientation E + (G) ⊂ E(G), see subsection 2.1). For any w ∈ F n the collection of factors w i in the T -product decomposition (6.1) is obtained by writing θ(w) as a reduced word v 0 t 1 v 1 t 2 v 3 . . . v q−1 t q v q in π 1 (G, v 0 ) (see Proposition 2.6), and by applying θ −1 to v 0 or to any of the t i v i for i = 1, . . . , q. The equality D n (w) = w 0 y n 1 w 1 y n 2 w 2 . . . w q−1 y n q w q is a immediate consequence of the definition of a Dehn twist automorphism, see Remark 3.3. The inequalities y −m i = w i y m ′ i+1 w −1 i , for any integers m, m ′ ≥ 0, follow directly from the condition that the efficient Dehn twist D does not have twistors that are positively bonded, see Definition 3.9 (5). ⊔ ⊓ Remark 6.2. The representation of w ∈ F n as T -product as given in Proposition 6.1 is not unique. However, it follows from the proof that the sequence of twistors y i is well defined, and thus also the T -length |w| T of w. The intermediate words w i are well defined up to replacing them by y p i w i y q i+1 for some p, q ∈ Z. Definition 6.3. A T -product representative of w ∈ F n as in Proposition 6.1 is called cyclically T -reduced if y −m q = w q w 0 y m ′ 1 w −1 0 w −1 q for any integers m, m ′ ≥ 0. (1) Let D and T be as in Proposition 6.1. Assume that D : G → G is an efficient Dehn twist representative of D, with respect to some identification isomorphism θ : F n → π 1 (G, v 0 ) for some vertex v 0 of G. For some element w ∈ F n let W ∈ Π(G) be the corresponding element in the Bass group of G, i.e. W = θ(w) ∈ π 1 (G, v 0 ) ⊂ Π(G). 
We say that w is D-reduced if W is D-reduced. (2) Recall that θ(w) is called D-reduced if the element W is reduced as word in Π(G), and if its G-length can not be shortened by D-conjugation, i.e. by passing over to a word V −1 W D(V ) ∈ π 1 (G, v 1 ) ⊂ Π(G), for some vertex v 1 of G. It follows from our considerations in the proof of Proposition 6.1 that in this case w is T -reduced and cyclically T -reduced. (3) If W is not D-reduced, then we can follow the procedure indicated in Definition 2.14 and Remark 2.15 to perform iteratively elementary D-reductions until we obtain a new word W ′ ∈ π 1 (G, v 1 ) ⊂ Π(G) which is D-reduced, for a possibly different vertex v 1 . In this case we change our identification isomorphism θ corresponding to the performed D-reductions to obtain a new identification θ ′ : F n → π 1 (G, v 1 ) with respect to which w is D-reduced. (4) Recall that a D-reduced word W ∈ Π(G) is D-zero if and only if W has G-length 0, or in other words, W is contained in some vertex group G v . In this case we say that a word w ∈ F n with θ(w) = W is D-zero. We thus note that any w ∈ F n which is D-reduced (possibly with respect to a modified identification isomorphism θ ′ an in (3) above) but not D-zero is T -reduced, cyclically T -reduced, and of T -length |w| T ≥ 1. Let D ∈ Aut(F n ) be a Dehn twist automorphism, recall that for any element w ∈ F n and any integer n ≥ 1 we denote by D (n) (w) the iterated product, defined through: D (k) (w) := w D(w) D 2 (w) . . . D k−1 (w), and the partial iterated product is given by D (k 1 ,k 2 ) (w) := D k 1 (w) D k 1 +1 (w) . . . D k 2 (w). Proposition 6.5. Let D ∈ Aut(F n ) be a Dehn twist automorphism, and denote by T the set of "twistors" defined in Proposition 6.1. Let w ∈ F n be a D-reduced word which is not D-zero, i.e. |w| T = 0. Then the combinatorial length of D (k) (w) has quadratic growth with respect to k, i.e. one can find constant C 1 such that |D (k) (w)| ≃ C 1 k 2 . Proof. It follows from Proposition 6.1 that w admits a T -reduced decomposition w = w 0 y 0 1 w 1 y 0 2 w 2 . . . w q−1 y 0 q w q with w i ∈ F n and y i ∈ T which satisfies that D k (w) = c 0 y k 1 c i . . . c q−1 y k q c q . It follows immediately from Proposition 4.19 that there exist constants N 0 ≥ 0 and K 0 such that for k ≥ N 0 we have: Since w is D-reduced, Remark 6.4 shows that the cancellation in between D i (w)D i+1 (w) is bounded by some constant K 1 . In particular, we may take k ′ = N 0 hence On the other hand, for all k ∈ N we always have Together these two inequalities give |D (k) (w)| ≃ C 1 k 2 for some C 1 > 0. ⊔ ⊓ Remark 6.6. Let N 0 be as above. Denote B = D (N 0 ) (w) = wD(w)D 2 (w) . . . D N 0 −1 (w). We can now find N 1 > N 0 ∈ N large enough so that the following two conditions hold: 1. Let w 1 be the prefix of D (N 0 ,N 1 ) (w) which ends with y q−1 such that |B| ≤ |w 1 |. 2. The number N 1 is large enough so that |y N 1 −1 q | ≥ |w 1 | + |c q | + |c q−1 |. Corollary 7.2. Let ϕ ∈ Out(F n ) be represented by a partial Dehn twist H : G → G relative to a family of local Dehn twists. Assume that for some edge e of G the correction term δ e is not locally zero. Then ϕ has at least quadratic growth. 
⊔ ⊓ As a final remark we want to point out that Corollary 1.2 is indeed a direct consequence of the above corollary together with the main result of [10], which states that, in the situation of Corollary 7.2, if all of the correction terms for the edges of G are locally zero, then H can be blown-up at the local Dehn twists to give a Dehn twist representative of the automorphism ϕ.
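To see the quadratic growth asserted in Proposition 6.5 (and exploited in Corollary 7.2) in the simplest possible case, the following self-contained Python sketch is helpful. It is ours, and it assumes the elementary twist automorphism a ↦ a, b ↦ ba of F_2 rather than the general graph-of-groups machinery; the helper names are ad hoc. For w = b no cancellation occurs in the iterated products, so |D^(k)(b)| = k(k+1)/2 grows exactly quadratically.

```python
# Sketch illustrating the quadratic growth of the iterated products
# D^(k)(w) = w D(w) D^2(w) ... D^{k-1}(w), assuming D: a -> a, b -> ba on F_2.

def reduce_word(w):
    """Freely reduce a word ('a','b' generators, 'A','B' their inverses)."""
    out = []
    for x in w:
        if out and out[-1] == x.swapcase():
            out.pop()
        else:
            out.append(x)
    return out

def cyclic_length(w):
    """Cyclically reduced length, as in Definition 4.1."""
    w = reduce_word(w)
    while len(w) >= 2 and w[0] == w[-1].swapcase():
        w = w[1:-1]
    return len(w)

IMAGES = {'a': ['a'], 'b': ['b', 'a']}

def apply_D(w):
    out = []
    for x in w:
        img = IMAGES[x.lower()]
        out.extend(img if x.islower() else [y.swapcase() for y in reversed(img)])
    return reduce_word(out)

def iterated_product(w, k):
    """Compute D^(k)(w) = w D(w) D^2(w) ... D^{k-1}(w) as a reduced word."""
    prod, cur = [], list(w)
    for _ in range(k):
        prod += cur
        cur = apply_D(cur)
    return reduce_word(prod)

for k in (1, 2, 4, 8, 16, 32):
    Pk = iterated_product('b', k)
    # here |D^(k)(b)| = k(k+1)/2, i.e. quadratic in k (no cancellation occurs)
    print(k, len(Pk), cyclic_length(Pk))
```

The cyclic_length helper simply echoes Definition 4.1; in this example the cyclic and ordinary lengths coincide because the iterated products are already cyclically reduced.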
Incidence and mortality due to snakebite in the Americas Background Better knowledge of the epidemiological characteristics of snakebites could help to take measures to improve their management. The incidence and mortality of snakebites in the Americas are most often estimated from medical and scientific literature, which generally lack precision and representativeness. Methodology/Principal findings Authors used the notifications of snakebites treated in health centers collected by the Ministries of Health of the American countries to estimate their incidence and mortality. Data were obtained from official reports available on-line at government sites, including those of the Ministry of Health in each country and was sustained by recent literature obtained from PubMed. The average annual incidence is about 57,500 snake bites (6.2 per 100,000 population) and mortality is close to 370 deaths (0.04 per 100,000 population), that is, between one third and half of the previous estimates. The incidence of snakebites is influenced by the abundance of snakes, which is related to (i) climate and altitude, (ii) specific preferences of the snake for environments suitable for their development, and (iii) human population density. Recent literature allowed to notice that the severity of the bites depends mainly on (i) the snake responsible for the bite (species and size) and (ii) accessibility of health care, including availability of antivenoms. Conclusions/Significances The main limitation of this study could be the reliability and accuracy of the notifications by national health services. However, the data seemed consistent considering the similarity of the incidences on each side of national boundaries while the sources are distinct. However, snakebite incidence could be underestimated due to the use of traditional medicine by the patients who escaped the reporting of cases. However, gathered data corresponded to the actual use of the health facilities, and therefore to the actual demand for antivenoms, which should make it possible to improve their management. Introduction The symptoms caused by viper bite are mainly hemorrhagic and cytotoxic, the latter sometimes resulting in limb amputation or permanent disability [14; 15]. Some species of Crotalus may also produce neurotoxic symptoms similar to envenomation by Elapidae [16], and sometimes associated with acute renal failure [17]. Unlike the neurotoxins of rattlesnake venoms that act on presynaptic receptors (β-neurotoxins), the α-neurotoxins of Elapidae venoms bind to postsynaptic cholinergic receptors [13]. In both cases, paralysis of the cranial nerves can occur, inducing in some cases a potentially fatal respiratory arrest in the absence of specific (antivenom) and/or symptomatic treatment (artificial ventilation). The aim of this work was to assess the epidemiological burden of snakebite, including the incidence, mortality, population at risk and main explanatory characteristics of their frequency and severity: season, environment, altitude, density of human population, management, etc., in order to provide recent and useful data to improve the management of snakebites in the Americas. Methods A bibliographic search was performed by querying MedLine (PubMed last access 06/11/2016) using the keywords "America AND snake à AND [envenom à OR antiven à ]". From a total of 4,514 references, 187 concerned the epidemiology and/or management of snakebites in the Americas. 
Furthermore, websites regarding i) the epidemiology of snakebites (using the words "health surveillance", "surveillance bulletin", "epidemiology surveillance", "snakebite envenomation", "snakebite death"), ii) population demography (using the words "population demography") and iii) administrative and environmental geography (using the word "map") were identified using the Google search engine for each of the countries of America and in the official language of each country (English, Spanish, Portuguese, French and Dutch). Access to these websites was made between September 2010 and December 2016. The list of the websites and the last access date to each are mentioned in Table 1. However, a few websites were closed during this period and sometimes replaced by new ones, the use of which was often restricted by a password. All the data were transferred and analyzed using Excel software. The trend curves and R², the coefficient of determination (the square of the correlation coefficient, which indicates the extent to which the dependent variable is predictable), were calculated with Excel. Comparisons were made using parametric tests (t-test, χ² and Pearson correlation) or a nonparametric test (Mann-Whitney), depending on the distribution of the studied variables and the number of cases/groups. The significance level was set at 0.05 and means were expressed with a 95% CI. Statistical analyses were performed using the BiostatTGV online software (http://marne.u707.jussieu.fr/biostatgv/). Topographic, physical and political maps were taken from the World Atlas of Wikimedia (https://commons.wikimedia.org/wiki/Atlas_of_the_world) and drawn on the basis of the data obtained in this study. Results The average incidence is about 57,500 snakebites a year (6.34 per 100,000 population), resulting in almost 370 deaths (0.037 per 100,000 population), with a case fatality rate below 0.6% (Table 2). However, there are wide variations across countries and within each of them. The data are detailed for each country according to the websites mentioned in Table 1, occasionally tempered by recent epidemiological or clinical publications. Argentina Notifications cover the 2007-2014 period. There was a steady decrease in annual incidence (Fig 1A). The incidence showed a decreasing gradient from north to south (Fig 2), which corresponded to the climatic gradient between the subtropical Chaco province and the harsher Patagonian climate on the one hand, and the Andean climate of western Argentina on the other hand. Two provinces presented a higher incidence than the others: Santiago del Estero in the north, with a low population density (7 inhabitants per km²), and Misiones in the north-east, with a higher density (35 inhabitants per km²) but a predominantly agricultural population. The seasonal distribution of envenomation showed a summer incidence five to six times higher than the winter one (Fig 1B). These results corroborated those of Dolab et al. [18], obtained from a questionnaire survey conducted at health facilities. These authors showed the strong geographical heterogeneity of the incidence, which can reach 150 envenomations per 100,000 inhabitants in some places. They confirmed the low case fatality rate (0.04% according to the survey). Bothrops were responsible for 96.6% of the bites, Crotalus for 2.8% and Micrurus for 0.6%. The population at risk consisted of young men bitten during agricultural activities. Most envenomations (90%) were treated with an antivenom within the first four hours.
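For readers who wish to reproduce the kind of country-level summaries used throughout the Results (annual incidence per 100,000 inhabitants and the trend R² mentioned in the Methods, e.g. the decreasing Argentinian trend of Fig 1A), the following Python fragment illustrates the computation. The numbers are invented for the example and are not the notification data analysed in this study.

```python
# Illustrative only: per-100,000 incidence and linear-trend R^2 as described in the Methods.
import numpy as np

cases = np.array([620, 600, 585, 570, 540, 530, 515, 500])                 # hypothetical annual snakebites
pop = np.array([40.1, 40.5, 40.9, 41.3, 41.7, 42.1, 42.5, 42.9]) * 1e6     # hypothetical population
years = np.arange(2007, 2015)

incidence = cases / pop * 1e5           # snakebites per 100,000 inhabitants per year

# Least-squares linear trend and coefficient of determination R^2
slope, intercept = np.polyfit(years, incidence, 1)
fitted = slope * years + intercept
ss_res = np.sum((incidence - fitted) ** 2)
ss_tot = np.sum((incidence - incidence.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"mean incidence = {incidence.mean():.2f} per 100,000; trend slope = {slope:.3f}; R^2 = {r2:.3f}")
```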
Belize No information on snakebites has been obtained for Belize. However, on the basis of existing data from neighboring countries showing similar environments, the annual number of bites can be estimated at 35 (10 per 100,000 population) and deaths at 1 every 2 to 4 years (0,1 per 100 000 inhabitants). Bolivia The information is available online from 1996 but the notification was interrupted between 2001 and 2009. It returned to availability from 2010. The presentation of the data has been standardized, in particular as regards the classification of age groups. By 2015, the information has been supplemented by the addition of the gender of the patients. However, mortality from envenomation is still not provided. In their study on snakebites in Bolivia, Chippaux and Postigo [19] reported a national incidence of 8 bites per 100,000 population per year with a case fatality rate of 0.42 per 100,000 population. They extrapolated mortality from household survey data, which lacks precision and reliability but was consistent with the mortality observed in neighboring countries. Updating data available up to 2015 confirmed the impact over this period with over 900 annual bites (9.1 per 100,000 population). The number of deaths is still not reported but has been estimated at around 40 per year [19]. The main results of these authors, in particular the geographical distribution and the distribution by age group, were confirmed by the notifications during the years 2010-2015. Snakebites in the Americas The annual incidence increased significantly between 2010 and 2015 ( Fig 3A). The distribution of the specific incidence showed a steady growth according to age (Fig 3B). The sex ratio (M/F) was 1.81. The geographical distribution was very heterogeneous. The incidence is very low (less than 1 snakebites per 100,000 inhabitants) in the high mountain region, notably the Altiplano (departments of Potossi, Oruro and most of that of La Paz where altitude exceeds 3,500 m asl). The lowland or steppe departments, such as the Chaco region (Departments of Tajira, Santa Cruz, Chuquisaca, Cochabamba and Beni) have an incidence of between 5 and 50 per 100,000 inhabitants. Finally, the incidence exceeds 50 bites per 100,000 inhabitants in the Department of Pando in the Bolivian Amazon [19]. The seasonal distribution ( Fig 3C) showed a clear difference between the Chaco province (medium-altitude steppe) where the incidence is highest in the dry season, and Amazonia (low-lying primary forest) where bites occur mainly during the season rains. Finally, if the relationship between population density and incidence was not shown, Chippaux and Postigo [19] observed a significant inverse correlation (P < 1.6Á10 −4 ) between incidence and altitude. Brazil The notification of snakebites is performed for a long time in Brazil but the results are online only since 2001. According to Chippaux [20] from the data reported by the health facilities and available online on the site of SINAN which is the main Database on causes of morbidity and mortality available online since 2001 [21], the average number of snakebites was about 27,200 per year (15 per 100,000 population) with more than 115 deaths (0.06 100,000 inhabitants) during the period 2001-2012. The geographical distribution showed a clear predominance in northern Brazil, especially in the Amazon (Fig 4). The seasonal distribution of bites was more pronounced in the summer, particularly in the southern regions ( Fig 5). 
The incidence by age group varied greatly from region to region. It was higher among young people in the Amazon and in people over the age of 40 on the inland plateau [20]. Bothrops species were responsible for most of the bites everywhere in Brazil. Bites by Crotalus durissus are more frequent in the eastern and central savannas. The bites by Lachesis sp. are mostly observed in the Amazonian region. Those by Micrurus sp. are rare. Finally, there was a strong inverse correlation between incidence and population density [20]. The population at risk was made up of male farmers. Risk factors were more or less directly related to the agriculture and rural housing of the victims [22; 23]. Bochner and Struchiner [24] showed that these characteristics have been constant since the first epidemiological studies carried out by Vital Brazil in the early 20 th century. Incidence and mortality increased discreetly and seemed to follow demographic trends [20]. Canada Snakebites appeared to be very rare in Canada because of a climate unfavorable to the establishment of snake populations, and a highly mechanized agricultural activity. The presence of Sistrurus catenatus is attested in southern Ontario, the most populated region of the State, and Crotalus oreganus occurs in British Columbia. Crotalus horridus disappeared from Oregon since 1941 and from Quebec more recently [25]. Rumors of his return to southeastern Canada, including Quebec, have not been validated by the Recovery Commission for the Ontario Rattlesnake [26]. According to Dubinsky [27], there were about sixty snakebites reported in Ontario each year. There would have been 2 deaths between 1900 and 1960 [28] and since the 60s none has been reported in the literature. There were no figures for British Columbia and snakebites are considered very rare. In total, it can be assumed that snakebites are fewer than 100 annually and no death was reported in Canada since a long time. Snakebites are distributed in the two southern states (Ontario and British Columbia) close to the border of United States of America where rattlesnakes are still encountered. However, some snakebites recorded could be illegitimate bites inflicted when manipulating a snake in the field or in captivity. Chile There are neither Bothrops, nor Crotalus, nor Micrurus in Chile. Snakebites by opistoglyphic snakes were reported but not considered as public health issue [29]. The incidence of snakebites increased significantly during the period ( Fig 6A) without clear explanation. Maybe, the case report system-or political situation-improved enough to obtain more reliable data. The geographical distribution was heterogeneous. Incidence was relatively high in the whole of the country, especially in the Amazonian departments in the south (Fig 7) and much lower in central Colombia, both mountainous and urban. There was no correlation between population density and incidence. The seasonal distribution was constant throughout the year (Fig 6B). Costa Rica The notification has been available online since 2005 with some shortcomings-or delays in data capture-after 2012. The incidence of snakebites was significantly higher in the eastern provinces. In the center of the country, it was lower, especially in the province of San José, which is the most densely populated and mountainous (Fig 8). Annual snakebites ranged 500-1,000 without any particular trend [30]. The sex ratio was 1.7 (M/F) and increased in adult male whereas it decreased in women over 15 years (Fig 9A). 
The seasonal incidence was relatively stable during the year, however, with marked variability in the rainy season from May to November when the majority of snakebites occurred ( Fig 9B). These results were in agreement with those from the literature. The highest mortality is observed in the provinces of Puntarenas in the south and Limon in the east, linked to the abundance of Bothrops asper [31; 32]. Based on the notification of cases and environmental information, Hansson et al. [33] were able to model high risk zones of bites by Bothrops asper, and to recommend a targeted supply of antivenoms. Ecuador Access to full epidemiological data for years prior to 2013 was limited [34]. From 2013, the weekly notification was available online but showed many shortcomings. The highest incidence occurred in the Amazonian provinces (Oriente province), with 37% of the envenomations and an average annual incidence of 100 envenomations per 100,000 https://doi.org/10.1371/journal.pntd.0005662.g008 inhabitants (Fig 10). The majority of snakebites (58%) were in the coastal region (Costa province) with an average incidence of 12 bites per 100,000 inhabitants. In highland provinces in the center of the country (Sierra province), incidence was about 5 bites per 100,000 population (5% of the snakebites). During the rainy season, from January to April, the incidence of snake bites is twice as high as in the dry season. The incidence of snakebites is twice as high after the age of 10 and remains stable from teenagers to elderly. The very young children below 5 are ten times less involved than adults. El Salvador Although notification of snakebites has been mandatory since 2010, data are not accessible. On the other hand, they are subject to periodic reports put online. We used that of 2013 which compiled the data from 2010 to 2012. About 300 annual snakebites (5 per 100,000 inhabitants) were irregularly distributed during the year. Based on the results of neighboring states, the annual number of deaths can be https://doi.org/10.1371/journal.pntd.0005662.g010 estimated at 3 (0.05 per 100,000 inhabitants). The six months of the rainy season (May to October) accounted for nearly 65% of the envenomations (Fig 11A). The population at risk was mainly composed of young men. Patients aged 10 to 30 constituted 51% of the bites, while this age group represented less than 40% of the population. In addition, the sex ratio (M/F) was 1.5. During this period, no death was reported. The geographic distribution of incidence was heterogeneous, i.e. lower on the coast and in the center of the country (Fig 12), a probable consequence of the local population density, which is the highest of the Americas (Fig 11B). French Guyana There was no recent data concerning this small French department. According to the literature, mostly from surveys dating back to the 1980s, the annual incidence of envenomation exceeded 25 cases per 100,000 inhabitants with relatively high mortality [35][36][37]. Guatemala Data were available online since 2001 with some gaps, notably in 2005. With almost 900 snakebites on average each year (2001-2010), the distribution of the incidence was very heterogeneous (Fig 13). Mortality was not documented. It was estimated on the basis of neighboring country mortality at about 10 deaths per year (0.06 per 100,000 population). Guyana There was no notification of snakebites in Guyana. 
However, a study of cases of envenomation treated at the Georgetown Public Hospital Corporation (GPHC) in 2014 provided an estimate of the burden of envenomation for Guyana as a whole. However, data for the Amazon region, which is sparsely populated but with high snakebite risk, was highly under-estimated, partly because it was likely that few patients visit the health facilities and, on the other hand because the evacuation possibilities on Georgetown are almost nonexistent. According to Bux [38], there would be more than 200 snakebites each year in Guyana, an incidence greater than 25 bites per 100,000 inhabitants. The number of deaths was not specified, but Langston [39] mentioned a high number of deaths. The press reported 3 deaths in Georgetown between 2011 and 2014, which was probably underestimated since it did not take into account deaths in provincial health facilities. More than 80 snakebites were treated each year at the Georgetown Reference Hospital during the 2010-2012 period. However, the geographical distribution was biased due to the lack of reliable data for the South (Amazonian region) of the country (Fig 14). The age-specific incidence calculated on the basis of hospital data showed a constant increase of snakebite incidence until the age of 30-40 years and then a steady decline up to 60 years. Honduras Notification of snakebites has been mandatory since 2009 but online display was interrupted at the end of 2013. A little more than 650 snakebites occurred annually on average (10 per 100,000 inhabitants). The number of deaths was not reported but was estimated at 7 per year (0.08 per 100,000 population) based on observations in neighboring countries. Snakebites were mostly distributed to the north and east of the country (Fig 15), regions with the lowest altitude. The number of snakebites is relatively stable throughout the year with a slight increase in incidence during the rainy season from May to October. [8]. The geographical distribution of the bites covered the whole of the island, but mainly involved small agricultural communes (Fig 16). However, no obvious link was observed between snakebite incidence and agricultural work in the two main types of plantations of Martinique (bananas and sugarcane). Mexico Venomous animal attacks was reported since 1996 but snakebites were separated and available online only since 2003. The annual number of bites averaged 4,000 (3.3 per 100,000 inhabitants) with steady growth between 2003 and 2015 ( Fig 17A). The number of deaths was below fifty per year https://doi.org/10.1371/journal.pntd.0005662.g012 Snakebites in the Americas (0.035 per 100,000 inhabitants). As showed by Frayre-Torres et al. [43], the mortality rate decreased from 0.25 per 100,000 population in the 1970s to 0.05 during the 2000s. The lowering continued after the 2010 and is now less than 0.04 per 100 000 (Fig 17B).In addition, mortality was higher in the South than in the North of Mexico and increased significantly after the age of 40, whereas it appeared to be stable before. Case fatality rate was higher among males than females (P <0.028). The geographical distribution was relatively homogeneous (Fig 18) with a decreasing trend from the north, where the mean incidence was close to 2 per 100,000 inhabitants, towards the center (average incidence 7 per 100,000 inhabitants) and the South (incidence greater than 9 bites per 100,000 inhabitants). The sex ratio (M/F) was 1.97. The seasonal distribution showed a marked summer increase in snakebites (Fig 18). 
Nicaragua Notification of snakebites is not available online. The epidemiological data were based on the work by Hansson et al. [44] the source of whom was the Ministry of Health. According to these authors, there were about 650 snakebites each year (56 per 100,000 inhabitants) and 7 deaths (0.6 per 100,000 inhabitants). The geographical distribution was very heterogeneous, with a higher incidence in the south of the country, largely dependent on altitude, land use and health supply [44]. Panama Notification of snakebites in Panama was not available online. According to the Ministry of Health, the average annual incidence could be 1,900 snake bites (55 bites per 100,000 inhabitants). Valderrama et al. [45] mentioned about fifteen deaths per year (0.5 deaths per 100,000 inhabitants). The incidence was highest in the provinces of Darién, Coclé, Los Santos (three provinces in the center of the country) and Veraguas in the east, although in the latter the data were much underestimated. The work by Barahona de Mosca (2003, quoted by Valderrama et al. [45]) showed that people aged 20 to 44 were the most affected (44%), followed by teenagers aged 10-19 (23%), and children 0-9 (18%). In all age groups, males were most often bitten. Highest incidence occurred during the rainy season (from May to November). Paraguay Notification of snakebites has been mandatory since 2008 but was only truly functional from 2009. Nearly 250 snakebites were reported annually (3.5 per 100,000 population) during the period 2004-2015. Snakebites decreased regularly between 2009 and 2013, and then increased dramatically in 2014 and 2015. However, the general trend of incidence is decreasing (R2 = 0.7319) suggesting that the annual variations are random and risk is reducing. The average number of deaths was 5 per year (0.08 per 100,000 inhabitants). The seasonal incidence is relatively constant throughout the year with a slight increase during the rainy season (December to April). The incidence was higher in northern and eastern Paraguay (Fig 19). Peru Notification of snakebites has been available online since 2000. On average, 2,150 snakebites occurred per year in Peru (7.2 per 100,000 population), resulting in about 10 deaths (0.043 per 100,000 population) during the years 2000-2015. The increase in incidence was significant. However, after a steady increase until 2011, the incidence https://doi.org/10.1371/journal.pntd.0005662.g018 tends to stabilize or even to decrease slightly in recent years (R 2 = 0.739). The highest incidence was observed in the Amazon region, while the incidence in the coastal region and the south of the country was low (Fig 20). The seasonal incidence is constant for most of the year with a net decrease in the middle of the dry season (mainly from June to September). Saint Lucia There was no information about Saint Lucia. However, the epidemiological situation should be comparable to that of Martinique, which corresponded to about ten bites per year (6 per 100,000 inhabitants) and one death every 5 to 10 years (0.1 per 100,000 inhabitants). Bothrops caribbaeus, a species close to B. lanceolatus, is endemic to the island [8; 46]. Suriname Notification of snakebites was not mandatory in Suriname and no information on envenomation has been found. Based on the situation in French Guiana, the annual number of snakebites can be estimated at 135 (25 per 100,000 inhabitants) and the number of deaths at 5 deaths (0.9 per 100,000 inhabitants). 
Snakebites in the Americas Trinidad Notification was not mandatory in the island of Trinidad for which there was no information on snakebites. Based on the data collected in coastal Venezuela and Guyana, it can be expected 130 snakebites (10 per 100 000 inhabitants) and 1 to 2 deaths (0.1 per 100 000 inhabitants) each year. Four poisonous species occur in Trinidad: Micrurus lemniscatus and M. circinalis, both Elapids, and Bothrops atrox and Lachesis muta that are vipers. M. circinalis and M. fulvius are present in some Bocas islands. There is no Elapidae or Viperidae in Tobago [8]. United States of America The notification of snakebites in the US was old but hardly available online. Several sources were used and the data were regularly reported in the literature [47][48][49][50][51][52][53][54][55][56]. These data were based on notifications from separate systems but were consistent and highly convergent. Between the late 1950s and early 2000s, incidence decreased by half (3.6 versus 1.7 per 100,000 population) as a result of both the reduction in the number of bites (6,680 in 1959 versus 4,735 in 2005) and the increase in population (185 million versus 285 million). The reduction in incidence concerned most of the States, particularly in the southern and eastern US (Fig 21). However, using the National Electronic Injury Surveillance System, Langley et al. [56] estimated the number of snakebites (including from non-venomous snakes) to be close to 9,200 on average per year over the period 2001-2010. The number of bites for which the species was identified as venomous would be more than 2,800 per year. Furthermore, Morgan et al. [57] reported 97 health deaths from 1979 to 1998, i.e. 4.85 on average per year (0.002 per 100,000 population). The population at risk was predominantly composed of people whose age is between 10 and 50 years. However, the age-specific incidence showed a peak in teenagers (incidence higher than 5 bites per 100,000 young people aged 10-14 years) and then a steady decrease in adults to about 2 bites per 100,000 Subjects over 65 years of age. The sex ratio (M/F) was 2.7. Most bites occurred from late spring to fall [53]. However, the information provided by the various databases did not detail whether the bites were accidental or illegitimate, the latter probably more frequent in USA, and not seasonal. Uruguay Notification of snakebites was mandatory but data were not available online. However, the Ministry of Health published a summary report on snakebites between 1986 and 2001 and a second on the cases of 2010 and 2011. Despite the lack of information between 2002 and 2009, the incidence was likely to be stable. There are nearly 80 snakebites annually (2.4 per 100,000 population) and 2 deaths (0.033 per 100,000 population). The geographical distribution showed a very high incidence in the eastern part of the country, high in the west and low in the south, especially in the Montevideo region (Fig 22). The age-specific incidence was the highest in young subjects between 15 and 30 years of age. The sex ratio was highly imbalanced in favor of man (M/F = 4.9). The seasonal incidence showed a marked increase in the spring-summer period (October to April) with a peak in March (average cases twice higher than those of other summer months). Venezuela The reporting of snakebite incidence and mortality has been mandatory since 1995 and has been available online since 1996 and 1995 respectively [58]. 
From 1995-96 to 2012, the average number of snakebites and deaths was 5,700 (20 per 100,000 population) and 32 (0.1 per 100,000 population) a year, respectively. Incidence increased from 1996 to 2006 (R 2 = 0.7194) and then drastically decreased until 2011 (R 2 = 0.9576, the last available year. The overall trend is slightly decreasing from 1996 to 2011 (R2 = 0.1507). A possible explanation could be deterioration in the collection of data after 2010 but it is not excluded that changes in economical activities induced a lower snakebite risk. The geographical distribution was relatively homogeneous (Fig 23). There is a correlation between the mean incidence of snake bites and population density (R 2 = 0.6568). Interestingly, the incidence was likely to be underestimated-compared to data from other countries-in some states of the Amazon region, which could be due to either low performances of case reporting system or peculiar treatment seeking behavior by patients, both linked to poor health care offer. Mortality was relatively constant over time [59]. However, the relative risk of death as a function of age was roughly constant from childhood to adulthood up to 40 years (between 0.05 and 0.09 per 100,000 subjects of each age group) and rose in older people to exceed 0.5 per 100,000 population above 60 years of age. Discussion and conclusion Every year, near 60,000 snakebites (6 per 100,000 inhabitants) are managed by the health services of the Americas. Despite the lack of mortality data in a few countries, most of which are small and poorly populous, the total number of deaths can be estimated at 370 per year (0.04 per 100,000 inhabitants), based on the data from the neighboring countries and risk factors described below. The previous epidemiological estimates, based mainly on medical and scientific literature, mentioned greater numbers of snakebites: about 115,000 [84,110-140,981] with 2,000 deaths [652 -3,466] in the study by Kasturiratne et al. [2] and even 150,000 snakebites of which 5,000 deaths in Chippaux's one [1]. The number of bites did not decreased in the last twenty years (see below), in contrary of deaths. These figures were therefore overestimated, which can be https://doi.org/10.1371/journal.pntd.0005662.g023 explained by the highly biased epidemiological source of information. Indeed, most authors who publish epidemiological or clinical studies on snakebites report facts upon regions with high incidence-or severity-of envenomation that are often poorly representative [60]. Nevertheless, the general incidence is much lower than in Asia or Africa [1; 2; 61], excluding for particular regions such as the Amazon. However, mortality remains moderate, except in enclosed or poorly equipped areas. Most of the data collected in this study comes from the Ministries of Health of the concerned countries. Until now, epidemiological surveys were needed to obtain information that was most often limited geographically according to the constraints and choices of the investigators. Sometimes methodological biases, particularly in site selection, led to approximations or significant errors in the estimation of the incidence or severity of envenomations [62]. For the past decade, mandatory reporting of snakebites resulted in better epidemiological data in most countries of the Americas. Mandatory reporting of cases allows covering a country as a whole rather than a few sites chosen by the investigators, leading to poorly representative figures. 
However, data gaps and limitations are still observed resulting from a poor surveillance system. On the one hand, it is expected that over time the data collection will improve and on the other hand the standardization of the questionnaires will make it possible to have more robust, reliable and complete information. For example, useful, often missing data, particularly severity, treatment (brand and dose of antivenom) and clinical outcomes (mortality, sequelae) need to be collected, which is not currently the case in most situations. However, in some countries (Brazil, United States), these data are available, showing that such a goal is feasible. It is rarely stated whether the notification of snakebites included asymptomatic bites, which is probably the case in most countries. Asymptomatic snakebites may result either from a bite by a non-venomous snake or a venomous one that did not inject venom (dry bite). According to the countries and authors, asymptomatic snakebites represent between 10 and 40%, about one third of which are dry bites [7; 63; 64]. As a consequence, the comparison with the recent literature has been very useful for, a) confirming (or supplementing) the data from other sources and, b) providing additional information, in particular on the clinical severity of envenomations, details on circumstances of the bite or implementation of the treatment. It was emphasized that the notification was not very precise and reliable, at least variable from one country to another. However, the reporting system improves over the time and, of course, provides a minimal-conservative-incidence of snakebites seen by healthcare institutions from which it can be inferred treatment needs, especially antivenoms. The increase in incidence observed in some countries (Bolivia, Brazil, Colombia, Mexico, Peru, Venezuela) can be attributed to an improvement in data collection, particularly in the early years of its implementation. The stabilization or reversal of the upward trend confirms this. However, environmental (e.g. reduction of snake population) or demographic (population migration to urban centers with low snakebite risk (see below)) causes should not be underestimated. It is notable, for example, that the incidence is often similar on both sides of a border between two neighbor countriesdespite likely differences in data collection efficiency -, reflecting a constant figure regarding both risk and population reaction to the snakebite. Actually, administrative policies are different on each side of the border, but populations are often the same on the both sides. . . It is known, for example, that many patients prefer to use alternative medicine rather than a modern treatment provided by health center. This occurrence is poorly addressed in Latin America, but it probably plays a significant role in underestimating the incidence and possibly severity (mortality) of envenomations. However, some inconsistencies can be explained either by different environmental conditions affecting the risk factors mentioned below, or by significant differences in the quality of the notifications. The report still suffers from inadequacies, resulting in underestimations of snakebite incidence and mortality in some regions of Latin America [44]. The geographical distribution of the incidence was heterogeneous: it was higher in the intertropical region and in developing countries. The incidence depends mainly on environmental and anthropic factors that are detailed below. 
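Several of the associations invoked below (incidence versus population density, altitude or other environmental factors) are summarised in the Results by correlation coefficients and R² values. As a purely illustrative sketch on invented department-level data, the computation looks as follows; scipy is assumed to be available.

```python
# Toy illustration (made-up data) of how the incidence/population-density association
# discussed here is quantified with a Pearson correlation and its R^2.
import numpy as np
from scipy import stats

density = np.array([3, 8, 15, 40, 90, 250, 600])       # hypothetical inhabitants per km^2
incidence = np.array([45, 30, 22, 12, 7, 3, 1.5])      # hypothetical snakebites per 100,000 per year

r, p_value = stats.pearsonr(density, incidence)
print(f"Pearson r = {r:.2f}, R^2 = {r**2:.2f}, p = {p_value:.4f}")
# A strongly negative r here would mirror the inverse relationship reported for Brazil [20];
# in practice, log-transforming such skewed variables is often preferable.
```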
The number of deaths appeared to be more difficult to determine due to the lack of notification in several countries. However, these countries generally cover sparsely populated regions, which limits their impact on the total result. We proposed here a reasonable estimate for each of these countries, at the risk of a small error. Basically, incidence results from the encounter between a person and a snake. It is therefore legitimate to consider the activities and presence of the former as well as the behavior of the latter. It is difficult to explain what affects snakebite incidence because of the complexity of the possible causes and their interactions, such as the biology of animal populations composed of many species, or the demographics of human populations, which depend on many social, economic and environmental factors. The coefficient of determination R2 indicates the proportion of the variance in the dependent variable that is predictable from the independent variable, i.e. it gives some information about the goodness of fit of a model. The closer R2 is to 1, the better the data match the model, but this does not mean the model is relevant. Incidence tends to grow mechanically as a function of demography, although there is a partial offset related to anthropization of the environment, which reduces snake populations and/or snake-human contacts. In addition, the proximity of human populations to the natural environment explains a greater frequency of encounters with snakes. As a consequence, snakebites usually occur in rural areas during agricultural activities, especially in developing countries where farming is an important and weakly mechanized economic activity. Population density was sometimes inversely correlated with the incidence of bites, as in Brazil [20], suggesting that a high human presence limits the development of snake populations. However, other reasons may locally explain the inverse correlation, e.g. when the human population remains large while snakes do not encounter favorable conditions for their development. For instance, altitude and harshness of the climate appeared to have a negative impact on snake populations, as shown in Bolivia or El Salvador, and in Canada or Argentina, respectively. Isolated areas are the most affected, mainly due to the lack of good roads linking them to urban centers and to activities of the population performed in precarious conditions (forestry, subsistence agriculture and hunting, among others). These circumstances increase both the likelihood of encounters with snakes and the difficulty of receiving timely medical help. As a consequence, the scarcity of health centers is a factor that indirectly influences the incidence of snakebites and directly (and significantly) affects the clinical outcomes of envenomations [33; 44; 65]. The abundance of snakes, especially species that inhabit cultivated or settled areas and sometimes even reproduce there, varies according to climatic (heat and humidity) and environmental (vegetation and landscape) factors that determine food supply, both qualitative and quantitative, and camouflage opportunities [66]. While some species established in natural environments such as the Amazon rainforest (e.g. Bothriopsis taeniata) are absent or rare in anthropogenic areas, others come close to human settlements and may even thrive there [67], at least to some extent. Some species of Crotalus, e.g. C. viridis or C. 
oreganus in the USA [68; 69], or of Bothrops, such as Bothrops asper in Costa Rica [36], are attracted to anthropogenic areas where they find their food. Ecological niche modeling (ENM) makes it possible, using appropriate algorithms, to predict the geographic distribution of a species from climatic and environmental data. Yañez-Arenas et al. [70] used ENM to assess the potential distributions of several species of rattlesnakes in Veracruz and to associate them with a prediction of abundance estimated by the distance from the niche centroid (DNC). These authors found a significant inverse relationship between snakebites and the DNCs of two common vipers (Crotalus simus and Bothrops asper), partially explaining the variation in the incidence of snakebites. Moreover, the DNCs of the two vipers, combined with the marginalization of human populations, accounted for 3/4 of the variation in incidence. Thus, several factors - environmental, socio-economic and sanitary - contribute to explaining the incidence of snakebites. Populations at risk were very similar in most countries. While children and teenagers constituted an important part of the population, sometimes the majority in developing countries, they were not the most frequently bitten. The population at risk was predominantly composed of young men between the ages of 15 and 45, living in rural areas and bitten during agricultural activities. This may explain why bites occur most often during hot (summer) and wet (rainy season) periods, usually at harvest time. The severity of the envenomation, in particular mortality, is related to the species, but also to the size, of the snake responsible for the bite, which determine the composition of the venom and the quantity injected, respectively [14; 15; 71]. This explains why some snakebites are asymptomatic, when the snake is not venomous or when it does not inject its venom [6; 7; 63; 64]. It is more difficult to explain some of the factors identified by Jorge et al. [71], such as the season or time of day. This may be due to a particular distribution of species within stands, varying in time and space according to their ecological tropisms. The age of the patient appeared to be a risk factor, especially at both ends of life, in children and elderly persons, who are a priori more vulnerable [72]. However, as we have seen above, children are not the most exposed. In addition, the mortality and the incidence of complications - most notably the sequelae - depend on the management of snakebites, i.e. the health care system as a whole (number and distribution of health facilities, equipment, access to antivenoms and adequacy of therapeutic protocols, skill of health personnel, etc.). For example, the significant decline in mortality in many countries - particularly in Costa Rica [30][31][32], Ecuador [34], Mexico [43] and Venezuela [59], while the number of snakebites in these countries remained stable or even increased - can be attributed to better management of snakebites, notably through the improvement of primary health care and access to medical services, including the availability of antivenoms. However, other factors may also affect the mortality and severity of envenomations, such as the availability of health centers and treatment, which may be very irregular, particularly in remote areas where the activities of the indigenous population are often very close to nature. The resulting delay in treatment may compromise the clinical course of envenomation. 
Nevertheless, treatment-seeking behavior is complex, and many patients, particularly in remote areas, still use traditional medicine. The latter should be combined with modern medicine in order to define relevant recommendations that do not place them in competition but instead optimize therapeutic approaches, so as to avoid the complications and disabling sequelae that are still frequent. This study summarized the burden and epidemiological characteristics of snakebites on the American continent. The incidence and severity of envenomation appeared to be lower than previously assessed, although many risk factors were already known and studied. This work showed the importance of mandatory reporting of snakebites for improving their management, provided that health authorities endorse, analyze and exploit the data. It therefore seems necessary to continue this effort, improve the case reporting system and take the measures that can be inferred from the analysis of the available information.
2018-04-03T01:44:24.359Z
2017-06-01T00:00:00.000
{ "year": 2017, "sha1": "43095d48d1451ce7eee4fe558d9d833c71927bda", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0005662&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "43095d48d1451ce7eee4fe558d9d833c71927bda", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
212796913
pes2o/s2orc
v3-fos-license
Rational drug design, synthesis, and biological evaluation of novel chiral tetrahydronaphthalene-fused spirooxindole as MDM2-CDK4 dual inhibitor against glioblastoma Simultaneous inhibition of MDM2 and CDK4 may be an effective treatment against glioblastoma. A collection of chiral spirocyclic tetrahydronaphthalene (THN)-oxindole hybrids for this purpose has been developed. Appropriate stereochemistry in THN-fused spirooxindole compounds is key to their inhibitory activity: selectivity differed by over 40-fold between the least and most potent stereoisomers in time-resolved FRET and KINOMEscan® in vitro assays. Studies in glioblastoma cell lines showed that the most active compound, ent-4g, induced apoptosis and cell cycle arrest by interfering with the MDM2-P53 interaction and CDK4 activation. Cells treated with ent-4g showed up-regulation of proteins involved in P53 and cell cycle pathways. The compound showed good anti-tumor efficacy against glioblastoma xenografts in mice. These results suggested that rational design, asymmetric synthesis and biological evaluation of novel tetrahydronaphthalene-fused spirooxindoles could generate promising MDM2-CDK4 dual inhibitors for glioblastoma therapy. Introduction Glioblastoma is a malignant disease associated with poor prognosis and few treatment possibilities. The disease involves deregulation of the P53 and cell cycle signalling pathways 1-4. Our analysis of genomic alterations in glioblastoma, according to data in the Cancer Genome Atlas (TCGA), identified the q13-15 region of chromosome 12 as one of the regions most often rearranged in the disease (Fig. 1A and B) 5-7. This region encodes the P53-interactor murine double minute 2 protein (MDM2) and cyclin-dependent kinase 4 (CDK4). We also verified both genes to be significantly overexpressed at the mRNA and protein levels in patients with glioblastoma, regardless of P53 mutation status (Fig. 1C-E). Extensive efforts have already been made to develop small molecules that can disrupt the interaction between MDM2 and P53 in order to unleash the latter's anti-tumor activity. A diverse array of privileged scaffolds has been discovered, including derivatives of imidazoline, piperidinone, benzodiazepine, chromenotriazolopyrimidine, terphenyl, isoindolinone and pyrrolidine 8-20. Some of these derivatives have advanced to clinical trials for the treatment of breast cancer, leukemia, lymphoma and glioblastoma. Spirocyclic oxindoles have recently been patented as a newly identified type of P53-MDM2 inhibitor (Fig. 2A) 21-26. While N-, O- and S-containing heterocyclic substitutions have been extensively explored to generate novel C3-spirooxindole inhibitors of the P53-MDM2 interaction, the investigation of all-carbocycle modifications at the C3 position as potent MDM2 inhibitors remains underdeveloped. CDK4, one of the main controllers of cell cycle entry, is substantially overexpressed in glioblastoma, breast and ovarian cancers, making it an attractive therapeutic target 27-29. Some recent efforts have generated promising leads by targeting compounds to allosteric binding sites in CDKs 30-33. The allosteric pocket varies among CDKs, in contrast to the highly conserved ATP-binding site. Planar naphthalene derivatives can dock well into the narrow allosteric binding site of CDK4, making them a privileged scaffold for generating subtype-selective inhibitors (Fig. 2B) 34,35. 
Analysis of CDK expression and mutations in glioblastoma samples in the TCGA database indicates that CDK4 is the most often overexpressed CDK in the disease, and it is overexpressed in over half of patients whose glioblastoma is associated with mutations in P53 (Fig. 1D). These results suggest that simultaneous inhibition of both MDM2 and CDK4 may be effective against glioblastoma 36-39. Moreover, co-amplification of MDM2 and CDK4 has been reported in several types of cancer, including sarcoma, glioblastoma, bladder cancer and gastric cancer 40-50. Although simple combination therapy with MDM2 and CDK4 inhibitors was recently reported in preclinical experiments, two independent reports demonstrated paradoxical results in sarcoma 51,52. In addition, Klein et al. 53 reported that palbociclib-induced senescence resulted in MDM2 downregulation in cancer cells. These results indicate that the regulatory mechanisms between MDM2 and CDK4 may be more complicated than previously thought. Therefore, we aimed to develop scaffolds for dual inhibitors of both proteins that could avoid resistance due to P53 mutation and that could bind CDK4 selectively to avoid off-target effects. After analysing the binding modes of known MDM2 inhibitors and CDK4 inhibitors, we speculated that fusion of the planar tetrahydronaphthalene (THN) ring at the C3-position of oxindole might generate a scaffold that could bind at the P53-binding site in MDM2 as well as at the allosteric site in CDK4. We started with THN 54-61 and spirooxindole derivatives 62-72 because they are privileged drug-like architectures, so the resulting THN-fused C3-spirooxindoles should possess good drug-likeness (Fig. 2C). Here we rationally designed and asymmetrically synthesized a series of chiral THN-spirooxindole-based MDM2/CDK4 dual inhibitors, which showed promising anti-glioblastoma activity in vitro and in vivo. In particular, compound ent-4g displayed good CDK selectivity: it showed a nanomolar IC50 against CDK4, a micromolar IC50 against CDK2, and no appreciable inhibition of other CDKs or kinases. The novel compound inhibited proliferation and induced apoptosis in glioblastoma cell lines expressing wild-type or mutated P53, and it inhibited the growth of glioblastoma xenografts expressing mutant P53 better than the MDM2 inhibitor nutlin-3a alone or together with palbociclib. Results and discussion 2.1. Rational design and synthesis of chiral THN-fused spirooxindoles as dual inhibitors of MDM2 and CDK4 In spirocyclic oxindole-based MDM2 inhibitors, the oxindole fragment occupies the Trp23-containing cleft of P53, and appropriate stereochemistry is critical for good binding affinity 73,74. Therefore, we focused on the asymmetric synthesis of optically pure C3-spirooxindoles 75-78. We started from hydronaphthalene 79-84 and spirooxindole 10,22,23,25,65 because they are privileged frameworks occurring in many anti-tumor natural products and pharmaceuticals. The combination of privileged frameworks can facilitate molecular diversity and the discovery of lead compounds 85,86. We knew that the spirocyclic oxindole inhibitor would have to fit within the flat, narrow allosteric pocket of CDK4. Preliminary docking studies and integrative molecular simulations suggested that an inhibitor bearing a planar THN would bind well to CDK4 and MDM2. 
In the CDK4 allosteric site, the scaffold could interact with surrounding hydrophobic residues and residues in the DFG-loop, avoiding interactions with the highly conserved ATP-binding site that might reduce selectivity for CDK4 87. At the P53-binding site in MDM2, the THN-fused C3-spirooxindole could form hydrogen bonds and hydrophobic interactions mimicking Phe19, Trp23 and Leu25 of P53. In fact, introducing a hydrogen bond acceptor and electron-withdrawing group (EWG) onto the THN would allow formation of a hydrogen bond with Thr16, which could strengthen MDM2 binding. Hence, we used the 3-ylideneoxindoles 88-90 (1 and 2) and 2-methyl-3,5-dinitrobenzaldehyde (3a) as substrates to prepare the THN-fused spirooxindole derivatives 91 int-4 and int-4′ through a Michael-aldol cascade reaction promoted by the bifunctional hydrogen-bonding catalyst (1R,2R-catalyst). Next, the protecting groups of int-4 and int-4′ were removed to afford the compounds 4 and the diastereoisomer 4′ (Scheme 1). The screening of reaction conditions, the synthetic methods and the detailed data for int-4, int-4′, 4 and 4′ are contained in the Supporting Information. To explain the diastereodivergence of the organocatalytic Michael-aldol cascade, we also proposed plausible transition-state models based on the observed stereochemistry of the products (Supporting Information Scheme S4). Structure-activity relationships in chiral THN-fused spirooxindoles based on cytotoxicity and enzymatic inhibition assays We assessed the ability of 4a-4p and 4a′-4p′ to inhibit MDM2 and CDK4 using time-resolved fluorescence resonance energy transfer (TR-FRET). As positive control drugs, we used the MDM2 inhibitor nutlin-3a 92 and the CDK4/6 inhibitor palbociclib 93. The inhibition rates for each compound at 1.0 mmol/L were determined (Table 1). At concentrations below 1.0 mmol/L, the inhibition caused by nutlin-3a, 4a-4c and 4i-4p dropped from about 40% to 20%, while palbociclib, 4d and 4g still showed inhibition of 40%-60%. The IC50 values of the most active compounds, 4a-4j, were also measured (Fig. 3A and Supporting Information Table S1). Compounds 4a-4p worked better than compounds 4a′-4p′ at inhibiting the activity of MDM2 and CDK4 as well as the proliferation of glioblastoma cell lines. Among the more active compounds 4a-4j, derivatives 4d and 4g, with a halogen at the 5-position of the oxindole, showed the greatest MDM2 inhibition and cytotoxicity. Although 4d and 4g inhibited MDM2 and CDK4 less than nutlin-3a and palbociclib, all compounds showed similar cytotoxicity against the tested glioblastoma cell lines based on the MTT assay. At high concentrations, all compounds showed good inhibition of two cell lines expressing mutated P53 (T98G and U251) and one cell line expressing wild-type P53 (U87MG, Fig. 3B, C). It was notable that the cell proliferation inhibitory potencies of compounds 4d-4h in U87MG cells were better than those in T98G and U251 cells, which suggested that only activation of wild-type P53 can suppress glioblastoma cell proliferation. Focusing on 4d and 4g as the most active compounds in these bioactivity screens, we explored their structure-activity relationships and mechanisms of bioactivity. According to the above methodology (Scheme 1), the corresponding enantiomers ent-4d/ent-4g and ent-4d′/ent-4g′ were synthesized using the 1S,2S-catalyst, and four of the eight possible stereoisomers of 4d and 4g were obtained with high stereoselectivities (Scheme 2). 
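The IC50 values referred to above (and obtained from the serial dilutions described in the next paragraph) are conventionally extracted by fitting a four-parameter logistic (Hill) curve to percent-inhibition data. The sketch below shows this with SciPy; the concentrations and inhibition values are invented for illustration and do not reproduce the TR-FRET measurements reported in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) model for % inhibition vs. concentration."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

# Invented dose-response data (concentrations in nmol/L, % inhibition);
# not measurements from the assays described in the text.
conc = np.array([5, 50, 500, 5_000, 50_000], dtype=float)
inhibition = np.array([5.0, 18.0, 48.0, 80.0, 95.0])

popt, _ = curve_fit(four_pl, conc, inhibition, p0=[0.0, 100.0, 500.0, 1.0])
bottom, top, ic50, hill = popt
print(f"estimated IC50 = {ic50:.0f} nmol/L (Hill slope {hill:.2f})")
```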
Selected isomers of these compounds were serially diluted from 50 mmol/L to 5 nmol/L and tested against MDM2 and CDK4 in TR-FRET assays (Supporting Information Fig. S1). We also tested the isomers against glioblastoma cell lines expressing wild-type P53 (U87MG) or mutated P53 (U251, Table 2). The isomers ent-4d and ent-4g inhibited the growth of U87MG cells to a greater extent than nutlin-3a or palbociclib, and they inhibited the growth of U251 cells better than palbociclib. The strong cytotoxicity of ent-4g against glioblastoma cells expressing mutated P53 is consistent with its low IC50 values against MDM2 and CDK4. Compound ent-4g was chosen for further bioassays and mechanistic studies. The KINOMEscan® method was used to determine the kinase selectivity of ent-4g against a panel of 99 kinases in parallel (Fig. 3D and Supporting Information Table S2) 94. The compound caused negligible or minimal inhibition of most kinases other than CDK4-cyclinD1, CDK4 and CDK2. In the case of CDK2, 4% of control protein remained after competitive binding of 100 nmol/L ent-4g (0.8% for CDK4-cyclinD1 and 2.2% for CDK4), probably because CDK2 shares 66% sequence identity with CDK4. These results suggest that ent-4g can be regarded as a specific MDM2/CDK4 inhibitor. Structural basis of 4g isomer binding to MDM2 and CDK4 Molecular docking and dynamics studies were conducted to gain potential insights into how 4g/4g′ and ent-4g/ent-4g′ bind to MDM2 and CDK4 (Supporting Information Fig. S3). Molecular simulations were conducted for 100 ns, and binding free energies were calculated using the MM/GBSA method (Supporting Information Table S3) 95. As references, we examined the co-crystal structure of MDM2 with SAR405838 (PDB ID: 5TRF) 96 and a homology model of CDK4 complexed with the allosteric inhibitor 8-anilino-1-naphthalene sulfonate (ANS), based on the crystal structure of CDK2 with ANS (homology model generated from PDB ID 3PXZ) 97. Fig. 3E reveals differences in how 4g/4g′ and ent-4g/ent-4g′ are predicted to bind to their target sites. The binding conformation differed substantially between 4g/4g′ and ent-4g/ent-4g′; during the dynamics simulation, 4g moved to another ANS binding site, 4g′ moved to the ATP binding site and ent-4g′ moved to the hydrophobic pocket. Fig. 3F compares how ent-4g is predicted to bind to the target sites with how SAR405838 and ANS bind. These analyses suggest that ent-4g mimics P53 residues Phe19, Trp23 and Leu25 in interacting with MDM2, and that the compound forms a stable hydrogen bond with MDM2 residue Thr16 (Fig. 3G), which has never been reported before. In our simulations, compound ent-4g formed hydrophobic interactions with a pocket formed by Val57, Gly160, Leu161 and Ile164, maintaining the DFG-loop in an "out" conformation 98-100. Binding of ent-4g to the CDK4 allosteric pocket is predicted to depend on π-π stacking between the oxindole ring of ent-4g and Phe93, as well as electrostatic interactions between the nitro group of ent-4g and Arg61 (Fig. 3F and G) 101. (Scheme 1: Preparation of 4 and 4′ for bioactivity screening. Int = intermediate.) The contributions of single amino acid residues in the MDM2 substrate-binding pocket were decomposed using computational alanine scanning, which relies on the assumption that local changes in the protein do not significantly influence the overall conformation of the complex. 
The 14 residues lining the walls of the MDM2 substrate-binding pocket were each mutated to alanine in turn, based on the simulation trajectory of the wild-type MDM2-inhibitor complex, and the results are shown in Fig. S3. As expected, mutation of key binding residues resulted in a significant increase in binding free energies, indicating disrupted inhibitor-residue interactions. The largest changes in binding free energy came from mutating Leu54 to alanine in both the ent-4g and SAR405838 complexes with MDM2; the contributions of Thr16 in the ent-4g complex and of Lys94 in the SAR405838 complex were also stronger than those of the other residues (>4.0 kcal/mol). The computational alanine scanning results also confirmed the binding modes of ent-4g suggested by molecular docking and MD simulation. Ent-4g inhibits U251 glioblastoma cell proliferation by altering cell cycle progression and P53 signalling To further elucidate the molecular mechanism of ent-4g, U251 glioblastoma cells were incubated with the compound, and then changes in gene expression were analysed globally using an Illumina HiSeq4000 platform (Novogene Co., Ltd., Beijing, China, Fig. 4A and Supporting Information Fig. S4) 102. Enrichment analysis using integrated GO 103, KEGG 104 and Biocarta 105 revealed significant alteration in the cell cycle and P53 signalling pathways, as shown in the KEGG pathway enrichment results (Fig. 4B). To identify the subroutine of programmed cell death induced by ent-4g, we treated the two glioblastoma cell lines with the compound, then assessed their cell cycle distribution via propidium iodide staining with flow cytometry, as well as apoptotic levels using Annexin V-FITC/PI dual staining (Keygen, Nanjing, China). The compound induced significant apoptosis and cell cycle arrest in G1 phase in both cell lines (Fig. 4C and D). In addition, ent-4g increased the proportion of glioblastoma cells showing hyper-condensed, apoptotic nuclei based on Hoechst 33342 staining (Beyotime, Shanghai, China, Supporting Information Fig. S5). The compound treatment triggered an increase in MDM2, P53 and P21 levels (Fig. 4E and F). Like palbociclib, ent-4g inhibited autophosphorylation of CDK4 and phosphorylation of retinoblastoma (RB) in U251 cells (Fig. 4E). In fact, the compound stimulated BAX to a greater extent than nutlin-3a did, and it activated more cleavage of caspase-3 than palbociclib did. To complement these in vitro assays, we treated U251 glioblastoma xenografts in mice with ent-4g. Animals were analysed at 21 days after oral administration of ent-4g, nutlin-3a or palbociclib (Fig. 5A-C). All treatments potently inhibited tumor growth, with ent-4g showing significantly greater effects than the reference drugs. This anti-tumor activity was associated with up-regulated expression of MDM2, P53 and P21, as well as inhibition of CDK4 and RB phosphorylation (Fig. 5D and E). Treatment with ent-4g was also associated with significantly reduced Ki-67, which serves as a proliferation marker with prognostic and predictive potential in glioblastoma, and a significantly higher number of TUNEL-positive apoptotic nuclei. Despite these anti-tumor effects of ent-4g, hematoxylin and eosin staining of tissue sections from the main organs after treatment indicated no severe toxic effects (Supporting Information Fig. S6). 
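For clarity, the per-residue quantity reported by the computational alanine scanning is, in the usual MM/GBSA convention (not a formula stated explicitly in the text), the change in computed binding free energy upon mutating one residue to alanine:

```latex
\Delta\Delta G_{\text{bind}} = \Delta G_{\text{bind}}^{\text{Ala mutant}} - \Delta G_{\text{bind}}^{\text{wild type}}
```

so large positive values, such as the >4.0 kcal/mol changes reported for Leu54 and Thr16, flag residues whose side chains contribute strongly to inhibitor binding.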
Moreover, compound ent-4g displayed good stability in a human liver microsome assay, with over 90% of ent-4g remaining after 10 min of incubation with 1 mg/mL protein at 37 °C, and its half-life in the human liver microsome assay was 46.5 min. The tumor and plasma concentrations of compound ent-4g in mouse xenograft models were measured after four daily doses of i.p. administration. The pharmacokinetic data (Table 3) indicated that ent-4g distributed well into tissues (apparent Vss of 7.36 L/kg) with a moderate plasma clearance rate (1.21 L/kg/h) after i.v. injection at a dosage of 7.5 mg/kg, and the absolute oral bioavailability of ent-4g was around 30%. Conclusions In summary, we have discovered the THN-fused spirooxindole derivative ent-4g as a potent inhibitor through rational drug design and asymmetric synthesis of the designed compounds. The compound ent-4g showed a strong ability to inhibit both MDM2 and CDK4 in glioblastoma cells expressing wild-type or mutant P53. Molecular dynamics simulations indicate that the compound ent-4g binds tightly to MDM2 and CDK4. Ent-4g could induce significant apoptosis and cell cycle arrest in G1 phase by up-regulating MDM2, P53 and P21 levels, reducing Ki-67, inhibiting phosphorylation of CDK4 and RB, and increasing the number of TUNEL-positive apoptotic nuclei. (Scheme 2: The preparation of compounds ent-4d, ent-4g, ent-4d′, and ent-4g′. Ent = enantiomer.) The compound also strongly inhibited the growth of glioblastoma xenografts in mice. The approach presented here may be useful for discovering novel MDM2/CDK4 dual inhibitors and generating leads for the treatment of glioblastoma and many other cancers. Chemistry Nuclear magnetic resonance (NMR, Bruker-400 MHz, Bruker Corporation, Karlsruhe, Germany and JEOL-600 MHz, JEOL, Tokyo, Japan) data were obtained for 1H at 400 MHz and for 13C at 100 MHz, or for 1H at 600 MHz and for 13C at 150 MHz. The mixture was stirred at 0 °C until the reaction was completed based on TLC. The reaction was quenched with aqueous NaHCO3 and CH2Cl2. The organic layer was dried over Na2SO4 and concentrated. The residue was purified by chromatography on silica gel to give the major isomer product int-4a. Next, to a solution of int-4a (50 mg) in CH2Cl2 was added HCl/EtOAc (5-10 mL) at room temperature until the reaction was completed based on TLC. The reaction was quenched with EtOAc and water. The organic layer was dried over Na2SO4 and concentrated, and the residue was purified by chromatography on silica gel to give the deprotected spiro-oxindole derivative 4a as a white solid in 84% yield (30.1 mg, 0.07 mmol) after flash chromatography. The compounds 4b-4q were prepared according to the synthetic method for 4a. (4 mL) was added TMSCl (25.9 µL, 0.3 mmol) and imidazole (45.8 mg, 0.6 mmol). The mixture was stirred at 0 °C until the reaction was completed based on TLC. The reaction was quenched with aqueous NaHCO3 and CH2Cl2. The organic layer was dried over Na2SO4 and concentrated. The residue was purified by chromatography on silica gel to give the major isomer product int-4a′. Next, to a solution of int-4a′ (50 mg) in CH2Cl2 was added CF3COOH (5 eq.) at room temperature until the reaction was completed based on TLC. The reaction was quenched with EtOAc and water. The organic layer was dried over Na2SO4 and concentrated, and the residue was purified by chromatography on silica gel to give the TMS-deprotected intermediate product. 
Further, potassium tert-butoxide (1.03 mL of a 1 mol/L solution in THF) was added to an aerated solution of the Bn-protected THN-fused spirooxindole intermediate in DMSO (3.0 mL) at room temperature. After 20 min, 1 mol/L HCl was added, followed by sodium hydrogen carbonate solution to give a pH-neutral solution. The solution was then diluted with brine, extracted with ethyl acetate (4 × 20 mL) and evaporated under reduced pressure 105. The residue was purified by flash chromatography to give the deprotected spiro-oxindole derivative 4a′ as a white solid in 76% yield. Computational The Accelrys Discovery Studio (DS3.5, Accelrys, San Diego, CA, USA) was utilized for homology modeling of CDK4 bound to allosteric inhibitors. Optimization of the initial model, equilibration, interaction free energy calculations (MM-GBSA) and computational alanine scanning were then carried out with the standard molecular dynamics protocol in the AMBER12 package with the MMFF94 force field. The detailed computational procedures and parameters are provided in the Supporting Information. Cell proliferation, apoptosis and Western blotting assays The glioblastoma cell lines U87MG, U251 and T98G were obtained from the ATCC (American Type Culture Collection, Virginia, VA, USA) and cultured in the State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, China. Cell proliferation and apoptosis were measured using the MTT method and an Annexin V/PI staining kit (Keygen, Nanjing, China), respectively. Western blot (WB) analysis was performed according to previous reports and the manufacturers' protocols. The experimental procedures are described in the Supporting Information. RNA sequencing and bioinformatics analysis Total RNA from U251 cells with or without ent-4g incubation was extracted using the Trizol reagent (Life Technologies, Carlsbad, CA, USA) according to the manufacturer's protocol. Only samples with RNA integrity values of >8.0 were used for mRNA sequencing on the Illumina HiSeq4000 platform by Novogene Co., Ltd. (Beijing, China). The experimental procedures are described in the Supporting Information. Xenograft models and in vivo evaluations The in vivo antitumor activity and preliminary safety studies of ent-4g were carried out according to the Guidelines for the Care and Use of Laboratory Animals and were approved by the Committee of Ethics of Animal Experimentation of Sichuan University, Chengdu, China. Six- to eight-week-old SPF (specific pathogen-free) nude mice were purchased from Beijing Huafukang Biotechnology Co., Ltd. The in vivo antitumor activity and preliminary safety studies of ent-4g were performed in U251 subcutaneous xenograft models, and the pharmacokinetics of ent-4g were assessed in Sprague-Dawley rats (350-400 g). The detailed experimental procedures are provided in the supplementary materials.
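As a reminder of how the roughly 30% absolute oral bioavailability quoted in the pharmacokinetics paragraph is conventionally derived (standard definition, not a formula given in the paper), dose-normalized oral exposure is compared with intravenous exposure:

```latex
F = \frac{\mathrm{AUC}_{\mathrm{oral}}/\mathrm{Dose}_{\mathrm{oral}}}{\mathrm{AUC}_{\mathrm{i.v.}}/\mathrm{Dose}_{\mathrm{i.v.}}} \times 100\%
```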
2020-01-02T21:55:24.958Z
2019-12-27T00:00:00.000
{ "year": 2019, "sha1": "7c93bd9a0389a8ebfdc60b5ef026a531d55868e6", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.apsb.2019.12.013", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cb7db95b5a1d5eadfafcfbccd740c2eabf465f07", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
244909716
pes2o/s2orc
v3-fos-license
Characteristics of Microbial Community and Function With the Succession of Mangroves In this study, 16S high-throughput and metagenomic sequencing analyses were employed to explore the changes in microbial community and function with the succession of mangroves (Sonneratia alba, Rhizophora apiculata, and Bruguiera parviflora) along the Merbok river estuary in Malaysia. The sediments of the three mangroves harbored their own unique dominant microbial taxa, whereas R. apiculata exhibited the highest microbial diversity. In general, Gammaproteobacteria, Actinobacteria, Alphaproteobacteria, Deltaproteobacteria, and Anaerolineae were the dominant microbial classes, but their abundances varied significantly among the three mangroves. Principal coordinates and redundancy analyses revealed that the specificity of the microbial community was highly correlated with mangrove populations and environmental factors. The results further showed that R. apiculata exhibited the highest carbon-related metabolism, coinciding with the highest organic carbon and microbial diversity. In addition, specific microbial taxa, such as Desulfobacterales and Rhizobiales, contributed the highest functional activities related to carbon metabolism, prokaryote carbon fixation, and methane metabolism. The present results provide a comprehensive understanding of the adaptations and functions of microbes in relation to environmental transition and mangrove succession in intertidal regions. High microbial diversity and carbon metabolism in R. apiculata might in turn facilitate and maintain the formation of climax mangroves in the middle region of the Merbok river estuary. INTRODUCTION Mangroves represent one of the most productive ecosystems in tropical and subtropical estuaries and shorelines. They possess biological resources and play important roles in carbon fixation, erosion mitigation, and water purification (Giri et al., 2011;Brander et al., 2012). They often occur in marine-terrestrial ecotones with obvious geographical and hydrological heterogeneities, leading to interesting sequential species zonation along continuous gradients. The adaptation and succession of mangroves in intertidal regions are speculated to be closely related to microbes in the sediments. Unfortunately, the potential roles of microbes and their functions in mangrove ecosystems are still poorly understood, although changes in vegetation during mangrove succession and how mangrove plants adapt to intertidal environmental adversities have been well studied (Wang et al., 2019;Cheng et al., 2020). In recent years, high-throughput sequencing has offered a comprehensive perspective of microbes (Andreote et al., 2012;Alzubaidy et al., 2016;Lin et al., 2019). Benthic microbial community is highly correlated with soil properties (the depth of soil layer, pH, salinity, and nutrient availability) (Abed et al., 2015;Zhou et al., 2017;Tong et al., 2019) and aboveground plants ( Bardgett and van der Putten, 2014). Previous findings also showed that microbial diversity and function might vary significantly among different mangrove habitats because of environmental transition and mangrove succession (Bai et al., 2013). Mangrove coverage also regulates the structure and composition of microbial community by altering redox conditions and organic carbon levels in the sediments (Holguin et al., 2001). 
Moreover, microbes in sediments also play important biogeochemical roles (e.g., C, N, and S cycles), which can facilitate mangrove survival in intertidal regions (Reis et al., 2017). Owing to the withering and retention of mangrove branches and leaves, mangrove sediments contain a large amount of organic carbon, and most carbon turnover in mangrove ecosystems is carried out by sediment microbes (Alongi, 1988). Benthic microbes may also promote the efficiency of biogeochemical cycles in the sediments, such as C, N, and S cycles (Lin et al., 2019). In addition, anaerobic metabolism can further facilitate the production and consumption of methane and nitrous oxide (Giani et al., 1996;Reis et al., 2017), which can contribute to the emission of greenhouse gases from mangrove wetlands (Rosentreter et al., 2018). However, the responses of microbes to environmental transition and mangrove succession have not been well demonstrated. It is essential to further identify microbial taxa and their metabolic potential in mangrove ecosystems. Thus, the present study aimed to (i) identify the changes in microbial community and diversity among different mangrove habitats; (ii) explore microbial functions and metabolic potentials in mangrove ecosystems; and (iii) explore the potential correlations among environmental factors, mangrove populations, and benthic microbial communities and functions. The purpose of this study was to evaluate the hypothesis that microbial community and function would respond positively to environmental transitions and might contribute to mangrove survival and succession in intertidal regions. Therefore, surface sediments from three mangrove fields (Sonneratia alba, Rhizophora apiculata, and Bruguiera parviflora) were examined using 16S high-throughput and metagenomic sequencing analyses. The implications of this study should be useful for guiding future research on the roles of the microbial community and their functions in mangrove succession. Study Area and Sample Collection Sediment samples were collected on November 25, 2019, in a mangrove reserve located in the Merbok river estuary, Malaysia (Figure 1). In the upper estuary (with lower salinity), the habitats were mainly occupied by S. alba and sporadically mixed with Nypa fruticans, whereas the lower estuary had groves of B. parviflora and sporadic Avicennia genera. In the middle region of the Merbok river estuary, the dominant species was R. apiculata. According to tidal transition and mangrove succession, surface sediments were collected from three mangrove populations (S. alba, R. apiculata, and B. parviflora), and five parallel samples were collected from each mangrove population. The samples were placed in liquid nitrogen and frozen for nucleic acid extraction. The chemical parameters of the sediments were determined and described as follows. Determination of the Chemical Parameters of Mangrove Sediments The sediment properties included salinity, pH, total organic carbon (TOC), total phosphorus (TP), and total nitrogen (TN). Salinity and pH were measured during sampling in the field. The samples were dried naturally and sieved (2 mm). TOC, TP, and TN in the sediments were then analyzed following standard measurements (Huang et al., 2017). Illumina Sequencing of 16S rRNA Gene and Data Analysis Sediment samples (1 g) were weighed, and total DNA was extracted using an E.Z.N.A. R Soil DNA Kit (Omega, Norcross, GA, United States). 
In this study, the V3 and V4 regions of the 16S rRNA gene were sequenced using the 338F (5′-ACTCCTACGGGAGGCAGCAG-3′) and 806R (5′-GGACTACHVGGGTWTCTAAT-3′) primers. PCR was performed in a 25 µl reaction containing the following: 5 × FastPfu buffer (4 µl), 2.5 mM dNTPs (2 µl), 5 U/µl FastPfu polymerase (0.5 µl), 5.0 µM primers (1.0 µl each), and template DNA (10 ng). PCR was performed using the following conditions: pre-denaturation at 95 °C for 3 min; 30 cycles of denaturation at 95 °C for 30 s, annealing at 55 °C for 30 s, and extension at 72 °C for 45 s; and a final extension at 72 °C for 10 min. After amplification, the PCR products were purified and analyzed by paired-end sequencing (2 × 250) using the Illumina MiSeq platform (Illumina, San Diego, CA, United States) according to standard protocols. All sequences were deposited in the National Center for Biotechnology Information (NCBI) Sequence Read Archive (SRA) Database under accession number PRJNA756333. For the paired-end reads obtained by MiSeq sequencing, samples were distinguished according to the barcode information and then merged according to overlap sequences using FLASH (version 1.2.11). After quality control analysis, normalization of the clean data was carried out for operational taxonomic unit (OTU) clustering analysis and species taxonomy analysis. The CD-HIT tool was used to define tags with sequence similarities >97% as OTU clusters. QIIME software (version 1.9.1) was used to analyze the alpha diversities of the sequences, based on the Shannon index and ACE index. A representative sequence (showing the highest default abundance) in each OTU was selected for species classification using RDP classifier software (version 2.11) with the default threshold of 0.8. A two-sided Welch's t-test was used to analyze the statistical significance of differences in microbial community structure between samples. Beta diversity was calculated using unweighted UniFrac, and principal coordinates analysis (PCoA) was performed with R software (version 3.3.1). Redundancy analysis (RDA) was performed using the R software (version 3.3.1) vegan package, and the statistical significance of the RDA was judged by performing the PERMUTEST analysis of variance. Metagenomic Sequencing Analysis Qualified DNA samples were diluted in fragmentation buffer and randomly disrupted using a Covaris M220 ultrasonicator (Covaris, Inc., Woburn, MA, United States). The DNA fragments obtained after disruption were used for library construction. The qualified library was sequenced using the Illumina HiSeq 2500 high-throughput sequencing platform with 2 × 150 paired-end reads. This platform was also used for data configuration and image analysis with HiSeq Control software. Metagenomic data have been deposited in the NCBI SRA Database (PRJNA766709). Raw sequence data were trimmed with fastp (https://github.com/OpenGene/fastp), and low-quality reads, reads shorter than 50 base pairs, N-rich reads, and adapter reads were removed. Sequences with different sequencing depths were assembled using Megahit software, and contigs were obtained using the succinct de Bruijn graph method. MetaGene (Noguchi et al., 2006) was used to predict open reading frames (ORFs) in contigs, and a statistical table of gene predictions was obtained for each sample. The predicted gene sequences of all samples were clustered by CD-HIT, and the longest gene from each cluster was selected as the representative sequence to construct a non-redundant gene set. 
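As a side note on the alpha-diversity step above, the Shannon index computed by QIIME from OTU tables has a simple closed form; the toy Python sketch below illustrates it with invented OTU counts, not the study's data.

```python
import numpy as np

def shannon_index(otu_counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over OTU relative abundances."""
    counts = np.asarray(otu_counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

# Invented OTU count vectors for two sediment samples (not the study's data).
sample_a = [120, 80, 40, 30, 10, 5]
sample_b = [250, 20, 10, 3, 2]
print(shannon_index(sample_a), shannon_index(sample_b))  # higher value = more even, more diverse community
```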
With the use of SOAPaligner (Li et al., 2009), the high-quality reads of each sample were compared with the non-redundant gene set (95% identity) to determine the abundance information for genes in the corresponding samples. BLASTP (version 2.2.28) was used to compare the non-redundant gene set with the NR database (e-value: 1e-5), and species-annotation results were obtained based on the corresponding taxonomy information of the NR database. The Kyoto Encyclopedia of Genes and Genomes (KEGG) was used for functional annotation according to the BLAST results. Community contributions to functions were determined using the NR database annotation. Sediment Physicochemical Parameters The differences in physicochemical properties among the three mangrove populations are shown in Supplementary Figure 1. Overall, the highest TOC and TN levels were detected in the sediments of R. apiculata among the three mangroves studied (p < 0.05). Higher TP and pH, but lower salinity, were observed in the sediments of S. alba than in those of R. apiculata and B. parviflora. 16S rRNA Gene Illumina MiSeq Sequencing Based on Illumina sequencing, 893,587 sequences were obtained from 15 samples. In total, 4,842 OTUs were observed at a 97% similarity level (Supplementary Table 1). In terms of microbial diversity, R. apiculata exhibited the highest OTU number and bacterial richness index among the three mangrove populations (p < 0.05; Figure 2B). Higher Shannon index values were also observed for R. apiculata, although the differences were not significant. The Venn diagram shown in Figure 2C revealed a total of 3,532 OTUs in the sediments associated with R. apiculata, of which 640 were specific OTUs. In addition, 1,645 common OTUs were observed among the three mangroves. As shown in Figures 2D-F, the types and relative abundances of dominant OTUs for R. apiculata and B. parviflora increased significantly compared with those for S. alba. Significant differences were also found in dominant OTUs between R. apiculata and B. parviflora. Bacterial Community Composition in Mangrove Sediments The abundances of bacterial taxa corresponding to the sediments are shown in Figure 3. Dominant bacteria (>5% at the class level, Figure 3A) in mangrove sediments included Actinobacteria, Gammaproteobacteria, Deltaproteobacteria, Alphaproteobacteria, Anaerolineae, Bacilli, and Bacteroidia. However, the abundances of these dominant bacteria varied significantly among the three mangrove populations (Figure 3B). The abundances of Alphaproteobacteria, Deltaproteobacteria, and Bacilli were found to be higher for R. apiculata and B. parviflora, while the highest abundance of Gammaproteobacteria was observed in S. alba among the three mangroves. The dominant bacterial taxa at the order level (Figure 3C) were Bacillales, Rhizobiales, Desulfobacterales, Pseudomonadales, Anaerolineales, and Bacteroidales. The highest abundances of Desulfobacterales were observed in the sediments associated with R. apiculata, followed by B. parviflora and S. alba (Figure 3D). Relatively higher abundances of Bacillales and Rhizobiales were also detected in R. apiculata and B. parviflora than in S. alba. In contrast, the highest abundance of Pseudomonadales was observed in S. alba among the three mangroves. Principal Coordinates Analysis and Redundancy Analysis of Microbial Community The results from PCoA (Figure 4A) showed that microbial communities from the same mangrove population clustered together well. 
RDA was performed to explore the relationships between environmental parameters and microbial community ( Figure 4B). The microbial communities with R. apiculata and B. parviflora were highly correlated with TOC, TN, and salinity, whereas bacteria with S. alba were highly affected by TP and pH. The results (Figure 4C) showed that the dominant microbial taxa were grouped into two clusters. Cluster I, including Chromatiales, Corynebacteriales, Rhodobacterales, and Anaerolineales, showed positive correlations with TP or pH (p < 0.05). Cluster II, including Desulfobacterales, Rhizobiales, and Micrococcales, exhibited positive trends for correlations with TOC, TN, and salinity. More detailed information of the dominant bacterial taxa at family level is shown in Supplementary Figure 2. Metagenomic Analysis of Microbial Community Function in Different Mangrove Sediments Metagenome analysis generated a massive amount of sequence information, ranging from 90,898,652 to 139,622,818 reads among samples. In total, 14,664,335 non-redundant genes were detected in the metagenomes, and 9,005 KEGG Orthogroups (KOs) were identified. The results from Supplementary Figure 3 show a high correlation coefficient between the microbial community and functional diversity, with values of 0.84 and 0.87 for α diversity and β diversity, respectively. The data further illustrated the differences in key metabolic pathways (e.g., carbon-related metabolism) among the three mangroves. The sediments associated with R. apiculata exhibited the highest carbon metabolism, ABC transporters, prokaryote carbon fixation, and methane metabolism among the three mangroves ( Figure 5 and Supplementary Figure 4). Contribution of Microbial Community to Kyoto Encyclopedia of Genes and Genomes Function As shown in Figure 6 and Supplementary Figure 5, Desulfobacterales (e.g., Desulfobacteraceae) and Rhizobiales (e.g., Xanthobacteraceae) were two major contributors to metabolic functions. The contribution of microbial taxa to the function varied significantly among the different mangrove species. Desulfobacterales consistently had the highest contributions to carbon-related metabolism (e.g., carbon metabolisms, prokaryote carbon fixation, methane metabolism, and ABC transporters) in the sediments associated with R. apiculata. When compared with S. alba, Rhizobiales also exhibited higher contributions to carbon-related metabolism in sediments of R. apiculata and B. parviflora. In addition to Desulfobacterales and Rhizobiales, the highest contributions of Micrococcales and Bacillales to metabolic functions were also observed in sediments associated with R. apiculata. In contrast, the lowest contributions of Corynebacteriales, Burkholderiales, and Planctomycetales to the aforementioned metabolic pathways were observed in the sediments associated with R. apiculata. Microbial Diversity Was Highly Affected by Tidal Transitions and Mangrove Succession Significant differences were found in the microbial community among the three mangrove populations along the Merbok river estuary, and specific microbial communities were formed in each mangrove population (Figures 2, 3). Our previous data also indicated that microbial diversity was highly influenced by the structure of mangrove populations (Wu et al., 2016). Root exudates and secondary metabolites, which also could serve as carbon sources and antimicrobial substances, vary significantly among mangrove species (Gao et al., 2003;Koh et al., 2013). 
Changes in root exudates might not only affect benthic microbial density but also strongly affect the structure and function of microbes (Berendsen et al., 2012;Zhuang et al., 2020). Moreover, the present data showed that sediments associated with R. apiculata exhibited the highest microbial diversity. R. apiculata is a dominant mangrove species in South Asia and often develops into a stable and fully successional mangrove community. In this study, R. apiculata occupied the majority of ecological niches in the central Merbok river estuary, whereas the genera Avicennia, Bruguiera, Sonneratia, and Nypa occupied smaller habitats in the lower and/or upper estuary. The higher microbial diversity in sediments associated with R. apiculata suggested that mangrove succession could enrich the benthic microbial community. Coinciding with the high microbial diversity, the sediments associated with R. apiculata also exhibited higher TOC than those associated with B. parviflora and S. alba, indicating a higher capacity for plant productivity, carbon fixation, and burial. Organic matter is an important carbon source for microbes, and previous investigators have claimed that its accumulation in sediments could promote bacterial diversity (Sjöling et al., 2005;Chen et al., 2016). Moreover, the highest TN was also observed in sediments associated with R. apiculata. The present results are consistent with previous reports (Zhu et al., 2018), in which the shift in bacterial community structure was partly driven by the increase in TOC and total organic nitrogen with mangrove succession. Nevertheless, other environmental parameters (salinity, pH, and TP) had significant effects on the microbial community (Figure 4B). Salinity and pH have already been reported to have large effects on the microbial communities of mangroves (Ikenaga et al., 2010;Chambers et al., 2016). Moderate salinity was also positively correlated with bacterial abundances and closely linked to community composition and diversity (Morrissey et al., 2014;Crespo-Medina et al., 2016;Tong et al., 2019). Rhizophora apiculata In this study, microbial diversity was significantly higher in sediments associated with R. apiculata, coinciding with the highest TOC. Enhanced microbial diversity would promote the transformation and utilization of organic carbon (Holguin et al., 2001;Sjöling et al., 2005;Berendsen et al., 2012). Thus, it is not surprising that the sediments associated with R. apiculata exhibited a higher carbon metabolic potential (Figure 5). Previous data also showed that bacterial diversity and metabolic potential (especially carbon metabolism) appeared to be enhanced during mangrove succession (Zhu et al., 2018). The findings of this study further showed that sediments associated with R. apiculata also exhibited the highest abundances of genes involved in carbon-related pathways (Supplementary Figure 4). It is well known that several prokaryotes can assimilate CO2 into organic carbon (Lynn et al., 2017), although the functions of prokaryotes in carbon fixation have not been fully reported in mangrove ecosystems. In this study, Desulfobacterales and Rhizobiales exhibited high contributions to CO2 fixation. The enrichment of these bacteria could result in a higher capacity for prokaryotic carbon fixation, which plays an essential role in carbon storage. It should be noted that the sediments of R. apiculata also exhibited the strongest potential for methane metabolism (Figure 5). 
In this study, the high organic carbon contents in R. apiculata sediments might promote the growth of methanogens, contributing to the potential production of CH 4 . Moreover, CH 4 production and emission are aggravated under anaerobic conditions (Fey et al., 2004;Kutzbach et al., 2004). In addition, ABC transporters were also important indicators of microbial functions that reflected the positive activity of carbon and nutrient transformation. In this study, Desulfobacterales and Rhizobiales were the main contributors to ABC transporters. Higher abundances of ABC transporters in sediments associated with R. apiculata indicated that this mangrove had higher carbon and nutrient transportation activities than other mangroves (Wood et al., 2001). Overall, this study suggested that climax mangroves (e.g., R. apiculata) exhibited a faster turnover rate of organic matter between plants and microbes owing to high carbon utilization and transportation. Microbial carbon fixation contributed to carbon sequestration, whereas the degraded small organic molecules could be conducive to the growth and succession of mangroves. The positive feedback of microbial community and function might in turn contribute to the formation of climax mangrove populations with high productivity (Chen et al., 2016). The inherent correlations among carbon metabolism, environmental transition, and mangrove succession need to be further studied. Specific Microbial Taxa and Functional Potential in Maintaining Mangrove Survival and Succession Although mangrove sediments are rich in organic matter, they are generally nutrient-deficient. Nutrient limitations were also widely reported in mangrove forests (Feller et al., 2003;Reef et al., 2010;Wang and Gu, 2013;Cheng et al., 2020). Even worse, anaerobic conditions might further aggravate the enrichment of anaerobic microbes and reductive phytotoxins (e.g., CH 4 , H 2 S, and sulfides). It is worth exploring how mangroves maintain survival and succession in such a terrible habitat in intertidal regions. The present results could partly explain this issue from the perspectives of microbial community and function. Desulfobacterales, a type of sulfate-reducing bacteria, consistently exhibited the highest contributions to metabolic functions in sediments associated with R. apiculata (Figure 6 and Supplementary Figure 5). In this study, Desulfobacterales (e.g., Desulfobacteraceae) was one of the main microbial taxa responsible for the differences among mangroves, whereas R. apiculata also exhibited the highest abundances of these bacteria (Figure 3D and Supplementary Figure 2A). Significantly positive correlations among Desulfobacterales, TOC, and TN were also observed ( Figure 4C and Supplementary Figure 2B). Previous investigators also claimed an important role for Desulfobacterales in C, N, and S cycles (Zhu et al., 2018). Increased Desulfobacterales might be beneficial for R. apiculata by alleviating the toxicity of sulfides under anaerobic conditions. The high abundances and strong metabolic potential of Desulfobacterales could also accelerate carbon and nutrient transformation and utilization (Lyimo et al., 2002;Meyer and Kuever, 2007), facilitating mangrove survival and succession in intertidal regions. The important potential of Rhizobiales in C, N, and S metabolisms was also revealed (Figure 6). Rhizobiales was a well-studied plant symbiont that widely occurred in the rhizosphere of mangrove plants (Gomes et al., 2010). 
This taxon played a beneficial role for the host by providing various nutrients, phytohormones, and precursors of essential metabolites (Delmotte et al., 2009;Verginer et al., 2010;Garrido-Oter et al., 2018). The findings of this study also revealed that Rhizobiales (e.g., Xanthobacteraceae) was metabolically versatile, especially in terms of carbon-related metabolism (Figure 6 and Supplementary Figure 5). In this study, relative higher abundances of Rhizobiales (e.g., Xanthobacteraceae) were also detected in the sediments of R. apiculata and B. parviflora than those of S. alba (Figures 2A, 3D). Similarly, positive correlation trends among Rhizobiales, TOC, and TN were also observed in this study ( Figure 4C). In addition to Desulfobacterales and Rhizobiales, Micrococcales might also partly contribute to the higher metabolic function in sediments associated with R. apiculata. Although little attention has been paid to this taxon, the present data (Figures 3D, 4C, 6) indicated that Micrococcales might also be important for metabolic functions, which are involved in carbon and nutrient metabolisms. Nevertheless, Bacillus, which was often considered a bionematicide, could promote the growth of plants by protecting roots from pathogens (Mendoza et al., 2008;Mendis et al., 2018). Bacillus was also a dominant bacterial component in this study ( Figure 3D); however, relatively low metabolic potentials of this taxon were observed (Figure 6). Multi-omics analyses, such as metagenomics, metatranscriptomics, and metaproteomics, focused on the functions of microbes in mangrove ecosystems should be further conducted. CONCLUSION The present findings provided a broader understanding of the relationships among microbes, environmental transition, and mangrove succession from the perspective of microbial community and function. Benthic microbial community was highly correlated with environmental factors and aboveground mangrove species, whereas the highest microbial diversity and metabolic potential (carbon metabolism, prokaryote carbon fixation, methane metabolism, and ABC transporters) were observed in sediments associated with R. apiculata. Specific microbial taxa (e.g., Desulfobacterales and Rhizobiales) involved in C, N, and S cycles might facilitate mangrove survival and succession in intertidal regions. The present data indicated that mangrove succession could enrich microbial diversity and carbon metabolism. More detailed multi-omics researches focused on the roles of microbes in mangrove succession should be further conducted. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors. AUTHOR CONTRIBUTIONS ZM carried out manuscript writing and revisions. FS and HC designed the research, writing, review, and editing; YW carried out writing -review. SF contributed to the sample collection and data analysis. LW carried out data analysis. All authors contributed to the article and approved the submitted version.
2021-12-07T14:17:29.239Z
2021-12-07T00:00:00.000
{ "year": 2021, "sha1": "0e75f87390e39ac59e4882692e30463219799c7a", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2021.764974/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "0e75f87390e39ac59e4882692e30463219799c7a", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
259342265
pes2o/s2orc
v3-fos-license
A Prototype for a Controlled and Valid RDF Data Production Using SHACL The paper introduces a tool prototype that combines SHACL's capabilities with ad-hoc validation functions to create a controlled and user-friendly form interface for producing valid RDF data. The proposed tool is developed within the context of the OpenCitations Data Model (OCDM) use case. The paper discusses the current status of the tool, outlines the future steps required for achieving full functionality, and explores the potential applications and benefits of the tool. Introduction As RDF datasets grow larger, more complex, and are getting highly used/integrated into several services, ensuring the quality and validity of such data is a crucial aspect [1]. RDF data validation plays a pivotal role in enhancing interoperability, facilitating data integration, and ensuring data consistency across different applications and domains. To consider RDF data valid, it has to follow some specific constraints. Using a validation schema, organizations and developers can detect errors/inconsistencies, leading to improved data quality and reliability. Despite the presence of many standards for inference like RDF Schema and OWL, these technologies employ Open World and Non-Unique Name Assumptions which creates difficulties for validation purposes [2]. Using Shape Expressions language (ShEx) we can fulfill a role similar to that of Schema languages for XML but specifically applied to RDF graphs [3]. Shape Expressions (ShEx) defines a concise, formal, modeling and validation language for RDF structures [4]. Another valid alternative is the Shapes Constraint Language (SHACL) [5]. In SHACL, validation is based on shapes, which define particular constraints and specify which nodes in a graph should be validated against these constraints. A set of constraints can also be interpreted as a "schema", functioning as one of the primary descriptors of a graph dataset. A SHACL document stores a set of SHACL shapes also called shapes graph [6]. While SHACL has a more verbose and complex syntax compared to ShEx, on the other hand, it provides a rich set of constraints and validation schemas, that allows complex shape definitions and advanced rule-based validations [7]. Also, SHACL has wider adoption (a larger integration into different RDF processing tools, e.g., including validators, editors, and integration with popular RDF libraries), so it also benefits from bigger ecosystem support [8]. Following these considerations, we decided to adopt SHACL into the work presented in this paper. Here we propose a first tool prototype that combines the potentials of SHACL with other ad-hoc validation functions, to create a controlled user-friendly form interface for the production of valid RDF data, ready to be further integrated into a triplestore. The process is tested in the context of the OpenCitations Data Model (OCDM) use case [9]. We conclude this paper through the discussion of the current tool status, the next steps to be accomplished to make it fully functional, and the future perspectives and potential usages of the tool. Prototype The production and modification of semantic data can be a particularly challenging task for inexperienced users of the Semantic Web, as it requires a certain degree of familiarity with the RDF language. For this reason, this work proposes the use of a user-friendly web interface (HTML form) that allows users to submit their data in a more familiar and intuitive environment. 
The prototype logic is conceptualized in Figure 1, the core of the tool is a software component that takes as input a SHACL-expressed schema and a series of ad-hoc validation functions. Then, following the definitions made in the two modules, a web interface is generated through which users can enter and modify data related to entities and their properties. Following user intervention and data submission, the data is validated in two subsequent phases: • Validation against SHACL shapes: This validation ensures the validity of the data regarding aspects such as the mandatory presence of a specific RDF property for a given entity (or class of entities), specific data types for property values, a range of possible values for certain properties, a minimum and/or a maximum number of properties for certain nodes, etc. SHACL also allows for the verification of more complex relationships between entities in the data graph by applying conditional constraints expressed through the use of SPARQL queries. • Validation against property ad-hoc validation functions: using specifically implemented validation functions, we further restrict what the user can add or modify, ensuring greater granularity in controlling aspects of the data that cannot be represented and controlled with SHACL shapes. For instance, to verify the availability and accessibility of the resources, by definig specific functions to perform API requests to external services. Furthermore, targeted programmatic solutions can filter correct values more precisely and be adaptable to the use case. For example, while SHACL allows the use of regular expressions to filter possible values for certain properties, custom validation functions could enable the management of more complex patterns, along with live validation, which would assist the user during the form-filling process even before the submission required for the validation of the remaining data. Once the values to define a new resource are compiled using the web interface, the tool first checks whether the SHACL rules are respected and then it performs the additional validation functions for each property. Although the SHACL rules could be applied in the real-time compiling process, we might consider applying the rules of the validation functions only once, at the end of the compilation, i.e., once the submission is triggered. If the inputted values are valid and respect all the rules and constraints, the RDF data produced is ready to be submitted to the corresponding triplestore. OpenCitations Data Model The OpenCitations Data Model (OCDM) is based on a set of classes and properties that reflect the basic structures of the bibliographic domain, used to represent information about bibliographic resources and their related citations. It reuses entities defined in SPAR Ontologies to represent bibliographic entities (fabio:Expression), identifiers (datacite:Identifier), agent roles (pro:RoleIn-Time), responsible agents (foaf:Agent), and publication format details (fabio:Manifestation). These entities can be mapped using SHACL to define constraints that can then be used for the automatic generation of forms that enforce user adherence to these constraints. For instance, let's consider a bibliographic resource, fabio:Expression. We can define a SHACL shape that represents the constraints of this resource as in Figure 2a. This schema provides a formal definition of the constraints that an entity of type fabio:Expression must adhere to. 
It indicates, for example, that the type of the entity can be one of the listed ones (like fabio:Book, fabio:JournalArticle, etc.), and that the entity can have only one title, represented as a string (xsd:string). Thanks to this formal definition, it is possible to automatically generate a data entry form that ensures compliance with the defined rules. For example, for the creation or modification of an entity of type fabio:Expression, a form generated from this SHACL shape will have a "select" input field for choosing the resource type among those allowed, and a textual input field for entering the title (see Figure 2b). In addition, a user could also be asked to insert a corresponding DOI in a text input box.

Figure 2: (a) SHACL shape; (b) web form.

Conclusions and Future Works
In this paper, we presented a prototype that is currently meant for users who want to create and add controlled, valid RDF data to a particular triplestore using a web form. Ideally, we would like to generalize this process further by including other common services used to interact with triplestores, such as API requests or SPARQL editors. A further feature to be considered is the bulk upload of the data to be added. We consider the presented tool not as stand-alone software, but rather as a plugin to be adopted in the future within larger systems. In this regard, two systems come to mind. One is CLEF (Crowdsourcing Linked Entities via web Form) [10], an agile LOD-native platform for collaborative data collection, peer review, and publication. The second is ResearchSpace, an open-source platform designed to help establish a community of researchers whose underlying activities are framed by data sharing, active engagement in formal arguments, and semantic publishing [11]. We are currently working toward the integration of this tool inside CLEF and intend to perform the first tests using the OpenCitations use case.
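To make the two-phase pipeline and the fabio:Expression example above more concrete, the sketch below pairs a SHACL validation step with an ad-hoc check, assuming a Python stack built on rdflib and pySHACL; the paper does not name its implementation, so the libraries, the shape, the sample data, and the DOI filter are illustrative assumptions rather than the prototype's actual code.

```python
# Sketch of the two-phase validation: (1) SHACL shapes, (2) ad-hoc functions.
# All names and values here are illustrative assumptions.
import re
from rdflib import Graph
from pyshacl import validate

SHAPES_TTL = """
@prefix sh:      <http://www.w3.org/ns/shacl#> .
@prefix rdf:     <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd:     <http://www.w3.org/2001/XMLSchema#> .
@prefix fabio:   <http://purl.org/spar/fabio/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex:      <http://example.org/shapes/> .

ex:ExpressionShape a sh:NodeShape ;
    sh:targetClass fabio:Expression ;
    sh:property [
        sh:path rdf:type ;
        sh:in ( fabio:Expression fabio:Book fabio:JournalArticle ) ;
    ] ;
    sh:property [
        sh:path dcterms:title ;
        sh:datatype xsd:string ;
        sh:maxCount 1 ;
    ] .
"""

DATA_TTL = """
@prefix fabio:   <http://purl.org/spar/fabio/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex:      <http://example.org/data/> .

ex:br1 a fabio:Expression , fabio:JournalArticle ;
    dcterms:title "A sample journal article" .
"""

def valid_doi(value: str) -> bool:
    """Ad-hoc validation function: a loose syntactic filter for DOIs."""
    return re.match(r"^10\.\d{4,9}/\S+$", value) is not None

# Phase 1: validation against the SHACL shapes graph.
data_graph = Graph().parse(data=DATA_TTL, format="turtle")
shapes_graph = Graph().parse(data=SHAPES_TTL, format="turtle")
conforms, _report, report_text = validate(data_graph, shacl_graph=shapes_graph)

# Phase 2: ad-hoc validation functions, applied once the form is submitted.
doi_ok = valid_doi("10.1000/example.doi")

if conforms and doi_ok:
    print("Data are valid and ready for the triplestore.")
else:
    print(report_text)
```

In a form-generation setting, the sh:in list above would translate into a "select" input and the dcterms:title constraint into a single free-text field, mirroring the mapping sketched for Figure 2b.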
2023-07-06T06:43:20.438Z
2023-07-04T00:00:00.000
{ "year": 2023, "sha1": "412da74218b6c97c1eca1274c219475d80ca5094", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "412da74218b6c97c1eca1274c219475d80ca5094", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
46538006
pes2o/s2orc
v3-fos-license
Thermoplastic Elastomers Containing Zinc Oxide as Antimicrobial Additive Under Thermal Accelerated Ageing Styrene-ethylene/butylene-styrene (SEBS) copolymerbased thermoplastic elastomers (TPE) are applied in the production of household items used in places with conditions for microbial development. Metal oxides like zinc oxide (ZnO) and others can be added to the TPE composition to prevent microbial growth. The aim of this study is to evaluate the effect of thermal accelerated ageing on mechanical, chemical and antibacterial properties of SEBS-based TPE containing 0%, 1%, 3%, and 5% zinc oxide. Zinc oxide was characterized by laser diffraction, X-ray diffraction, superficial area, porosity and scanning electron microscopy. Both aged and unaged samples were analyzed by infrared spectroscopy, tensile at rupture, elongation at rupture, hardness and antimicrobial activity against Escherichia coli and Staphylococcus aureus. Following thermal exposure, a reduction of antimicrobial activity was observed. No significant difference was observed in the chemical and mechanical characteristics between aged and unaged samples. Introduction Styrene-ethylene/butylene-styrene (SEBS) copolymerbased thermoplastic elastomers (TPE) are used in a broad range of applications such as the automotive industry, wire and cable coating, medical devices, footwear, personal products, household items, and others 1,2 .Under working conditions, polymers are degraded by environmental agents such as mechanical stress, heating/cooling, chemicals and hydrolysis 3,4 .These injuries make the polymer prone to bacterial colonization that results in the transmission of diseases, product deterioration, undesirable staining 5 and economic losses 6 .The TPE materials susceptibility to microbial attacks increases in humid environments with high concentrations of organic matter 7,8 . Long-term protection from such damage can involve the incorporation of antimicrobial agents into the polymer matrix.This way, in order to prevent microorganism access, organic and inorganic antimicrobial additives have been used to produce antimicrobial materials 9 .However, TPE production comprises extrusion and injection steps that submit the compound to elevated temperatures and high shear rate that degrade the organic additives 2 .From this angle, inorganic additives such as zinc oxide (ZnO) 10 are being used due to their chemical and thermal stability 11 .A field of application for products with antimicrobial properties is on utensils that are used in high humidity environments such as bathroom and kitchen (including bath mats, sponges), as well as outdoor products, including furniture and decking 12 . Studies of the effects of ZnO incorporation in TPE properties are needed in order to define the effectiveness and applicability of the material both initially and during the product life cycle.The aim of this study is investigate the effect of accelerated ageing on mechanical, chemical and antibacterial properties of SEBS-based TPE containing zinc oxide. Materials The materials used in this study were two different ZnObased additives referred to as follows: zinc oxide Perrin (ZnO-Pe) supplied by Perrin S. A. 
and zinc oxide WR (ZnO-WR) supplied by WR Cerâmica.The additive concentrations used were 1.0%, 3.0% and 5.0% weight of ZnO (Table 1).The additives were added to a TPE formulation compounded by styrene-ethylene/butylene-styrene copolymer (SEBS), polypropylene homopolymer (PP), mineral oil, calcite and an antioxidant to avoid thermal degradation during processing.A compound with no antimicrobial additive (C-00) was also tested. Characterization of the additives The ZnO particle size was described by laser diffraction, using a CILAS 1180 particle size analyzer, with scanning ranging from 0.04 µm to 2500 µm.The X-ray diffraction (XRD) was performed by Philips X'Pert MDP diffractometer with Cu Kα radiation to define chemical composition and purity. The superficial area and porosity of the additives were measured by Barret-Joyner-Halenda (BJH) method with a Nova Station A (Quantachrome Instruments).The patterns of adsorption and desorption isotherms of nitrogen were measured at -196 °C, before adsorption, the sample was degassed at 300 °C for 0.4 h. Scanning electron microscopy (SEM) and energy dispersive X-ray spectrometry (EDS) was carried out to verify the ZnO morphology and chemical composition.The analyses were performed in a carbon type stuck to stub.To obtain images were used a Jeol JSM 6010LA microscope operating at 15 kV and the samples were metalized with gold. Preparation of compounds The samples were prepared using a co-rotating double screw extruder (L/D 40 and 16 mm screw diameter, AX Plásticos) with temperature profile from 150°C to 190°C at a speed of 226 rpm.During testing, the extrusion parameters were kept constant.Test specimens in plate form with 2 mm thickness were prepared using an injection molding machine (Haitian, PL860) at 190°C. Characterization of compounds The specimens were submitted to accelerated ageing in an oven at 105°C during 168 hours according to ASTM D 573. Mechanical properties were tested in aged and unaged samples.The tensile at rupture and elongation at rupture of the compounds were obtained by tensile testing and analyzed according to ASTM D 412C in an EMIC DL 2000 machine.For tensile at rupture and elongation at rupture analyses were run ten replicates.Hardness was tested according to ASTM D 2240 in durometer Bareiss model HPE A, with an indentation hardness time of 3 seconds.For hardness analyses were run twenty five replicates. Fourier transformed infrared spectroscopy (FTIR) with attenuated total reflection (ATR) was recorded on a PerkinElmer spectroscope (Frontier).Each spectrum was recorded from a total of 10 scans at a resolution of 4 cm -1 and at room temperature.Spectrum software was used for spectra analysis. Reduction of bacterial population Japan industrial standard (JIS) Z 2801 13 was applied to evaluate antibacterial efficiency of samples against the bacterial species Escherichia coli ATCC 8739 (E.coli) and Staphylococcus aureus ATCC 6538 (S. aureus).Prior the test, the TPE samples (plaques -50 mm x 50 mm) were disinfected with ethanol and then exposed to ultraviolet (UV) light with the wavelength between 300 nm and 400 nm for 2 h.The distance between the UV light and the specimen was kept at 10 cm.After that, each TPE sample was placed separately in a sterile Petri dish and a bacterial suspension was applied to the sample surface.All of them were incubated for 24 h at 35 ± 1°C. 
The variation of the bacterial population was calculated by applying Equation (1):

V = [(Nf − Ni) / Ni] × 100    (1)

where V is the variation of the bacterial population in percentage (%), Ni is the number of bacteria incubated on the TPE sample at zero hours, and Nf is the number of bacteria after 24 hours of incubation on the TPE sample.

Statistical analyses
Analysis of variance and t-tests were carried out to compare the averages of tensile strength at rupture, elongation at rupture, hardness and antimicrobial results using MYSTAT student version 12 (Systat Software, Inc., CA, USA). The level of significance was set at p ≤ 0.05.

Characterization of the additives
To understand the physical, mechanical and antimicrobial properties of ZnO-based TPE compounds it is necessary to know the characteristics of the additives, such as size and morphology [14][15][16]. The XRD results show the presence of a single ZnO phase in both particles tested. The additives showed hexagonal symmetry, the wurtzite structure of ZnO 17,18. The wurtzite unit cell parameters of both additives were close to those of the ideal crystal. The deviation in the ZnO-Pe and ZnO-WR wurtzite is natural and can be attributed to lattice stability and ionicity 19. Table 2 shows the particle size determined by laser diffraction and the surface area and porosity determined by the BJH method. Zinc oxide A has an average size of 1.52 µm, larger than zinc oxide B, which presented a size of 1.05 µm. In addition, zinc oxide B has a higher surface area and porosity than zinc oxide A. Figure 1 shows micrographs of the morphology of the zinc additives used. Both particles present an irregular shape and an average size of 0.4 µm, lower than that found in the XRD assay. In Figure 1 it is possible to see that the ZnO particles tend to agglomerate. The energy dispersive analysis showed a peak at 0.2 keV, typical of carbon (C), which originated from the sample preparation technique. The peak at 0.5 keV is typical of oxygen (O), and the peaks at 0.9, 1.0, 8.6 and 9.6 keV are typical of zinc (Zn) 20 from the additives ZnO-Pe and ZnO-WR. Furthermore, no impurity was found in either additive, ZnO-Pe or ZnO-WR.

Mechanical properties
Tensile strength at rupture and elongation at rupture values of aged and unaged compounds are shown in Figure 2 and Figure 3, respectively. An increase in tensile strength at rupture of the unaged compounds (Figure 2) can be observed. Because inorganic particles usually promote a reinforcing effect on the SEBS/PP matrix 21, an increase in tensile values was expected with the rise in the additive amount.
The differences in tensile strength at rupture and elongation at rupture between aged and unaged samples were not significant (Figure 2 and Figure 3).The same trend was observed in hardness values (Table 3).TPE compounds usually feature a decrease in mechanical properties after being submitted to accelerated ageing, a behavior that has been related to chain scission 22 .However, in this case the blend of SEBS with PP forms a co-continuous phase which provides thermal stability to the compound due the high temperature degradation of PP as well as the saturated middle block of SEBS 1,4 .Materials Research Chemical properties Figure 4 shows the FTIR-ATR spectrum of aged and unaged ZnO loaded TPE compounds.The spectrum depicts bands typically from TPE based on SEBS/PP/oil/calcite, such as the peaks at 2952 cm -1 , 1493 cm -1 , 757 cm -1 and 698 cm -1 that are common in aromatic compounds, the peaks at 2920 and 2852 cm -1 attributed to C-H vibrations.Also, bands were found at 1455 and 1377 cm -1 corresponding to methyl group and at 876 cm -1 which represents carbonyl group from CaCO 3 23-27 .There were no differences in the FTIR-ATR profiles between aged and unaged compounds.Similar behavior has been previously reported in a polymer composite based on SEBS, which presented no modifications on chemical profile after accelerated ageing 28 . with better biocide action observed in samples loaded with 5% of ZnO.The antimicrobial mechanisms of ZnO may be related to reactive oxygen substances produced by hydrogen peroxide which causes damage to prokaryotic cells 29 . After thermal ageing, the ZnO loaded compounds showed a loss of antibacterial activity when compared to unaged compounds.This difference was significant against the E. coli population (in C-ZnOWR-1 sample), and against S. aureus (in C-ZnOPe-1, C-ZnOPe-3, C-ZnOWR-1, C-ZnOWR-3 and C-ZnOWR-5 samples).This increased susceptibility to microbial attack in aged samples may be related to the presence of chemical substances from polymer degradation. Carbonyl groups resulting from the oxidation of polypropylene 30 and residues from the spin-off of ethylenebutylene links with styrene chains of SEBS 31 may have produced a favorable environment for adherence and proliferation of bacteria.Furthermore, although it was not verified in the FTIR profile, the waste layer of polymer degradation may have prevented contact between the bacterial cell and the additive. Conclusion The ZnO loaded TPE compounds did not present any significant modification in the mechanical properties even after exposure to thermal ageing.Although a reduction in biocide action was observed, the aged samples still featured an antimicrobial property.Further studies are needed to learn about the reasons for the reduction in antibacterial activity even with no polymer degradation, and also about what amount of additive offers the best efficacy. Lastly, the findings reveal the potential of ZnO loaded SEBS-based thermoplastic elastomers to produce daily use Antimicrobial properties Figure 5 shows the variance in E. coli (Figure 5a) and S. aureus (Figure 5b) population in aged and unaged ZnO loaded compounds.A significant difference (p<0.05) was observed in E. coli and S. aureus counts between the compounds loaded with different amounts of ZnO. The ZnO loaded TPE compounds featured an antimicrobial action, with a reduction of between 42.0% and 79.4% in the E. coli population (Figure 5a), and between 49.2% and 75.0% in the S. 
aureus population (Figure 5b). The antibacterial activity improved with increasing concentration of the additive.

Figure 2. Variation in tensile strength at rupture of aged and unaged ZnO loaded TPE compounds. Error bars are means ± SD of ten replicates.
Figure 3. Variation in elongation at rupture of aged and unaged ZnO loaded TPE compounds. Error bars are means ± SD of ten replicates.
Figure 5. Variation in (a) E. coli and (b) S. aureus population in aged and unaged ZnO loaded TPE compounds. Error bars are means ± SD of three replicates.
Table 1. Identification of zinc oxide loaded in TPE compounds.
Table 2. Particle size, surface area and porosity of the additives used.
Table 3. Median values of hardness in aged and unaged ZnO loaded TPE compounds (* means ± SD of twenty five replicates).
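Equation (1) and the statistical comparison of aged versus unaged compounds lend themselves to a short worked example. The sketch below assumes V is the relative change (Nf − Ni)/Ni × 100, so negative values indicate a reduction, and uses invented colony counts; it illustrates the calculation only and does not reproduce the study's data.

```python
# Illustrative calculation of Equation (1) and the aged-vs-unaged comparison;
# colony counts below are invented, not the study's measurements.
import numpy as np
from scipy import stats

def variation_percent(n_initial, n_final):
    """V = (Nf - Ni)/Ni * 100; negative values indicate a reduction."""
    return (n_final - n_initial) / n_initial * 100.0

# Hypothetical triplicate counts (CFU) at 0 h and 24 h for one compound.
ni = np.array([2.1e5, 2.3e5, 2.0e5])
nf_unaged = np.array([4.9e4, 5.6e4, 4.4e4])
nf_aged   = np.array([9.8e4, 1.1e5, 1.0e5])

v_unaged = variation_percent(ni, nf_unaged)
v_aged   = variation_percent(ni, nf_aged)

# Two-sample t-test on the variations, significance threshold p <= 0.05.
t_stat, p_value = stats.ttest_ind(v_unaged, v_aged)
print(v_unaged.round(1), v_aged.round(1), round(p_value, 4))
```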
2017-08-30T13:53:33.243Z
2017-08-17T00:00:00.000
{ "year": 2017, "sha1": "53906c161619ccb1473bdb45825897a8f35087a9", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/mr/v20s2/1516-1439-mr-1980-5373-MR-2016-0790.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "13acb2b2fc3e326f627721a6b749621c981175b6", "s2fieldsofstudy": [ "Materials Science", "Environmental Science", "Chemistry" ], "extfieldsofstudy": [ "Materials Science" ] }
264340634
pes2o/s2orc
v3-fos-license
Calcaneal Osteosarcoma: A Rare Instance in Adolescent Patient Introduction: Calcaneal osteosarcoma is extremely uncommon, accounting for <1% of all osteosarcomas. They typically exhibit swelling and chronic heel pain and are frequently clinically misdiagnosed as traumatic or inflammatory process. Case Report: We report a case of a 19-year-old girl with calcaneal osteosarcoma who initially complained of heel pain that was refractory to analgesic medications over a period of 4 months. Conclusion: The case highlights the importance of early diagnosis and management of osteosarcoma in patients with chronic heel pain and also highlights the importance of considering osteosarcoma as a differential diagnosis in adolescents who present with chronic heel pain, despite the rarity of the condition. Introduction Osteosarcoma is a rare type of bone cancer that typically occurs in adolescents and young adults.It is the most common primary malignant bone tumor in this age group and can occur in any bone but is more commonly found in the long bones of the extremities, such as the femur and tibia.Osteosarcoma is an osteogenic tumor of bone characterized by the formation of neoplastic osteoid tissue.It is the most common primary nonhematopoietic malignant tumor of bone in children and adolescents, second only to chondrosarcoma and Ewing's sarcoma [1,2].Calcaneal osteosarcoma is a very uncommon form of osteosarcoma, accounting for <1% of all cases.These tumors commonly present with swelling and persistent heel pain, and they are usually misdiagnosed on clinical examination as traumatic or inflammatory pathology [3].Diagnostic uncertainty occurs due to the rarity of this entity and the lack of understanding of osteosarcomas in uncommonly affected areas.This typically results in delay in diagnosis and treatment, which may negatively impact the prognosis [1].We report a case of a 19year-old girl with calcaneal osteosarcoma who initially complained of heel pain that was refractory to analgesic medications over a period of 4 months.Due to its uncommon nature, and the fact that it may present as chronic heel pain, calcaneal osteosarcoma is often missed. Case Report A 19-year-old girl was seen in the outpatient department with a diffuse, dull aching right heel pain that was the insidious onset and gradually progressive in nature.There was no history of any injury or surgery.The right heel pain was associated with a limp and did not respond to analgesics.There was no diurnal variation of pain.She had been experiencing difficulty in walking and carrying out daily activities for the past 6 months due to this heel pain.A local examination revealed mild swelling but no overt inflammatory signs, such as erythema or the local rise of temperature, and there was no tenderness.Considering her age of presentation, the character of pain, and no signs of inflammation or infections, further investigations were undertaken.A plain X-ray of the foot showed an ill-defined sclerotic area in the calcaneum with radiating spicules, thinned overlying cortex, and soft-tissue edema over the heel (Fig. 
1).Contrastenhanced magnetic resonance imaging (MRI) of the left ankle confirmed the sclerotic lesion in the calcaneum with extraosseous component and enhancement with contrast.Alkaline phosphatase and lactate dehydrogenase were within the normal range and renal functions were also normal.An ultrasound-guided tru-cut biopsy was undertaken from the lesion, which on microscopy showed abundant osteoid matrix interspersed by pleomorphic cells with elongated oval to spindle hyperchromatic nuclei with increased areas of fibrous tissue, which was the hallmark of osteogenic sarcoma (Fig. 2).Fluorodeoxyglucose positron emission tomography-computed tomography scans (FDG PET-CT) showed no distant metastasis.Treatment: The patient was given 3 cycles of chemotherapy (cisplatin+adriamycin) at 3-week intervals.After the chemotherapy, a repeat contrast MRI and FDG-PET-CT were undertaken to determine the size and extent of the disease and micrometastasis, which revealed a decrease in the standardized uptake values in the primary lesion and also the extent of the disease.A limb salvage surgery was undertaken, which involved a posteromedial incision extending 5 cm from above the ankle to the base of first metatarsal along the watershed line (Fig. 3).The biopsy scar was excised, and the neurovascular bundle was isolated and separated.The flexor retinaculum was released, and the lateral plantar vessels had to be sacrificed because of their adherence to the tumor.The medial plantar vessels and nerves were isolated and separated (Fig. 4).The capsule was incised all around the subtalar joint and removed from the navicular and cuneiform bone.The tendon Achilles was resected 1 cm from its insertion, and the plantar fascia was resected 1 cm from its margins.After thorough dissection, a wide en bloc resection of the calcaneum was taken (Fig. 5).The procedure was uneventful.The sample was sent for histopathology, which confirmed osteosarcoma. Post-operative protocol After the surgery, a below-knee splint was applied for 3 months, and the patient was advised non-weight-bearing ambulation. Sutures were removed at 2 weeks, and then three cycles of adjuvant chemotherapy were given.At 3 months of follow-up, partial weight bearing was allowed with elbow crutches and an ankle-foot orthosis.The patient was followed up every 6 weeks.At a 1-year follow-up, a customized silicon heel cup and shoes were given for full weight-bearing ambulation.At the end of 1 year, a PET-CT revealed no evidence of metabolically active disease. 
Discussion Osteosarcoma most commonly affects the metadiaphysis of long bones and very rarely flat bones or small bones such as the calcaneum.Calcaneal osteosarcoma is an extremely rare form of osteosarcoma, accounting for <1% of all cases [2,4].This rarity and the lack of understanding of osteosarcomas in uncommonly affected areas make the diagnosis of calcaneal osteosarcoma challenging [5].In this case, the patient presented with chronic heel pain, which was initially misdiagnosed as an overuse injury.It is important to note that the majority of osteosarcoma cases occur in individuals without any underlying genetic predisposition.However, some rare inherited syndromes have been associated with an increased risk of developing osteosarcomas, such as Werner syndrome, hereditary retinoblastoma, Li-Fraumeni syndrome, and Rothmund-Thompson syndrome [2,6].In this case, the patient did not have any of these syndromes.Routine radiographs are often sufficient to make a primary diagnosis of calcaneal osteosarcoma [3].The characteristic finding of irregular spiculated interrupted periosteal reaction with dense sclerotic bone on radiographic images is a strong indication of calcaneal osteosarcoma as they typically show an ill-defined sclerotic area in the calcaneum with radiating spicules, thinned overlying cortex, and soft-tissue edema over the heel.However, to confirm the diagnosis and determine the local and distant extent of the disease, cross-sectional imaging studies such as contrast-enhanced MRI and FDG PET-CT scans are usually performed.Histopathological examination of a biopsy sample is always necessary to confirm the diagnosis and differentiate it from other lytic sclerotic lesions [7].With more advanced treatments and improvements in surgical techniques, the survival rate for osteosarcoma has continued to increase.At present, 5-year survival rate for patients with localized osteosarcoma is approximately 60%, and after recurrences or metastases, it is only 20% [8].It is important to note that early diagnosis and treatment are crucial for improving the prognosis in osteosarcoma.Neoadjuvant chemotherapy, before surgery, reduces the size of the primary tumor and makes it more amenable to surgery thereby increasing the chance of limb-salvage surgery and decreasing the risk of local recurrence.It also allows us to evaluate the response of the tumor to Lingala H, et al chemotherapy and make decisions about the surgery accordingly.Adjuvant chemotherapy after surgery reduces the risk of micrometastasis and improves the outcome in patients with osteosarcoma.The choice between neoadjuvant and adjuvant chemotherapy depends on the patient's condition, the size and location of the tumor, and the patient's overall health.Both approaches have been used in the treatment of osteosarcoma with similar outcomes, but the use of neoadjuvant chemotherapy is becoming increasingly common [9, 10].In this case, the patient was treated with a combination of chemotherapy and limb salvage surgery.Neoadjuvant chemotherapy is used to reduce the size of the tumor before surgery and to prevent micrometastasis.The patient underwent a wide en bloc resection of the calcaneum and received adjuvant chemotherapy after the surgery.The patient was followed up for 1 year and had no recurrence of tumor.She was able to bear weight fully with a customized heel cup. 
Histopathology report
Gross appearance: Received oriented fragment of bone covered by soft tissue, altogether measuring 9×6×3.5 cm. The cut section shows an ill-defined bony hard sclerotic area measuring 4×3.1×3 cm. Distance of tumor from resected margins: superior soft-tissue margin 0.3 cm, inferior soft-tissue and bone margin 0.2 cm, medial soft-tissue and bone margin 0.2 cm, lateral soft-tissue and bone margin 0.2 cm, anterior soft-tissue and bone margin 1.8 cm, and posterior bone margin 1.0 cm.
Specimen labeling: A – ill-defined sclerotic bony hard areas (A1–A6); B – superior soft-tissue margin; C – inferior soft-tissue margin; D – inferior bone margin; E – medial soft-tissue margin; F – medial bone margin; G – lateral soft-tissue and bone margin; H – anterior soft-tissue and bone margin; J – posterior bone margin (J1–J2).
Microscopic appearance (post-chemotherapy status): Sections from the ill-defined sclerotic bony areas show predominantly large areas of osteosclerosis and intertrabecular marrow fibrosis, along with scattered residual viable foci of osteosarcoma (<10%) showing polygonal to spindle cells with a high N/C ratio, large irregular hyperchromatic nuclei and prominent nucleoli. The anterior and medial bone margins show intertrabecular osteosclerosis without any viable tumor cells. The superior, inferior, lateral, and posterior soft-tissue and bone margins are free of tumor.
Impression/comment: Post-chemotherapy status, predominant areas of osteosclerosis with scattered residual viable foci of osteosarcoma (<10%), with free resected margins.

Conclusion
The case highlights the importance of early diagnosis and management of osteosarcoma in patients with chronic heel pain.

Figure 1: A plain X-ray of the foot showing an ill-defined sclerotic area in the calcaneum with radiating spicules, thinned overlying cortex and soft-tissue edema over the heel.
Figure 2: Microscopy showed abundant osteoid matrix interspersed by pleomorphic cells with elongated oval-to-spindle hyperchromatic nuclei with increased areas of fibrous tissue, the hallmark of osteogenic sarcoma.
Figure 3: Posteromedial incision extending 5 cm from above the ankle to the base of the first metatarsal along the watershed line.
Immediate post-operative X-ray.
Figure 5: Wide en bloc resection of the calcaneum, later sent for histopathology, which revealed a bony hard sclerotic area measuring 4×3.1×3 cm.
2023-10-20T15:26:32.832Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "d1e1b86f12e8ffa508001966cdcfe112b8e7f592", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.13107/jocr.2023.v13.i10.3978", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b95b6b3ee579cca2c24abae442ca5f0f58130793", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
225336729
pes2o/s2orc
v3-fos-license
Revisiting disturbance accommodating control for wind turbines Disturbance accommodating control has received considerable interest in the wind turbine research community for its ability to explicitly account for disturbances in the incoming wind field. Early work was based around estimating the disturbance from feedback information, while more recent research into disturbance accommodating control (as well as other feedforward control laws) has considered disturbance measurements produced by lidar. This work compares the two methods (estimating and measuring the disturbance) while keeping all other aspects of the controller the same. By doing so, we shed light on the performance improvements that can be attained using preview disturbance measurements of the wind. Introduction Disturbance accommodating control (DAC) is a long-standing method of multivariable control developed in the 1970s and 1980s to make use of known disturbance structures. Several competing methods exist, but the version that appears most commonly, at least for wind turbine control applications, is the observer-based disturbance-minimization mode [8]. In this mode, an existing feedback control law is augmented with a term that minimizes the effect of an estimated disturbance, the latter being generated by a state observer. DAC was investigated for wind turbine control in the late 1990s and 2000s with studies focused on the observer-based DAC methodology [12,1,23,5,27,28,14,29]. The disturbanceminimization mode of DAC is the most commonly used in the literature [23,5,27,28,14], although several other techniques have been presented [12,1,29]. More recently, Wang et al. [26,25] have considered DAC for wind turbine control using extensions to the disturbanceminimization mode with the aim of addressing some of its downsides. Interest in DAC for wind turbines has been renewed recently [24,15], along with other feedforward control methods [18], since lidars were shown to produce useful preview disturbance information about the incoming wind [6]. Lidars are capable of providing characteristic measurements of the oncoming wind by scanning at a location some distance upstream of the turbine for use in feedforward control [7]. Lidar preview measurements are only coherent with the turbine-incident wind field up to a point [19], but can be filtered to produce a good estimate of the low-frequency disturbance [22]. The authors have interest in using feedforward DAC as a point of reference for other advanced feedforward control methods, but prior to doing so, aim to provide DAC (both observer-based and feedforward) a thorough treatment. While there have been several studies of observer- based and feedforward DAC for wind turbines, to our knowledge, no study has compared the two techniques to each other. With this paper, we aim to fill this gap, and in doing so provide further justification for using feedforward disturbance measurements. In this work, we will not consider retuning the feedback controller after the feedforward action has been added-rather, we will assume that the feedback control law has already been designed and should not be altered. For work that considers retuning after the addition of a feedforward controller, refer to Haizmann et al. [4]. We also point out that a similar study to the present work was carried out recently by Khaniki et al. [13], where observer-based DAC was compared to a nonlinear feedforward control law based on a static curve for the appropriate blade pitch angle [17]. 
Our study differs by considering various formulations for DAC and looking at both idealized and realistic cases for feedforward DAC. This paper is organized as follows. Section 2 briefly overviews standard wind turbine controls. Section 3 presents the disturbance accommodating control technique, discusses differences based on whether or not a preview measurement is available, and applies DAC to wind turbines. Sections 4 and 5 present our testing methodology and results, respectively, before Section 6 concludes this paper. Background on wind turbine control Standard wind turbine controllers utilize generator torque and blade pitch actuation. Generally, operation is split into two regions: below-rated wind speed, or Region II, operation, where the blade pitch angle is held constant and the generator torque is varied to extract maximum power from the wind; and above-rated wind speed, or Region III, operation, where the winds are too high to continue with maximum power extraction and steps are taken to mitigate structural loading on the turbine components. In the latter, the blades are pitched actively to regulate the rotor speed and produce steady 'rated' power, which avoids excessive loading of the turbine drive-train, blades, and tower, while generator torque plays a lesser role. For more information, see Pao & Johnson [16]. Benefits from DAC for wind turbines have mainly been reported in above-rated winds, where DAC is applied to blade pitch control to assist in rotor speed regulation and load mitigation [23,5,14,27]. We therefore consider only above-rated operation in this study. Disturbance accommodating control Disturbance accommodating control is generally formulated around a state-space plant model. In the present work we will focus on a discrete-time plant where x ∈ R nx , u ∈ R nu , d ∈ R n d , and y ∈ R ny are the system state, control input, disturbance (or exogenous) input, and measured output, respectively; and A, B, B d , C, D, and D d are the discrete-time system, input, disturbance input, output, feedthrough, and disturbance feedthrough matrices. In the commonly-used disturbance-minimization mode of DAC, hereafter referred to simply as DAC, the control input u is constructed as the sum of two terms, i.e. u = u 1 + u 2 . (2) u 1 is a traditional feedback term that is used to perform the main control task such as stabilization, regulation, or reference tracking, which is assumed to have been already designed. u 2 is a term that is dedicated to minimizing the impact of the disturbance d on the state x. Assuming that u 1 has been designed to drive the state to zero in the case of zero disturbances (d ≡ 0), the DAC control term u 2 is designed according to Most commonly, Q DAC is chosen as the identity and u 2 minimizes the standard 2-norm in (4). This is the choice for Q DAC that we use in this work. In this case (and assuming that n u ≤ n x for uniqueness), the disturbance-minimizing control law is where it is necessary that the intersection of the range spaces (column spaces) of B and B d is nonempty. This method cannot handle a pitch actuator model (since the R(B) ∩ R(B d ) = ∅ in this case) nor guarantee complete disturbance rejection at the output unless there exists a u 2 such that Bu 2 = B d d ∀ d [25]; however, we choose to use a simple DAC method that has provided reasonable results in rotor speed regulation [27,14] and load mitigation [5,23] in the literature. 
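A minimal numerical sketch of this disturbance-minimizing term is given below, assuming the standard discrete-time plant x(k+1) = A x(k) + B u(k) + B_d d(k) and Q_DAC equal to the identity, in which case the 2-norm minimizer is the pseudo-inverse solution u2 = -pinv(B) B_d d. The matrices are illustrative placeholders, not the linearized NREL 5MW model used later in the paper.

```python
# Sketch of the disturbance-minimization DAC term with Q_DAC = I:
# u2 minimizes || B u2 + B_d d ||_2, giving u2 = -pinv(B) @ B_d @ d
# when n_u <= n_x.  Matrices are illustrative placeholders only.
import numpy as np

# Assumed plant form: x(k+1) = A x(k) + B u(k) + B_d d(k).
A  = np.array([[0.95, 0.10, 0.00],
               [0.00, 1.00, 0.01],
               [0.00, -0.20, 0.90]])
B  = np.array([[-0.40], [0.00], [0.05]])   # collective blade pitch input
Bd = np.array([[0.30], [0.00], [0.02]])    # rotor-average wind disturbance

def dac_term(B, Bd, d_hat):
    """Disturbance-minimizing term u2 = -B^+ B_d d (Q_DAC = identity)."""
    return -(np.linalg.pinv(B) @ (Bd * d_hat)).ravel()

def control(u1, d_hat):
    """Total control u = u1 (existing feedback law) + u2 (DAC term)."""
    return u1 + dac_term(B, Bd, d_hat)

# Example: wind 2 m/s above the nominal operating point.
print(control(np.zeros(1), d_hat=2.0))
```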
Observer-based disturbance accommodating control In most DAC applications the disturbance d is unknown and must be estimated online. However, a linear model for the general wave-form structure of d is assumed to be known [8]. We therefore model d as the output of an uncontrolled state-space system with known dynamics (A d , C d ) but unknown initial condition x d (0) ∈ R nx d , and design an observer around the disturbance model (6) to produce a disturbance estimated(k). To do this, we create an augmented system with state x (k) and dynamics (combining (1) and (6)) Under the condition that (A , C ) is observable [3], we can then design an observer gain L ∈ R (nx+nx d )×ny (using, for example, a Kalman filter or pole placement method) and implement the observerx where y(k) is the measured output of the physical system at time step k. The observability condition means that both the plant state x and disturbance state x d can be reconstructed from a history of output measurements y and control inputs u, and depends strongly on the disturbance model chosen as well as the properties of the plant. However, a necessary condition for the augmented system (7) to be observable is that the plant (1) is observable-see Appendix for details.x (k) is then broken down into its constituentsx(k), which may be used for the feedback control u 1 , andx d (k), from whichd(k) = C dxd (k) is used to replace d(k) in (4) and (5) ( Figure 1a). Adding disturbance measurements On the other hand, if a direct measurement of d(k) is available online, there is no need to use the disturbance estimator described above. In this case, the DAC control law (5) can be applied directly. Since this case (preview measurement of the disturbance) is less common, we refer to it here as feedforward disturbance accommodating control. In doing so, we stress that observerbased DAC is not a true feedforward law, since it still relies solely on feedback of the measured plant output y. The feedforward DAC configuration is represented in Figure 1b. Wind turbine application For wind turbine applications, a lidar is used to sample the wind field upstream of the turbine and (after filtering) produce a measurement of d. In our case, we consider d to be the (scalar) rotor-averaged wind velocity perpendicular to the rotor plane. For this study, we model the rotor rotational and tower fore-aft bending degrees of freedom, so that we can focus on improving rotor speed regulation without increasing tower loading. Thus, x = ω rot x TẋT where ω rot , x T , andẋ T are the rotor speed, tower-top position, and tower-top velocity (in the fore-aft direction), respectively. We consider the measured output y = ω genẍT , i.e. the generator speed and tower-top fore-aft acceleration. We use a simple constant disturbance model (A d , C d ) = (1, 1) for observer-based DAC. The majority of the wind turbine DAC literature focuses on blade pitch control in aboverated operation, where promising results are obtained [5,14,15,25]. We therefore consider u = β, the collective blade pitch angle. Based on this, we generate a discrete-time linear-time invariant model (1) of the wind turbine by linearizing a FAST nonlinear turbine model [11] at a steady above-rated wind condition. Because we have linearized the wind turbine about a nominal operating point (x 0 , y 0 , u 0 ), x, y, and u in model (1) represent deviations from nominal operation. 
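The construction above can be sketched numerically as follows, using the constant disturbance model (A_d, C_d) = (1, 1): the augmented pair (A', C') is assembled, the rank condition on the augmented observability matrix (see the Appendix) is checked, and a steady-state Kalman gain is computed. The plant matrices, output map, and noise covariances are illustrative placeholders rather than the linearized NREL 5MW model and the tuned covariances used in the study.

```python
# Sketch of the observer-based DAC pieces: augmented system, observability
# check, and a steady-state Kalman gain.  All matrices are placeholders.
import numpy as np
from scipy.linalg import solve_discrete_are

A  = np.array([[0.95, 0.10, 0.00],
               [0.00, 1.00, 0.01],
               [0.00, -0.20, 0.90]])
Bd = np.array([[0.30], [0.00], [0.02]])
C  = np.array([[1.0, 0.0, 0.0],     # two measured outputs (stand-ins for
               [0.0, 0.0, 1.0]])    # generator speed and tower-top acceleration)
Dd = np.zeros((2, 1))
Ad, Cd = np.array([[1.0]]), np.array([[1.0]])   # constant disturbance model

# Augmented system: x' = [x; x_d], A' = [[A, Bd Cd], [0, Ad]], C' = [C, Dd Cd].
A_aug = np.block([[A, Bd @ Cd], [np.zeros((1, 3)), Ad]])
C_aug = np.hstack([C, Dd @ Cd])

# Observability check: rank of [C'; C'A'; ...; C'A'^(n-1)] must equal n.
n = A_aug.shape[0]
O = np.vstack([C_aug @ np.linalg.matrix_power(A_aug, k) for k in range(n)])
assert np.linalg.matrix_rank(O) == n, "augmented pair (A', C') not observable"

# Steady-state Kalman filter gain for correcting the augmented state estimate;
# the last diagonal entry of Q_kf plays the role of q_dist.
Q_kf = np.diag([1e-4, 1e-4, 1e-4, 1.0])
R_kf = np.diag([1.0, 0.025])
P = solve_discrete_are(A_aug.T, C_aug.T, Q_kf, R_kf)
L = P @ C_aug.T @ np.linalg.inv(C_aug @ P @ C_aug.T + R_kf)
print(L.shape)   # (n_x + n_xd) x n_y = 4 x 2
```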
Test scenario We carry out tests on the NREL 5MW reference turbine [10] implemented in FAST [11], embedded in a Simulink environment for ease of controller design. Each controller (see Section 4.3) is tested using six turbulent wind fields with mean wind speed 18 m/s, well into above-rated operation for the NREL 5MW, that are constructed using TurbSim [9] with class B turbulence. Lidar simulator Accurate simulation of the measured disturbance signal d meas is critical to this study. In particular, the wind evolves between the lidar measurement location and the turbine. To account for this, we generate decorrelated upstream wind fields based on Bossanyi & Hassan [2] that we sample to generate lidar measurements. We also simulate the sampling behavior of the lidar. We include the effects of sequential sampling, probe volume averaging, and line-of-sight limitations for a continuous-wave lidar, as detailed in Simley et al. [20] and sketched in Figure 2. The focus distance of the lidar is set to be one rotor diameter upstream of the turbine. Figure 3 provides an example output from the lidar simulator (denoted Feedforward, see Section 4.3). Although we keep this section brief, simulating the lidar is crucial and we strongly recommend that researchers follow the literature on lidar simulation [2, 20, among others] when investigating lidar-based feedforward control. Measurement noise and Kalman filter tuning The performance of observer-based DAC depends heavily on the quality of the wind speed estimate [13]. Most DAC studies either explicitly [12,23] or implicitly ignore the influence of measurement noise in y when designing the observer gain L and simulating the system response. In an effort to simulate an accelerometer, we add white Gaussian noise to the towertop acceleration measurement produced by FAST with a standard deviation of approximately 5% of the peak accelerations observed in simulation. On the other hand, we assume that the generator speed measurement contains very little noise, and simply use the true generator speed (as is used by the NREL 5MW baseline controller [10]). We design the observer gain L as the steady-state optimal Kalman filter gain [21] with process noise covariance The noise covariance matrix entry [R KF ] 22 = 0.025 is the true accelerometer noise covariance, and the generator speed entry [R KF ] 11 = 1 > 0 is required for positive-definiteness of R KF (and represents a signal-to-noise ratio of approximately 100), although no real noise was added to the signal for simulation. We found (via simulation testing and tuning) that Q KF produced satisfactory state estimation performance, where the disturbance state process noise variance term q dist > 0 can be varied to tune the 'aggressiveness' of the disturbance estimation (see Section 4.3). Smaller values of q dist indicate a stronger trust in the disturbance model (6) relative to the measurements y, while larger values of q dist indicate a lower trust in the model relative to the measurements. Controllers tested The focus of this study is to quantify the improvements gained using feedforward DAC in place of observer-based DAC for above-rated wind turbine operation. To do so, we compare various test cases in simulation, each producing the disturbance measurement d using a different method: (i) Baseline control (Baseline) Feedback-only control with no DAC. This can be thought of as setting u 2 ≡ 0. 
(ii) Ideal feedforward DAC (Ideal) Feedforward DAC with an idealized measurement of d that is noiseless and taken at the rotor plane. (iii) Lidar-based feedforward DAC (Feedforward) Feedforward DAC with a realistic upstream lidar measurement (Sections 3.2 & 4.1) that is filtered using a noncausal moving average filter [22] with 1001 samples to remove high-frequency turbulence from the lidar measurement. (iv) Observer-based DAC DAC with disturbances estimated from feedback measurements (Sections 3.1 & 4.2). We test three variations: q dist = 0.1 (Observer 0.1), q dist = 1 (Observer 1), and q dist = 10 (Observer 10). In all cases, we use the standard NREL 5MW feedback control law for u 1 [10]. The disturbance measurements/estimates for the methods described above are shown in Figure 3. The plot shows that the Feedforward measurement is essentially a smoothed version of the Ideal case, while the Observer-estimated disturbances vary significantly with the choice of q dist , as expected [13]. In particular, Observer 1 has a similar level of frequency content to the Feedforward measurement, but is slightly delayed because it is based on feedback signals. Results Illustrative results for the Feedforward controller are shown in Figure 4, while full results from this study are shown in Figure 5. Considering Figure 4, which shows the contributions of the feedback and feedforward components of the control signal (u 1 and u 2 , respectively) for the Feedforward DAC, we see that the feedforward component handles most of the large, low frequency variations, leaving the feedback term to respond to smaller, higher frequency components in the disturbance. This separation may be used to retune the feedback controller to further improve operation [4]. In Figure 5, we see that all DACs are able to reduce the variations in generator speed (left plot) without significant increases in tower loading (middle plot) compared to the Baseline NREL 5MW controller. However, pitch actuator usage varies significantly between the controllers (right plot), with Feedforward DAC the only controller that offers a significant reduction in pitch actuator velocity compared to the Baseline. The Feedforward and Observer 1 DACs are in fact able to reduce the tower motions while improving generator speed regulation. Compared to the Baseline, the Feedforward DAC reduces the peak tower base moment over the six simulation cases by 20.6%, while the Observer 1 DAC achieves a 15.6% reduction. Perhaps counterintuitively, the Ideal case (true rotor-average horizontal wind velocity) is not the best performer. Although, compared to Baseline control, generator speed regulation is improved with little change in tower loading, blade pitch actuation is very high. This is due to the high-frequency content present in the ideal disturbance measurement (Figure 3), which [22]. Comparing the three Observer DACs, we see a range of behaviors from 'not aggressive enough' (Observer 0.1) to 'too aggressive' (Observer 10). This transition can also be seen in Figure 3, where the Observer 1 disturbance estimate has a similar form to the Feedforward measurement, albeit delayed slightly. The aggressive Observer 10 more closely follows the Ideal disturbance measurement, but again injects considerable high-frequency content into the pitch signal ( Figure 5, right). 
On the other extreme, Observer 0.1 produces an estimate that is too slow to be of use, and again produces considerable pitch activity as the feedback control u 1 makes up for inaccurate disturbance rejection in u 2 . A final point of interest is that all DACs produce a non-zero median error in the generator speed signal of approximately 1%-see Figure 5, left. The reason for this is unclear, especially since we see the error in the Observer DACs as well as the feedforward versions. Conclusions and future work This work reaffirms the benefits of using lidar for feedforward control of wind turbines by comparing the performance of the same disturbance accommodating control law over various methods for producing a disturbance measurement/estimate. We found that the lidar-based disturbance measurement was the best performer, achieving both tighter generator speed regulation and lower peak tower loading while requiring less pitch actuation than a baseline feedback controller. We also confirm that observer-based DAC can perform well, although tuning is critical-the best-performing observer-based DAC produced a disturbance estimate that contained similar frequency content to the lidar disturbance measurement, although a delay is invariably present since the observer estimate is based only on feedback signals. We use disturbance accommodating control in this work as a simple way of comparing the performance of controllers based on measured disturbances and estimated ones; however, we do not consider the DAC law that we present to be the state of the art. Much research has been carried out to improve on the simple DAC law we present (see references in Section 1), as well as many other feedforward control laws [18], and investigations into the best way to utilize lidar measurements are ongoing. We have claimed that observability of the plant (1) is a necessary condition for observability of the augmented system (7). To see this, let be the observability matrix for the augmented system. The necessary and sufficient condition for (A , C ) to be observable is that O is full rank, i.e., rank (O ) = n x + n x d [3]. By computation,
2020-10-28T18:34:22.846Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "c26c98c00f9d356a466198e57195df14995b7aca", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1618/2/022021", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "187ad9b93d2cd0be6486b6854c2ace143be90ec2", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
260357988
pes2o/s2orc
v3-fos-license
Hydro-organic mobile phase and factorial design application to attain green HPLC method for simultaneous assay of paracetamol and dantrolene sodium in combined capsules The greenness of any analytical method has become a very important aspect of a good analytical method. However, most chromatographic methods depend on the usage of relatively large amounts of lethal and un-decaying chemicals and solvents. So, a green approach based on the full factorial design was employed to develop a simple and rapid HPLC technique for concurrent determination of paracetamol and dantrolene sodium in their combined capsules. Both drugs are highly recommended to be administered together in patients with severe musculoskeletal disorders. Avoiding the routine methodology and resorting to the modern technology represented in the usage of experimental design allows rapid determination of the studied drugs using the optimum quantity of chemicals to avoid any waste of resources. Simultaneous separation of a binary mixture of paracetamol and dantrolene sodium was accomplished using a reversed phase Hypersil C18 column using an eco-friendly isocratic eluent. The used mobile phase consisted simply of ethanol: water (40:60, v/v). Orthophosphoric acid was used to adjust the pH of the mobile phase to 4.5. Triethanolamine (0.2%) was added aiming to reduce the peak tailing. The assay was completed within less than 6 min adopting 0.8 mL/min as a flow rate. The detection was carried out using a UV-detector at 290 nm. The suggested technique shows a linear correlation over concentration ranges of 1.0–200 and 1.0–40 µg/mL for paracetamol and dantrolene sodium, respectively. The suggested technique allowed the simultaneous analysis of the two co-formulated drugs in their synthetic mixture and combined capsule. The suggested technique is considered a greener substitute for the other reported HPLC techniques through the usage of safer solvents and chemicals, along with decreasing both waste output and analysis time. The method is accurate with recoveries between 97.85 and 101.27%, precise, as %RSD for the intraday and interday precision were between 0.39 and 1.72% and very sensitive with limits of detection (LOD)’s 0.15 and 0.18 µg/ml and limits of quantification (LOQ)’s 0.48 and 0.61 µg/ml for paracetamol and dantrolene sodium, respectively. The method greenness was ensured through its assessment by four greenness metrics. It is also validated following the International Conference on Harmonization Guidelines. The recommended technique could be a good alternative to traditional methods in the routine quality control analysis of the studied drugs due to its minimum harm to the planet or human beings. Supplementary Information The online version contains supplementary material available at 10.1186/s13065-023-00990-7. Introduction Paracetamol and dantrolene sodium are used in combination for the treatment of musculoskeletal disorders as their combined form is superior to a single agent alone [1]. Both drugs should be considered a wise therapeutic option for patients with acute pain in the lower back [2]. Dantrolene Sodium (DAN); (Fig. 1a) is the hemi heptahydrate of the sodium salt of 1-[5-(4-nitrophenyl) furfurylideneamino] imidazolidine-2, 4-dione. DAN is a muscle relaxant drug which acts on skeletal muscles. It separates muscular contraction from excitation by interrupting the calcium release from the sarcoplasmic reticulum [3]. Paracetamol (PAR); (Fig. 1b) is 4´-Hydroxyacetanilide; N-(4-Hydroxyphenyl) acetamide. 
It has analgesic, antipyretic effects and some anti-inflammatory properties [3]. Green analytical chemistry (GAC) started to attract attention in 2000s [4,5]. This developing field is associated with optimizing the standards of analytical procedures through the decreased usage of hazardous solvents and increased safety for analysts and the planet earth [6,7]. It has recently been preferred not only for pharmaceutical analyses but also for food analyses [8,9]. Food analysis and quality control can be safely done by employing greener alternative techniques instead of traditional analytical methods, which require tedious, time-consuming sample preparation and are frequently linked to environmental pollution [10]. The success of any technique in science and technology is measured by its simplicity, environmentally friendly, and its applications [8]. HPLC is considered one of the most frequently used techniques in pharmaceutical field, especially in the analysis of drugs in pharmaceutical preparations. However, HPLC methods usually consume large quantities of organic hazardous solvents that may have a damaging influence on the environment and the analyst [11]. Most mobile phases contain methanol and acetonitrile as organic solvents. Although these solvents have astonishing elution abilities, there are some concerns related to their negative effect on humanity safety. The suggested technique employs the hydro-organic mobile phase which is composed of water and ethanol as a greener substitute for the unsafe traditional mobile phases. Experimental design (DOE) is a process that depends mainly on making systematic plans that make full use of minimum experimentation to obtain maximum information. A full factorial design (FFD) is a type of DOE 'multivariate optimization' which allows investigating the effect of all the factors simultaneously based on the responses of the dependent factors and the interactions between the independent factors [12]. FFD ensures optimal performance and reliability of the used parameters and the results of the proposed method [13]. The aim of this research is to present a modern chromatographic method by introducing a greener and nontoxic mobile phase as a substitute for the traditionally quite unsafe ones. This can be achieved via the usage of ethanol [14], while maintaining the method performance unaffected. Ethanol is a good greener substitute to methanol and acetonitrile [15] as declared by American Chemical Society Green Chemistry Institute. Ethanol is the most trusted solvent from the viewpoint of environmental standard solvent guide [16,17]. Also, the usage of the experimental design allows the usage of optimum chemicals which decreases the waste and enhances the method greenness [18]. Herein, a validated, rapid, green and sensitive HPLC technique is presented for simultaneous determination of PAR and DAN in their combined capsules. The determination of the binary mixture was done by employing an eluent that consisted of water: ethanol (60:40, v/v, pH 4.5, adjusted by phosphoric acid and 0.2% triethanolamine (TEA)). The separation of the binary mixture was accomplished in a very short time, less than 6 min. Literature review revealed that the separation of that mixture was performed by other researchers using different chromatographic methods, such as TLC densitometry and HPLC [19,20], spectrophotometric methods [20][21][22][23][24]. 
Most of the optimized separation methods cited in the literature for the analysis of the studied drugs by RP-HPLC involve the study of a large number of variables in the separation process. In addition, those methods use large quantities of organic solvent in the mobile phase, which produces a negative effect on the environment. (Fig. 1 The chemical structures of (a) dantrolene sodium and (b) paracetamol.) For this reason, there is a need to design a more effective, green and time-saving method using the experimental design procedure. To the best of our knowledge, no research involving a full factorial design experiment for the separation of these drugs has been reported. This was the motivation to look for a greener solvent, such as ethanol, to separate that binary mixture. The proposed technique was found to be less time-consuming and more eco-friendly when compared to others. The greenness metric reports were used as a reference to compare the suggested method with the earlier reported HPLC methods [19,20]. Instruments and software A Knauer chromatograph equipped with a Knauer D-14163 injector valve with a 20 µL loop (Berlin, Germany) was used. The eluent was filtered using 0.45 µm membrane filters (Millipore, Cork, Ireland). A Consort NV P-901 calibrated pH-meter (Belgium) was used for pH measurements. Sonication was done by a Digital Ultrasonic Cleaner, Model: Soner 206 H, MTI Corporation (USA). Factorial design statistical analysis was done using Minitab ® 16.2.0 software, USA. Materials and solvents Authentic samples of PAR and DAN were provided by Eva-Pharma Co. (Alexandria, Egypt) and Chemipharm Pharmaceutical Industries (Cairo, Egypt), respectively. HPLC grade ethanol was bought from Fischer Scientific (USA). Triethanolamine (≥ 99.5%) was bought from Sigma Aldrich (Germany). Orthophosphoric acid (85%, w/v) was obtained from Riedel-deHäen, Honeywell Research Chemicals (Germany). Dantrelax Compound ® capsules, batch no. # 201123A, containing 25 mg DAN and 300 mg paracetamol per capsule, are a product of Chemipharm Pharmaceutical Industries and were purchased from a local Egyptian pharmacy. Standard solutions Stock solutions of 200.0 μg/mL of both PAR and DAN were prepared separately in the mobile phase. Working standard solutions were prepared on demand by further dilution of different volumes of the stocks with mobile phase. The prepared stock solutions were stored at 4 °C in the fridge and remained valid for 2 weeks. Construction of calibration graphs Accurately measured volumes of both PAR and DAN standard solutions were transferred into two separate sets of 10 mL volumetric flasks. The flasks were completed to the mark with mobile phase to obtain the concentration ranges of the two drugs (1.0-200.0 and 1.0-40.0 µg/mL for PAR and DAN, respectively). 20 µL of the previously prepared solutions were introduced into the sample loop, injected into the column and eluted under the previously adjusted parameters. Finally, the calibration graphs were constructed by plotting the area under the peak versus the concentration of the drugs in µg/mL, and the regression equation for each drug was derived. Assay of PAR and DAN in the synthetic mixtures Synthetic mixtures of PAR and DAN with a ratio of 12:1, respectively, which is the ratio in their co-formulated capsule, were prepared. These solutions were then treated as mentioned under "2.5.1. Construction of the calibration graphs". The found percentages of PAR and DAN were then calculated referring to the calibration graphs or the regression equations.
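To make the data treatment behind the calibration step easier to follow, the short sketch below shows one way the peak areas could be regressed against concentration to obtain a linear calibration equation and back-calculate unknowns. It is only an illustrative outline in Python: the concentration levels and peak areas used here are invented placeholders, not the measured data behind Table 3.

```python
import numpy as np

def calibration_line(concentrations_ug_ml, peak_areas):
    """Least-squares fit of peak area versus concentration (AUP = slope*C + intercept).

    Returns the slope, intercept and correlation coefficient r.
    """
    c = np.asarray(concentrations_ug_ml, dtype=float)
    a = np.asarray(peak_areas, dtype=float)
    slope, intercept = np.polyfit(c, a, 1)      # first-order (linear) fit
    r = np.corrcoef(c, a)[0, 1]                 # correlation coefficient
    return slope, intercept, r

def concentration_from_area(area, slope, intercept):
    """Back-calculate an unknown concentration from its peak area."""
    return (area - intercept) / slope

# Hypothetical example: PAR standards over the 1.0-200.0 ug/mL range
par_conc = [1.0, 10.0, 50.0, 100.0, 150.0, 200.0]
par_area = [12.1, 118.0, 592.4, 1185.0, 1779.3, 2369.8]   # placeholder peak areas, illustration only

slope, intercept, r = calibration_line(par_conc, par_area)
print(f"AUP = {slope:.3f}*C + {intercept:.3f}, r = {r:.4f}")
print("Back-calculated unknown:", concentration_from_area(830.0, slope, intercept), "ug/mL")
```

In practice, the slope, intercept and correlation coefficient reported for each drug would replace these placeholder values.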
Assay of the binary mixture in their co-formulated capsules The content of ten capsules of Dantrelax Compound ® were carefully weighed and thoroughly mixed. An accurately weighed amount of the powder equivalent to one capsule was moved into 100.0 mL measuring flask and about 40.0 mL of ethanol were added. The flask was subjected to sonication for thirty minutes to ensure thorough mixing of the contents. Then, the flask was completed to full volume with water. Finally, the mentioned procedure under "2.5.1. Construction of calibration graphs" was performed. The capsule contents of the two drugs were calculated referring to the calibration graphs or regression equations. Experimental design Experimental design is a process that depends mainly on making systematic plans that make full use of minimum experimentation to obtain maximum information then employing it using statistical models to make significant conclusions from the obtained results [25]. Multilevel factorial design, 2 3 FFD was applied in this study for determination of the optimal conditions that produced the ideal response values. Minitab optimizer is provided with upper, target, and lower values for each response (retention time of PAR, tailing factor of DAN peak and retention time of DAN). Minitab calculates the optimum requirements of organic solvent, % of TEA and pH and draws a plot. The optimization plot displays the influence of each factor (column) on the responses (rows), as shown in Fig. 2. The optimization plot shows the effect of each parameter on the responses and chooses the optimum of each factor for best responses. All details of how to carry out DOE process and how it calculated the optimum conditions are explained in detail in EL-Shorbagy et al. [26]. Results and discussion The proposed method presents a green, fast, sensitive and economic RP-HPLC technique for resolving a binary mixture used for treatment of muscle spasms related diseases like lower back pain. The proposed technique employs factorial design to optimize and maintain the optimum parameters used for separation; hence it saves time and resources. Method development and optimization Different parameters were investigated for the sake of obtaining the optimized ratios of mobile phase and suitable column that produces good separation without wasting any extra solvents or chemicals using 2 3 FFD to ensure the reliability and optimal performance of the method. Good optimization led to decreasing the environmental hazards through the usage of eco-friendly and relatively safe solvents, such as ethanol and water. The optimization also resulted in shrinking the required time for chromatographic analysis and consequently reducing waste production while maintaining the best resolution and sensitivity. Typical chromatogram of symmetrical peaks of a synthetic mixture of PAR and DAN is shown in Fig. 3a. The chromatographic parameters adopting the optimum conditions were calculated and shown in Table 1. The two drugs were well resolved and separated using isocratic elution of an aqueous mobile phase consisting of 40% ethanol, 60% water and 0.2% TEA, in less than 6 min. Selection of suitable column Three columns were put on trial for choosing the best one for separation of PAR and DAN including: The third column (Hypersil C18 column) was found to be the best one regarding the resolution of the peaks and run time. Selection of suitable wavelength UV detection was carried out at 290 nm. 
This choice was adopted based on the UV spectra of PAR and DAN [21], as shown in Fig. 4. The spectra showed that 290 nm is the most suitable wavelength, especially for DAN, as it is present in the lowest amount in the capsule and thus higher sensitivity is required. Eluent composition (screening experiment) This method was mostly directed at avoiding the use of hazardous solvents and using ethanol as a green organic solvent for RP-HPLC. Ethanol is believed to be a safe and less hazardous eluent, as it is distinguished by its relatively high viscosity and low vapor pressure, and consequently a lower evaporation and inhalation potential, thus reducing the necessity of thorough waste cleaning. All these advantages give superiority to ethanol for usage in the mobile phase [27]. Also, the usage of hydro-organic mobile phases such as ethanol/water mixtures allows decreasing the amount of organic solvent essential to achieve separation [28]. Compared to acetonitrile and methanol, ethanol has lower disposal costs. This is mostly because of its environmentally compatible waste, especially considering the high expenses associated with the disposal of waste from the other solvents [29]. The ratios of ethanol and triethanolamine (TEA) in the eluent and its pH were studied to obtain the best separation of the studied drugs in the shortest possible time using experimental design. Different ethanol percentages (10-60%) were tested and the results of the studied dependent parameters were inserted into Minitab. Addition of triethanolamine was very important for DAN elution and its peak shape, as mobile phases missing TEA led to tailing of the DAN peak. Thus, various concentrations of TEA from 0.05 to 0.3% were tried individually. A lower amount of TEA caused insufficient improvement of the DAN peak shape, while increasing its concentration caused a shorter retention time, which negatively affects the resolution of the peaks. Different pH values (3.0-6.0) were also tested to study their effect on the separation. The pH of the mobile phase had no effect on the peak shape or retention time of PAR. However, the DAN peak became closer to PAR at pH values less than 4.0. Meanwhile, a pH greater than 5 increases the DAN retention time. Experimental design The main goal of experimental designs was to reach the optimum conditions with the minimum number of trials needed while examining the maximum number of factors. Some initial chromatographic experiments were required before performing an experimental design to determine the chromatographic factors which have a significant effect on the chromatographic responses (screening experiment). In this experiment, three factors were found to affect the chromatographic performance: the % of organic modifier (ethanol), the % of TEA, and the pH. These factors mainly affected the retention times and peak shapes of the studied drugs, so a 2^3 full factorial design was applied for optimization of the current study using two-level combinations and three independent factors (pH, % of ethanol and % of TEA). FFDs are the form of factorial designs in which all influencing independent factors (k) with (m) level combinations are investigated. The number of experimental runs needed for an FFD depends on the number of independent factors (k) to be studied. As a general rule, the design requires a total of m^k experiments [30] to be performed. From the screening step, it was found that the optimum input ranges for the 2^3 FFD design are as follows: organic modifier in the range of 10-60%, % of TEA between 0.1 and 0.25%, and pH in the range of 4.0-5.0.
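For readers unfamiliar with the layout of a two-level, three-factor design, the sketch below enumerates the 2^3 = 8 runs implied by the screening ranges quoted above (ethanol %, TEA % and pH). It is a generic illustration of how such a run table is generated, not a reproduction of the Minitab worksheet used in this work.

```python
from itertools import product

# Two-level settings taken from the screening ranges described above
factors = {
    "ethanol_%": (10.0, 60.0),   # organic modifier range
    "TEA_%":     (0.10, 0.25),
    "pH":        (4.0, 5.0),
}

# A 2^3 full factorial design: every combination of low/high levels
runs = [dict(zip(factors.keys(), levels)) for levels in product(*factors.values())]

for i, run in enumerate(runs, start=1):
    print(f"Run {i}: {run}")

print(f"Total experiments required: {len(runs)} (= m**k = 2**3)")
```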
These critical factors were inserted into Minitab software to find their optimum conditions. The design suggested a set of 8 experiments (Additional file 1: Table S1) are needed to represent interactions of the mentioned factors and their effects on selected chromatographic responses (t R of DAN, t R of PAR and T f of DAN). For the choice of the most critical factors influencing the method, a synthetic mixture containing 30 µg/mL of each drug was prepared. The suggested eight runs were carried out, then the obtained chromatograms were interpreted and the results were inserted into Minitab software to determine the dependent factors. Finally, response optimizer compromise between different responses then the optimum setting of the input variables and hence desirability values were determined. In response optimizer; lower, target and upper values are defined for dependent responses. Optimal setting for the input variables along with desirability values are calculated by Minitab response optimizer. To ensure that the optimum conditions are obtained, Minitab response optimizer calculates the composite desirability (D) which evaluates if the responses are in their acceptable limits and it ranges from zero to one. Zero is not accepted as it means that many of the responses are out of their accepted limits, while one means that the condition reached is optimum, so its value is better to be one or near one ( Table 2). According to the response optimizer and optimization plot (Fig. 2), it was proven that the optimal chromatographic conditions were 40.0% v/v for ethanol, 0.20%v/v TEA and pH of 4.5. Pareto charts in Additional file 1: Fig. S1 showed the effect of the factors on the responses. It was found that % of the organic modifier highly affected the retention time of PAR. Additional file 1: Figs. S2 and S3 illustrate the interaction plots and the main effect plots of the independent factors on the dependent ones. Finally, the used mobile phase consisted of 40:60 (v/v) of ethanol: water, 0.2% TEA at pH 4.5 ± 0.02 and 0.8 mL/min flowrate at 290 nm UV detection were employed to allow simultaneous analysis of the two drugs with acceptable sensitivity. The adopted chromatographic conditions are summarized in Table 1. Suggested technique validity The proposed technique was validated according to International Conference on Harmonization Guidelines (ICH) [31]. Linearity, limit of quantitation (LOQ) and limit of detection (LOD) The linear range of quantification for PAR and DAN was studied adopting the proposed method and the results were presented in Table 3. Statistical analysis of the produced data [32], proved the linearity of the calibration graphs. Linear regression equations of PAR and DAN were as follow: Where: AUP is the area under the peak, C is the concentration in µg/mL and r is the correlation coefficient. The limits of detection and quantitation were calculated practically following signal to noise ratio as in USP [33] and the results are shown in Table 3. Accuracy Statistical analysis was applied for comparison between the obtained results from the suggested method and those by the official USP methods [33] adopting the Student t test and the variance ratio F test [32]. The official reference methods adopted HPLC technique to assay each of PAR and DAN. The results showed that there was not a significant difference between the performance of both methods in terms of accuracy and precision, respectively (Table 4). 
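The statistical comparison against the official USP method can be illustrated with the brief sketch below, which applies a two-sample Student t-test and a variance-ratio F-test to two sets of percent recoveries. The recovery values are invented placeholders for demonstration only and are not the data summarized in Table 4.

```python
import numpy as np
from scipy import stats

# Hypothetical % recoveries: proposed method vs. official USP method
proposed = np.array([99.1, 100.4, 98.7, 100.9, 99.6])
official = np.array([99.5, 100.1, 99.0, 100.6, 99.8])

# Student t-test for the difference between the means (assuming equal variances)
t_stat, t_p = stats.ttest_ind(proposed, official, equal_var=True)

# Variance-ratio F-test (larger variance in the numerator, two-sided p-value)
s1, s2 = proposed.var(ddof=1), official.var(ddof=1)
f_stat = max(s1, s2) / min(s1, s2)
df1 = df2 = len(proposed) - 1
f_p = 2 * min(stats.f.sf(f_stat, df1, df2), stats.f.cdf(f_stat, df1, df2))

print(f"t = {t_stat:.3f} (p = {t_p:.3f}), F = {f_stat:.3f} (p = {f_p:.3f})")
print("No significant difference" if t_p > 0.05 and f_p > 0.05 else "Significant difference")
```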
Precision Two levels of precision were evaluated for each drug by analyzing it three successive times within the same day and on three successive days to test intra-day and inter-day precision, respectively, and the precision results are shown in Table 5.
Fig. 5 The green assessment report for the suggested HPLC method compared to the reported methods, using the GAPI tool. Table 10 Eco-scale penalty points for the reported and the proposed HPLC methods [40].
Robustness Some chromatographic conditions were subjected to minor changes to test the robustness of the proposed method. Those changes were carried out univariately. The investigated variables were the pH of the mobile phase (4.5 ± 0.1), the ethanol percentage (40 ± 1%) and the TEA concentration (0.2 ± 0.01%), as shown in Table 6. The proposed method was proven to be robust, as such minor changes did not affect either the resolution or the area under the peak of the two drugs. System suitability System suitability assessments were done referring to the USP [33] and ICH Guidelines [31] on a mixture of PAR and DAN to calculate the chromatographic parameters. The obtained parameters are presented in Table 1. Applications The proposed technique was employed effectively to analyze both PAR and DAN simultaneously in their synthetic mixture and combined capsule, as shown in Tables 7 and 8, respectively. The results in both cases were in good agreement with those obtained by the official USP methods [33] with regard to accuracy and precision [32]. The chromatogram of the two studied drugs in their combined capsule is illustrated in Fig. 3b. Greenness estimation Although studies focused on eliminating waste and adopting eco-friendly and sustainable methods [17,34,35] started in 1995, they were not assessed by the analytical community. One of the priorities of green analysis is to reduce the use of harmful substances without affecting the efficiency of the chromatographic performance [36]. The usage of environmentally friendly solvents in the mobile phase is one of the most important ways to obtain greener analysis [37]. The goal of this work is to declare that traditional, quite dangerous techniques can be replaced by eco-friendly ones while maintaining the same analytical behavior.
Fig. 6 The evaluation of the proposed method greenness using the analytical greenness metric (AGREE). Fig. 7 A graphical abstract that summarizes the suggested approach.
Recently, green analysis as well as indexing the method greenness has become very important. Indexing the method greenness allows ranking methods according to their greenness, which is very helpful [16,38,39]. Four assessment tools were employed to assess the greenness of the recommended technique and compare it with the reported ones. First, the National Environmental Methods Index (NEMI) was applied to the proposed and reported methods. NEMI is a tool using a greenness profile and is regarded as one of the earliest such tools [16]. Table 9 shows that the proposed method achieves the four criteria of the greenness profile and is greener than the reported HPLC methods according to the NEMI profile. Water and ethanol are neither classified as PBT nor hazardous by the EPA's Toxic Release Inventory [17,34], the pH of the mobile phase is not corrosive, and the waste is less than 50 g/run. Second, the Green Analytical Procedure Index (GAPI) [38] was also applied to the proposed and reported methods.
The green assessment GAPI profiles for the proposed and reported HPLC methods are presented in Fig. 5. Additionally, analytical Eco-scale was utilized for evaluating the proposed and reported methods, as represented in Table 10. The proposed method's score was 95 which referred to an excellent green methodology (the closer the score to 100, the greener the method) [40]. Finally, the greenness of the proposed method was investigated using AGREE-Analytical Greenness Metric Approach and software through evaluating 12 parameters of green analytical aspects. Figure 6 represents the twelve parameters with different colors ranging from dark green to orange based on information reported by Francisco Pena-Pereira et al. [41]. The score was found to be 0.83 indicating the greenness of the method (the closer the score to 1.0 the greener the method). As described previously by the four assessment tools, it is concluded that the suggested HPLC technique has an environmental advantage over the two reported methods, and thus it could be employed for the routine analysis of PAR and DAN without affecting the environment. Conclusion HPLC is the most commonly used technique for analysis of pharmaceutical compounds, so it is very important to minimize its bad effect as much as possible on both analysts and nature. However, most HPLC methods still do not consider the consequences of using unsafe compounds and solvents on the environment. The recommended mobile phase was mainly chosen for substituting unsafe solvents (such as methanol and acetonitrile) without influencing the chromatographic performance. The proposed technique for the determination of PAR and DAN was designed to avoid using harmful chemicals or create hazardous waste products in order to make it eligible for routine analysis. The proposed method was optimized and developed using a two-level FFD to predict the system suitability parameters. Employing FFD participated in decreasing the chemicals consumption, analysis steps and time. The recommended technique has low environmental impact which was ensured by investigating the method's greenness using four assessment tools. Also, the proposed technique is rapid, repeatable and straight forward with no need for pretreatment. It was successfully applied for analysis of the studied drugs either in synthetic mixtures or combined capsules. All these benefits of the proposed method made it qualified to be used as a greener substitute for routine analysis of PAR and DAN in quality control laboratories and food analysis. A graphical abstract that summarized the suggested approach was presented in Fig. 7.
Mathematical modelling of power skiving for general profile based on numerical enveloping Power skiving is an effective generating machining method for internal parts like gears owing to its high productivity. The general mathematical modelling of power skiving is the basis for cutting tool design, machining precision evaluation, and machining process optimization. Currently, most studies focus on involute gear machining and adopt the analytical enveloping equation. However, these analytical methods fail to deal with overcutting in general profile skiving tasks. Moreover, little attention has been devoted to investigating the power skiving process with time-varying configuration parameters, which is significant for controlling the machined surface topography. Herein, we introduce a mathematical modelling method for power skiving of general profiles based on numerical discrete enveloping. Firstly, the basic mathematical model of power skiving is established, in which the center distance is formulated as a polynomial of time. By transforming power skiving into a forming machining of the swept volume of the cutting edge, a numerical algorithm is designed to distinguish the machined transverse profile via the discrete enveloping ideology. In particular, the precise instant contact curve is extracted inversely according to the feed motion speed. Finally, simulations for an involute gear and a cycloid wheel are carried out to verify the effectiveness of this method and to investigate the influence of variable radial motions on the machined slot surface topography. The results validate the capability of this method to simulate the dynamic power skiving process for general desired profiles and to evaluate the machined results.
Nomenclature
φ c Rotation angle of cutter
φ w Rotation angle of workpiece
Ω Shaft angle between axes of cutter and workpiece
E 0 Center distance between cutter and workpiece
E c Install eccentricity of cutter
f w Section plane of workpiece
N t Dividing number of time
N p Dividing number of profile
N s Dividing number of cutting edge
K e (t) Polynomial of time for center distance
R c Pitch circle radius of cutter
Introduction Power skiving is a typical generating machining method, which provides high productivity in machining parts with periodic features like gears. With the cutter and workpiece installed as a pair of crossed-axis meshing gears, the cutting edges on the end face of the cutter remove a layer of material via the relative axial motion along the part [1]. This working principle indicates that power skiving not only adopts a continuous generating motion, as gear hobbing does, to ensure high machining efficiency, but also retains the advantages of gear shaping, being capable of manufacturing both external and internal parts, non-through slots, double-linked gears, and so on. Although invented in 1910 by Pittler [2], power skiving underwent slow development owing to the poor stiffness of the machine tools and the short tool life. In recent years, with the development of tool materials and, especially, spindle technology and numerical control systems, power skiving has shown its superiority in gear production. Relevant power skiving solutions [3][4][5] such as machine tools and cutters have been provided by companies like Gleason and Pittler. Meanwhile, theories on power skiving have been studied in different aspects to enhance its application.
As the basis of successful machining, mathematical model of power skiving and the cutting edge curve were studied widely [6][7][8][9][10][11][12][13][14][15][16][17][18][19]. Jin [6] investigated the analytical theory of gear skiving and pointed out that the action line of skiving is as same as the action line of spiral gears. Li et al. [7][8][9] analyzed the working principle of skiving and proposed a cutter design approach to error-free gear skiving. Guo et al. [10,11] investigated the skiving tool design and cutting mechanism of cylindrical gears, and introduced a multiple blades taper skiving tool [12]. Radzevich [13] introduced the design and computation principle of the skiving cutter for gear skiving. Tsai [14] established the mathematical model for design and analysis of power skiving tool for involute gear cutting. Stadtfeld [15] introduced the power skiving technology of Gleason, including the generation kinematics, cutter geometry, chip geometry, machine tool configuration, and processing software. Tomokazu et al. [16] established a calculation model for internal gear skiving with a pinion-type cutter having pitch deviation and run-out. Moriwaki et al. [17,18] investigated the cutting tool parameters of cylindrical skiving cutter with sharpened angle for internal gears to optimize skiving cutter design. Shih and Li [19] proposed an error-free conical power skiving cutter design method via meshing theory with considers the variation of center distance. Jia et al. [20] developed a discrete enveloping-assisted cutting edge curve calculation method for skiving cutter. Li et al. [21] studied the design of power skiving tool by considering the interference and minimizing the machining deviations. These researches provided the general theory and mathematical model for power skiving as well as the calculation method of the cutting edge curve design. In addition, the simulation of power skiving process is significant to estimate and optimize the skiving process and cutting tools. Research works are mainly devoted to geometric precision simulation [22][23][24][25][26][27][28][29][30][31] and the working condition analysis of cutter [32][33][34][35][36][37]. Commonly, the geometric simulations include three kinds of manner. First, the analytical ones adopted the meshing equation for geometry calculation, like Guo et al. [22,23] studied the theoretical tooth profile errors of gears in skiving and investigated the tool setting errors on gear skiving accuracy; further, they [24] studied the cutting edge correction method for conical cutter; Zheng et al. [25] generalized the machine kinematics correction and TCA to gear skiving. Secondly, the numerical methods represent the machining process in a discrete way for concrete calculation, like Jia et al. [26] developed a numerical simulation method for non-involute gear power skiving with adopt approximate external enveloping; Zheng et al. [27] developed a novel z-mapbased numerical method to calculate tooth flank and investigate the influence of eccentricity error on the surface roughness; Inui et al. [28] developed a triple-dexel based geometric simulation method for power skiving process with GPU computing acceleration. Thirdly, the CAD-based methods perform the cutting process as Boolean operation and finished it via commercial CAD package, like Antoniadis et al. 
[29,30] simulated the kinematics of the cutting process with the aid of commercial CAD software, which allows the precise determination of the non-deformed chips and cutting forces; Tapoglou [31] simulated the chip thickness of skiving with the aid of CAD. In the basis of geometric simulation, cutting force of power skiving is modelled via dividing the cutting edge as a serial of oblique cutting element and integrating the dynamic force of all the elements. McClosky et al. [32] and Onozuka et al. [33] reported that these modelling results are consistent with the experiment cutting force; Li et al. [34,35] further discussed the temperature during power skiving process. Besides, some comprehensive simulations were developed such as Klocke et al. [36] investigated the influences of skiving configuration parameters on the working performance like tool wear and chip welding, and Schulze et al. [37] studied the kinematic process of skiving and investigated the chip formation mechanisms of skiving with adopting 3D-finite element simulation. Most of the researches are focused on the machining of involute gears with adopting analytical mathematic model. However, it is difficult to process the conditions like self-intersection and overcutting, particularly for universal target profiles. The discrete numerical enveloping method proposed in [26] demonstrates excellent performance to deal with these drawbacks in general profile skiving simulation, but it lacks the capability to investigate the entire skiving process with combining time-varying configuration and kinematic parameters. Aiming to investigate the dynamic machining process for power skiving by overcoming the shortcoming of [26], this work developed a discrete enveloping-based mathematic modelling method for power skiving, which is the effect of the general profile tasks with strong robustness. The remainder is organized as follows. The basic mathematic model for skiving is introduced in Section 2. The numerical simulation method for power skiving is studied in Section 3. Then in Section 4, the deviation estimation is described. In Section 5, several skiving tasks are simulated and concluded at last. 2 The mathematical model for power skiving Configuration of power skiving In a general power skiving system, the cutter and workpiece is set up as a pair of cross-axis in Fig. 1. Workpiece coordinate system S w : O w -X w Y w Z w is attached to the workpiece, and its Z w -axis is coincided with the cylindrical workpiece axis. Similarly, cutter coordinate system S c : O c -X c Y c Z c is attached to the cutter, and its Z c -axis is coincided with the cutter axis. In initial, we define that X w -axis is going through the origin O c and X c -axis is coinciding with X w -axis. Meanwhile, with respect to the meshing of cutter and workpiece, the shaft angle Ω between their axes and the nearest distance E 0 between their axes are given as follows: where j w denotes the helix direction of workpice (j w = 0 indicates spur, j w = 1 indicates right hand, and j w = − 1 indicates left hand), and j c denotes the helix direction of cutter (j c = 0 indicates spur, j c = 1 indicates right hand, and j c = − 1 indicates left hand); β w and β c are the helix angles on the pitch circles, respectively, for the workpiece and the cutter; and k io denotes the type of skiving (k io = 1 denotes external skiving, and k io = − 1 denotes internal skiving). 
Moreover, in practical applications, the cutter takes an eccentricity E c along the Z c -axis from the origin O c , looking to improve the working condition of cutting edges and to avoid the interferences during the machining process. Motions of power skiving Power skiving consists of two kinds of coupled motion as illustrated in Fig. 1, i.e., the meshing motion and the feeding motion. The cutter and workpiece rotate φ c and φ w around Z caxis and Z w -axis, respectively, with constant transmission ratio, which is termed as main meshing motion, contributing to material cut off. Meanwhile, a differential rotation Δφ c is implemented on the cutter to ensure the meshing motion when the cutter performs a synchronous linear feed motion f along Z w -axis, which contributes to produce the complete slot. The rotation angles of the power skiving system are given as below: where z w and z c are the slot number of workpiece and teeth number of cutter, respectively; Δφ c is the differential rotation angle of the skiving cutter, and it is determined by both the axial feeding Δf of the skiving cutter along Z w -axis and the configuration of power skiving system as follows: Aiming to satisfy the diverse requirements in practical applications of skiving, such as tooth profile crown correction and machining error sensitivity analysis, we involve an additional feed motion K e to enhance this virtual kinematic model of power skiving as in [38], which expresses the constant radial distance E 0 between the cutter and the part. The variances of feed motion K e is formulated as polynomial of time as follows: where q is the polynomial order of the radial feed, c e is the polynomial coefficients of radial feed, and ΔE is the variance in radial direction. Furthermore, proper rotation directions of the cutter and the workpiece are crucial to ensure successful material removal during skiving. The rotation of cutter must ensure that the cutting edge is sliding inside the slot of workpiece along the feeding direction. 3 Mathematical modelling of power skiving process by numerical enveloping Equivalent expression of power skiving as form machining The aforementioned working model of power skiving indicates the engagement of each tooth cut off a layer of material in slot with the help of both meshing motion and feeding motion, while the succession of engagements produces the whole slot via the feeding motion. This revels that the skiving process can be taken as a forming machining process for helical or spur gear like drawn in Fig. 2, in which the swept volume of one cutting edge (SVC) relative to the gear slot during engagement works as the wheel and its instant contact curve generates the slot surface following the feed motion. Consequently, one can deduce the generating process of power skiving as follows: each cutting edge generates one or several points on the desired surface at every engaging moment, and these points further develop a forming curve on the desired slot surface during each engagement of one cutting tooth. Numerical enveloping of power skiving Commonly, the generating points during power skiving process are determined by enveloping equations. It tells that the external normal of generating points on the SVC surface is perpendicular to the velocity vector of their feed motion. However, the analytical method might invalid in cases like singular points, overcutting, and multiple cutting edge. For this reason, this work models the skiving process as a forming machining process in a numerical way. 
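As a rough numerical companion to the kinematic description above, the sketch below evaluates the coupled rotation angles, an assumed differential-rotation law, and a polynomial centre-distance feed K_e(t) over a discretised time axis. The tooth numbers, feed rate, helix angle, pitch radius, polynomial coefficients and, in particular, the exact form of the differential rotation are illustrative assumptions; they do not reproduce the equations or case-study parameters of this paper.

```python
import numpy as np

def skiving_kinematics(t, z_c=23, z_w=81, omega_c=10.0, feed_rate=1.0,
                       beta_w_deg=20.0, r_w=40.0, E0=30.0, c_e=(0.0067, 0.0)):
    """Evaluate the main skiving motions at time t (all parameter values are illustrative).

    phi_c  : cutter rotation (rad), phi_c = omega_c * t
    dphi_c : assumed differential cutter rotation driven by the axial feed
             (taken proportional to the feed and tan(beta_w); the paper's exact law is not reproduced)
    phi_w  : workpiece rotation coupled through the tooth-number ratio z_c / z_w
    K_e    : centre distance as a polynomial of time, E0 + c_e[0]*t + c_e[1]*t**2 + ...
    """
    phi_c = omega_c * t
    f_axial = feed_rate * t                                     # axial feed along Z_w (mm)
    dphi_c = f_axial * np.tan(np.radians(beta_w_deg)) / r_w     # assumed differential-rotation law
    phi_w = (z_c / z_w) * (phi_c + dphi_c)
    K_e = E0 + sum(c * t**(q + 1) for q, c in enumerate(c_e))   # polynomial centre-distance feed
    return phi_c, phi_w, dphi_c, K_e

for t in np.linspace(0.0, 12.0, 5):
    phi_c, phi_w, dphi_c, K_e = skiving_kinematics(t)
    print(f"t={t:5.2f} s  phi_c={phi_c:7.3f}  phi_w={phi_w:7.3f}  dphi_c={dphi_c:6.4f}  K_e={K_e:7.4f} mm")
```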
The key points are approximating SVC by a serial of cutting edges, and then obtaining the transversal profile of workpiece and the contact curve via identifying the external enveloping curve of the intersection curves between SVC and the workpiece transversal section. As shown in Fig. 3, the detailed numerical modelling and investigating for the skiving process are arranged in 7 steps. Step. 1: Dividing the cutting edge curve r(u) = [x(u),y(u),z(u),1] T into continuous points for the purpose of maintaining the generality of this method, where u is the radial parameter of the cutting edge. Step. 2: Determining the rotation interval [φ s , φ e ] of one tooth engaging cutting by figuring out the rotation angles of cutter touching the addendum circle of worpiece. Step. 3: Generating the cutting edge curve r w (u,t) in workpiece system S w based on the mathematical model of power skiving as given as below: where M w c (t) is the transform matrix from cutter system S c to workpiece system S w at t moment. It is given by basic homogeneous coordinate transform matrices according to the kinematic model of skiving. Step. 4: As shown in Fig. 3a, establishing the swept volume of one cutting edge SVC through assemble N t cutting edges r w (u,t i ) with rotation angle uniformly within where t i is the time moment of the i th rotation angle. It is given by the angular speed of cutter ω c as follows: Step. 5: As shown in Fig. 3b, taking SVC to perform a forming machining on the transversal section of workpiece f w (z = z 0 ). Essentially, it is intersecting all the cutting edge curve r w (u,t i ) of SVC to the transversal section f w along the feed motion f(τ), which is formulated as an independent motion as the function of τ. In fact, additional rotation Δφ c (τ) is involved to ensure that the intersection curves are symmetrically distributed along the X w -axis. In this basis, one can figure out the intersect point E i,j for each i th cutting edge with specified radial parameter u j which satisfies Eq. (10). Obviously, the proper τ 0 can be ascertained by numerical searching methods, since Eq. (10) is a function of variable τ, in which t i specifics a constant locus of cutting edge curve. Consequently, the point E i,j can be directly given as: Step. 6: Distinguishing the external envelop profile of all the intersect curves on f w . The external enveloping profile can be derived numerically as a serial of points along the X w -axis since the intersection curves are distributed around X w -axis. At each radius within the specified radial range [r f ,r a ], setting a line perpendicular to X-axis, then its intersection point for every intersect curve on f w , is easy obtained as the intersection of two straight lines segment in X-O-Y plane. After all the intersection points between the intersect curves and current line have been identified, the point that taking the maximum/minimum y-component is specified as the enveloping profile points C k as shown in Fig. 3c. Concurrently, for each profile point, the time serial number of corresponding cutting edge indicates the cutting occur time of machined profile point C k . Step. 7: Extracting the instant contact curve on the machined slot surface as in Fig. 3d. At first, transforming each machined profile point C k inversely to the rake flank S R that defined by the cutting occur time t k according to the feed motion and additional rotation Δφ c (t). 
The intersection point Q k that cutting occurs on the cutting edge can be solved via searching proper variable τ p satisfying Eq. (12). Subsequently, the instant contact point P k on the machining surface can be determined by translating Q k from cutter coordinate system to workpiece coordinate system based on its corresponding time moment t k as follows: In final, all the instant contact points on the slot surface compose a spatial curve, which is the generating line of one cutting edge on the slot surface during one engagement of skiving process. Simulation frame of power skiving Simulating the complete slot surface of workpiece undergoing power skiving is significant to analyzing the machining result like gear crowning. For this purpose, the workpiece is assembled by a serial of disks with same thickness along axial direction. Meanwhile, only one slot is simulated with respect to the symmetric of workpiece. The resulted generating line of cutting edge that works on each axial plane of workpiece composes the machined slot surface. The flowchart of this numerical simulation for power skiving is provided as in Fig. 4; the concrete seven steps for calculating the generating line are demonstrated as a submodule. The simulation frame is implemented to following case studies. In practical machining, the cutting edge and the slot of workpiece are meshing periodically; consequently, the axial displacement of every cutting engagement must comply with this meshing motion. For simplicity, we are taking the assumption that the axial position of f w can be given arbitrarily in this study, without considering the cusps on the machined slot surface that are produced by the inherent interrupted meshing cutting mechanism. Machined error evaluation In order to evaluate the simulation precision of proposed method, the deviation between the simulated profile and the theoretical one is calculated. As shown in Fig. 5, for each generating curve of every cross section on the workpiece, every generating point decides a cross section. The shortest distance from the generating point to the point on the theoretical profile on this cross section is specified as machining error, instead of the minimum distance from the simulated point to the theoretical surfaces along its normal vector. Therefore, the machining error is expressed along the generating curve on the slot surface. Numerical examples Aiming to demonstrate the effectiveness of this method, simulations of power skiving with various configurations for an internal involute gear and a cycloid wheel are performed, and the machining errors are investigated. Simulation for internal gear power skiving For the convenience of evaluating the accuracy of this method, an internal helical involute gear power skiving was simulated with the parameters listed in Table 1, since the cutter and the gear blank are meshing as a pair of standard involute gear with the same normal module and normal pressure angle. At first, the standard involute curve like cutting edges for two types of rake flanks was calculated by a method in [20]. Then, the cutting edge curve and the enveloping curves on the gear transversal section for case 1 are demonstrated as in Fig. 6(a), in which the distinguished enveloping curves were also involute curves in Fig. 6b. As shown in Fig. 6c, the cutters for case 1 and case 2 adopted different rake flanks. The swept volumes of cutting edge relative to the gear blank for case 1 and case 2, which are demonstrated respectively in Fig. 6d and Fig. 
6e, show difference as well as the geometries of their instant contact curve. Compared with the standard involute curve, the deviations of machined profile for these two cases are illustrated in Fig. 7, in which the maximum deviations were no more than 0.15 μm. The accuracy indicates that the developed method is capable to simulate the machining error. In practical machining process, various teeth surface like crowning and conical might be adopted to avoid the interferences on the part gear shoulder and to obtain specific working performance. In common, the teeth surface can be corrected by properly changing the polynomial coefficient of cutting path along the full axial feeding range. Unlike traditional analytical methods, the developed numerical enveloping simulation method was used to study the machined teeth surface by power skiving with two types of center distance K e (t) for both the two cutting edge configurations. A linear radial motion K e (t) = E 0 +0.0067t-0.04 and a quadratic radial motion K e (t) = E 0 -0.0025t 2 + 0.03t-0.09 are carried out on the power skiving cutter as shown in Fig. 8a and b, respectively. A constant axial feed speed 1 mm/s was performed for the full axial length that started at L = − 6 mm and exited at L = 6 mm. In Fig. 8, the theoretical tooth surface is represented by the gray grid, and the machined tooth surface is defined by the contact curve as blue grid. One can find out the teeth surface performed a taper shape after linear radial motion, in which the deviations were about 24 μm at L = 6 mm, and the deviations were nearly − 23 μm at L = − 6mm, and both sides of the tooth surface showed the similar topography but following their contact curve, respectively. The simulation for the quadratic radial motion is provided in Fig. 8b, where the machined tooth surface performed a parabolic variation over the full length of tooth. The deviations were about − 34 μm at the two ends of the tooth surface and approached to the minimum nearly 0 μm at the middle. Besides, the small differences of the corresponding grid nodes of tooth surface between case 1 and case 2 are reasonable, since the distributions of contact curve are different in these two cases are different. In all, these results were consistent with the anticipated correction for teeth; i.e., linear polynomial radial motion generates a cone-shaped correction, and quadratic polynomial radial motion produces a drum-like correction. Simulations implemented by commercial software VERICUT are provided to proof the effectiveness of developed numerical simulation method. As shown in Fig. 9, a raw gear blank is fixed for machining with specific cutting edge curve, and the linear radial motion and quadric radial motion with cutting edge as case 1 are performed, respectively. Through the color map of machining residual of the slot surface, without considering the machining residual cusps, one can see that the top section curve shows undercut about 25 μm, while the bottom section curve shows overcut about 20 μm, consisting with the simulation in Fig. 8. Meanwhile, the quadric radial motion demonstrates overcut about 25-30 μm on the two end section while zero-machining error in the middle section of the slot. The same trend as provided in Fig. 8 indicates that the developed model is capable to simulate the power skiving machining process with radial motion like gear crowning. 
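To connect the reported tooth-flank topographies with the commanded tool path, the sketch below simply evaluates the linear and quadratic radial motions K_e(t) quoted above across the constant 1 mm/s axial feed from L = -6 mm to L = 6 mm, assuming the axial position maps to time as t = L + 6 s. It only tabulates the commanded centre-distance offset; the mapping from this offset to the actual flank deviation depends on the contact geometry, so the magnitudes differ from the micrometre deviations reported in the text.

```python
import numpy as np

E0 = 0.0  # nominal centre distance used as reference; only the offset from E0 is of interest here

def Ke_linear(t):
    """Linear radial motion quoted in the text: K_e(t) = E0 + 0.0067*t - 0.04 (mm)."""
    return E0 + 0.0067 * t - 0.04

def Ke_quadratic(t):
    """Quadratic radial motion quoted in the text: K_e(t) = E0 - 0.0025*t**2 + 0.03*t - 0.09 (mm)."""
    return E0 - 0.0025 * t**2 + 0.03 * t - 0.09

# Constant axial feed of 1 mm/s over the tooth length L = -6 ... +6 mm (t = L + 6, in seconds)
L = np.linspace(-6.0, 6.0, 13)
t = L + 6.0

for name, Ke in (("linear", Ke_linear), ("quadratic", Ke_quadratic)):
    offset_um = (Ke(t) - E0) * 1000.0   # commanded radial offset in micrometres
    print(f"{name:9s}", np.round(offset_um, 1))
```

The linear law produces an offset that changes sign monotonically along the tooth (a cone-shaped correction), while the quadratic law is extremal at the two ends and passes through zero near the middle (a drum-like correction), consistent with the trends described above.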
Simulation for cycloid wheel power skiving Looking to prove the generality of developed method, power skiving for cycloid wheel that is adopted in RV reducer was simulated. The parameters for the short epicycloid equation, the power skiving configuration, and the numerical simulation are listed in Table. 2. The whole simulation process is demonstrated as in Fig. 10. Firstly, according to the equidistance line of short epicycloid shown in Fig. 10a, the cutting edge curve is calculated by [20], and the external power skiving is set up as in Fig. 10b. Then, the machined profile is distinguished through enveloping all the intersection curves on transversal of wheel as in Fig. 10c. Through inversely tracing the intersection point on the rake flank specified by the recorded time serial number of each profile point, the instant contact point is extracted in work coordinate system. Consequently, the complete contact curve and the swept volume of cutting edge are obtained as in Fig. 10d. Additionally, to verify the robustness of this method, a complex machining configuration with cubic radial motion K e (t) = 0.000125t 3 -0.0045t 2 + 0.054t-0.216 for t = [0, 12] and K e (t) = − 0.000125t 3 + 0.0045t 2 -0.054t + 0.216 for t = (12,24) is applied on the power skiving cutter for the cycloid wheel machining as shown in Fig. 11. The theoretical tooth surface was represented by gray grid, and the machined surface is illustrated by contact curves in blue grid by sampling nodes. The deviations on the two ends of tooth surface were maximum and were reduced to the minimum near to the middle at each axial path (from A to K). It is consistent with the desired drum-like tooth correction via cubic polynomial radial motion. As the presented simulations shown, the proposed mathematic model for general profile power skiving is capable to simulate the skiving motion and to evaluate the results of power skiving, and it may be useful for power skiving CNC machine developers as well as the gear manufacturers. Besides, properly changing the polynomial coefficient of feed motions is a solution of gear correction for power skiving. Conclusions This work developed a mathematical model of power skiving with strong robustness to simulate the instant contact curve for dynamic machining process and general profile tasks via adopting numerical enveloping ideology. This model is verified and applied by simulations with polynomial radial motions. The main conclusions are drawn as follows: (i) The proposed mathematic model for power skiving includes not only the general configuration parameters but also the center distance formulated as polynomial of time. It is effective for diverse dynamic skiving process modelling rather than the traditional constant kinematic model. Besides, this method takes a strong robustness thanks to the numerical discrete enveloping manner, which ensures that this method is successful in dealing with general profiles like cycloid and in overcoming the overcuts in analytical methods. the contact curves in axial distribution since the numerical calculation principle. The simulations for involute gear and cycloid wheel machining validate that the machining deviations on teeth surfaces are consistent with the assigned polynomial radial motion on the center distance as well as the simulation results via VERICUT, i.e., linear, parabolic, and cubic, which indicate that properly changing the polynomial coefficient of radial motions is a solution of gear correction for power skiving. 
(iv) As a numerical method, its accuracy will be affected by the number of profile point and the number of locus curves. Moreover, this method takes the assumption that cutting tooth performs continuous axial feeding motion along one slot, leading to it lack to identify the residual machining height on teeth surface. Further study will be devoted to involving the periodical meshing of cutting teeth during skiving process.
COMPATIBILITY OF THE JAPANESE EDUCATION PATTERN “KYOIKU MAMA” WITH ISLAMIC EDUCATION Japan as a developed country is certainly born from a superior education pattern so that it can outperform other countries. The "kyoiku mama" education pattern used by Japanese people has a major impact on the progress of the Japanese State. Mother as a child educator leads Japan to advances in science and technology. Not only Japan, Islamic education is also a mother as an early educator for Muslim children. This research is discussed using the literature study method that uses a variety of references in achieving the objectives of this study. The purpose of this study was to find a common thread between the "kyoiku mama" education pattern and the Islamic education pattern. The results obtained are three common threads, namely a) the degree of the mother is equally exalted, b) the mother is used as an example for her children, and c) the mother becomes a generation-producing agent that will change the environment, society and the country towards something better. Introduction The progress of a nation lies in education. If education in a nation is good, eating will lead to good national progress as well. Japan is a developed country. Advancing in this research is focused on advancing in the field of education. The advancement of Japanese education is proven by the birth of scientific and technological advances in Japan that can be recognized for its reliability by other countries. A good Japanese education is very much determined by the education pattern used by the Japanese government. One of the educational patterns used by Japanese people is known as "kyoiku mama". This term describes the Japanese education system as being strongly influenced by the role of a mother. Mother as an educator is the slogan of Japanese education. The mother is responsible for educating the child's character as early as possible. It is believed that good character instilled in the family will lead to the progress of the nation. The character education which is the flagship of the Japanese State originates from a family, especially a mother. A mother instills character from the process of conceiving until the child becomes an adult. Mother instills simple characters as an example, saying thank you, asking and giving forgiveness, putting things back in their place, using money only for important things, not giving extra pocket money, and teaching simplicity in life. This simple matter is not only a theory but also a direct practice given by a mother to her child. A career in an industry is not something to be proud of for mothers in Japan, but being a housewife who raises and educates her child to succeed is the true success of a mother in Japan. The community will really appreciate a mother who is successful in educating her child so that it can be useful for the community. Domestic work is not an underrated job in Japan. Prophet Muhammad SAW who brought goodness to all people through the teachings of Islam also explained the role of women in Islam. Women in Islam are glorified. This is evidenced by the hadith of the Prophet Muhammad which says that heaven is located on the feet of the mother. That is the important role of mother in this life. Mother becomes a very central position in the family in Islam. A woman in Islam is responsible for raising her child in a loving way. Women are also responsible for providing good to their families and environment. Women in Islam must also be able to become leaders for themselves. 
Mother is a woman who should be emulated by her child. A child is a deposit that must be looked after by a mother and prayed for kindness. There is an immense reward that is promised by Allah for the role of a mother. The purpose of this research is to obtain a theoretical common thread related to the "kyoiku mama" education pattern with Islamic education. Literature review is carried out in order to obtain a clear thread between the two. The benefit to be obtained in this research is the existence of a clear additional reference which is able to show the suitability of the Japanese education pattern "Kyoiku Mama" with the education recommended by Islam, especially in the role of mothers in educating children in realizing an advanced civilization of the nation. The limitation in this study is the educational pattern in the form of the role of a mother in instilling children's character that can advance a nation. The research relevant to this research is the research of Nani Sunarni and Eka Kurnia Firmansyah in 2020 entitled "Citra Perempuan dalam Peribahasa Jepang menurut Norma dan Pandangan Islam". This study discusses several Japanese proverbs and is studied in Islamic terms. This discriminatory assessment of women in Islam is corrected in this article. Islam has never demeaned women, and this has been explained in theory in this article. The difference between this study and previous research is that this study wants to see the compatibility between the role of mothers in educating Japanese children and the role of mothers in educating children in Islam. This maternal education pattern in Japan is known as "Kyoiku Mama". This term wants to be found in relation to the pattern of Islamic education taught by the Prophet Muhammad SAW. Japanese Education Pattern "Kyoiku Mama" The curriculum is an educational program designed with attention to the development and needs of students as well as the expectations of parents and other communities for the school and the expertise of teachers in educating their students. In determining the educational goals of Japanese students, the Japanese Government relies on "Fundamental Law of Education" and School Education Law". Basic education law and school education law are the basis for determining educational goals, education delivery guidelines, and school goals at every level of education (Komatsu, 2002). The curriculum in Japan focuses on character education. Character education is deeply instilled in students in Japan. Character education in Japan is better known as moral education which is part of teaching education in Japan. The purpose of moral education for elementary school children is to make their students blend in social life both as individuals and as members of society (Cipta, 2017). The objectives of providing moral education to students in Japan are to: a) Build mutual respect for life and humans, b) Participate in developing traditional Japanese culture into a quality culture, c) Producing individuals who uphold the democracy of their country, d) Producing individuals who can maintain international peace, e) Build a spirit of independence, f) Build character that can uphold (McCullough, 2008). Character education in Japanese society starts from the family. The family in Japan is still the same as the family in any country. The Japanese family of course also consists of father, mother, and several children born to their mother's womb. 
Friedman (1998) states that a family is a collection of two or more people who have a bond that always shares experiences, has an emotional approach, and knows that they are part of a particular family. Family is the main element in a society. The family is an important figure in Japanese education. In the family, Japanese children can learn character. Moral cultivation is the duty of schools, families and communities (Junaedi, 2017). Mothers have a very important role in the education of children in Japan. The term Kyoiku Mama is an important term for children's education in Japan. The meaning of this term is that a mother will always encourage her child to learn, balancing children's education in physical, emotional and social terms (Syamsurrijal, 2018). Furthermore, Widisuseno (2018) also states that kyoiku mama is a term used by Japanese people as a mother educator. Kyouki mama also has an educational meaning related to women (Benedict, 1979). Kyouki mama is a concept of thinking in Japanese society which describes the figure of a mother who is assertive and disciplined in motivating their children towards the implementation of formal and non-formal, physical, social, and emotional education. Even a mother who has a higher education in Japan is willing to leave her career to carry out her noble duties as a mother in Japan (Sunarni, 2020). Women are the masters of the house who control household chores, finances and children's education Widisuseno (2018). Kyoiku mama is a slogan for Japanese public education which is the key to success in the success of education in Japan (Srimulyani, 2016). Mothers in Japan prefer to be housewives at home. Mothers in Japan prefer to stay with their children at home and outside the home (Suseno, 2018). Mothers in Japan are very serious in caring for their children, supporting every stage of children's education starting from choosing their child's kindergarten to taking seriously the best university for their children (Simons, 1991). Parents and teachers in Japan synergize with each other in educating their children, especially in terms of character (Burke, 2013). Women in Japan are Japan's greatest strength in building their nation. Women raise and educate their children in order to build a better quality Japanese society. There is a slogan used in the Meiji government regarding women, namely ryosai kenbo, which means a good wife and a wise mother. Mothers who are wise for their children are especially wise in educating their children (Ariefa, 2020). In a Dickensheets (1998) paper entitled "The Role of Education Mother" explains that the mother is the manager of the household and the caregiver of the child while the sibu's husband works outside. Kyoiku mama is a concrete form of the role of women in instilling the character of children in Japanese families. Kyoiku mama has the meaning of education from a mother. A mother is responsible for educating and teaching her child to instill the correct character, morals and ethics. Cultivating character has been started since the child was born by the mother. Kyoiku mama began to be developed in Japan in the second half of the 20th century. Japanese women have the belief that if they succeed in educating their children, society will consider them successful in society. Women in Japan get appreciation from society for their success in educating their children. The success of a mother in society is judged by her success in educating her children, especially in terms of character. 
Through such a mindset, it encourages mothers in Japanese society to compete in educating and teaching their children to be successful, have character, and be beneficial to society and the nation (Mulyadi, 2014). The success of kyoiku mama in instilling children's character from an early age from the family, so the school does not need to be too serious in instilling character. Through kyoiku mama, students have received character education directly and indirectly by their parents at home, especially by their mother. This causes a teacher in Japan to have sufficient time to provide knowledge without having to have branching thoughts to shape the character of their students (Mulyadi, 2014). The character values found in society that are developed in a child are simplicity, cooperation, discipline, hard work, order, shame, mutual respect and respect. A mother considers her child as "ikigai" which means that the child is a valuable asset that will affect the good name of the family. Therefore, it is important for a mother to educate the character of her child (Mulyadi, 2014). Parents in Japan do not teach their children to spend money on unnecessary things. Japanese parents teach their children to use money for things that are really necessary or important. This illustrates the simple life instilled in children by their parents. Parents in Japan will not buy their children a motorbike or car to go to school, children in Japan are educated to ride a bicycle or ride a train as a public facility that they can use to go to school. The allowance that is given to children in Japan is not given excessively and there are also those that are saved directly by the children. With such a pattern, children who have capable parents will not behave arrogantly, while children who have poor parents will not have a sense of self-doubt (Mulyadi, 2014). Simple discipline is taught by Japanese parents from an early age. Simple discipline is meant for example, such as putting an object back in its place after use, being on time for dinner, on time for sleeping and waking up, watching TV at TV watching hours, and playing during playtime. Parents in Japan give warnings or light sentences to their children if they break the rules they have agreed upon. The habit of saying greetings when going out of the house or entering the house, thanking anyone for getting help, apologizing if you have made a mistake and correcting it have been taught with great seriousness in Japanese families. This simple habit is believed to be able to bring Japanese children into a generation with character and advance their country (Mulyadi, 2014). The family is the place to instill the first character education for Japanese children. Family plays a very important role in shaping the character of children in Japan. This role is of course inseparable from the awareness of parents in Japan that the responsibility to instill character is not the responsibility of the school or the community where they live but it is their responsibility that they must carry out. This awareness is what makes Japanese parents serious in educating good character in their children (Mulyadi, 2014). 
Simple things that are instilled in Japanese children include a) Post it Acknowledgments (Arigatou Posuto Itto), these words are attached with the meaning of wanting to cultivate gratitude in the classroom environment, for example, a child is able to say thank you to a friend who has lent him an object or sharing food at lunchtime, through this kind of culture a child will not forget to thank others who have helped with his work; b) Environmental safety map (Chiiki Anzen Mappu), is a concept that teaches Japanese children to care for the environment around them. The map is made by students with the aim of reminding themselves and the community to always protect the environment; c) Mutual cooperation, making class picket schedules that are agreed upon by students together and posted in the classroom. There is no janitor who will clean the classroom where they study, the students will clean the classroom where they study; d) Having a goal, students are accustomed to writing their targets under their respective photos. The targets written by students are simple targets in a monthly or annual period; e) Newspaper (Tegaki Shinbun), This newspaper is made by students with an attractive appearance; 6) Build empathy, do not need to talk a lot which will make students bored but say enough, which is important so that students are taught to have high morale, discipline, and high creativity (Gumilang, 2019). Islamic Education Women in Islam have several obligations including a) as mothers for their children, mothers in Islam have a role in educating their children properly. In Islam, children are entrusted and mandated by Allah which must be taken care of by a mother who will continue the khalifah fil ardi on this earth. A child will pass what their parents have taught to their next child; b) as khalifah fil ardhi, Allah said in Sura Al-Baqarah verse 30 which means "Remember when your Lord said to the angels: Verily I want to make a caliph on earth." They said: "Why do you want to make (caliph) on earth a person who will cause damage to him and shed blood, even though we always praise you and purify you?" God says: "Verily I know what you do not know". Women in Islam must also be able to build a better life for themselves, their families, their communities and all humans; c) responsible for the environment and society, a woman in Islam is responsible for advancing the environment and society. What women can help is by advancing their children to benefit the environment and the surrounding community (Sunarni, 2020). Women at the time of the prophet are often told in the Koran. Women who are role models who have noble character and women who have bad attitudes are written in the Koran (Sunarni, 2020). The status of women has been exalted since the advent of Islam brought by the Prophet Muhammad SAW. The Prophet Muhammad SAW put heaven on the feet of the mother is a form of grace that is found in women (Bin Ladjamudin, 2015). Women in Islam are also described as being given advantages in conceiving, giving birth, breastfeeding, caring for, and educating a child. The mother educates the child from the womb to the adult. If women carry out their duties properly, it is tantamount to jihad which rewards the same as men's jihad in the battlefield against evil. Material and Methods The research that has been carried out is analyzed or discussed using the literature study method by collecting data and information from various references. 
The references that the researchers took came from national and international articles, national and international proceedings, as well as web systems related to the research being carried out. A research result that uses the librarian study method will have high credibility if it is accompanied by real physical evidence (Sugiyono, 2005). Results and Discussion Education is very important for a child. The education of a child is certainly a determinant of the sustainability of a nation's progress. The quoiku mama education pattern implemented by the Japanese government was in fact very supportive of the progress of the Japanese nation. Kyoiku mama means that a mother in Japan educates her children wholeheartedly. A mother must be passionate about educating her child in Japan (Burke, 2013). Children in Japan are educated by their mothers from womb to adulthood. Educating is a very fun job for mothers in Japan. Children in Japan are not cared for by paid caregivers but are cared for directly by their birth mother. Japanese mothers instill character from childhood to their children. Simple things can lead their children to success. Simple character is believed to be able to change a nation. This simple character is able to produce a habit that leads to big changes in children, families, communities, and even the nation (Mulyadi, 2014). The simple things that are taught to children are immediately exemplified by the mothers of children in Japan and then a child will imitate the habits of the mother. Japanese mothers get their children used to being on time in doing something that has been mutually agreed upon. Children in Japan are accustomed to say thank you, say sorry, and correct their mistakes. Children in Japan are taught to say positive things that are fun for all elements of society (Gumilang, 2019). This is also suitably supported by the Japanese curriculum where children make posters with their own works of character, create their own disciplinary schedule and follow it with a commitment to implement it appropriately. Schools and families together build national civilization. Women in Japan are also highly valued in society, especially if these women are able to produce successful children who are useful for the life of society and the nation. Japanese mothers are highly valued in society if they are successful in educating their children (Mulyadi, 2014). Japanese mothers prefer to be comfortable mothers at home with a myriad of responsibilities attached to them. Japanese mothers are not interested in a career in the world of work to raise their degrees, simply by succeeding their children, they will be able to raise the status of Japanese mothers to a higher level. Islam also exalts mothers. Mothers are honored in Islamic Education (Bin Ladjamudin, 2015). A child will not enter heaven if he is not devoted to his parents. If you want to get to Allah's heaven, then devote yourself to the mother who has cleansed, raised, and educated you. Mothers in Islam are also responsible for providing education to their children. Mothers in Islam support every stage of their child's education including character education. Mother is a role model for children in Islam. Mother in Islam is believed to be able to build a proper national civilization (Sunarni, 2020). There appears a common thread between Japanese education and the education recommended by Islam. This is only in one aspect, namely the mother, let alone in other aspects. Mother was equally exalted and exalted in Japanese and Islamic rule. 
Mother provides improvements to the civilization of a nation. Mother can advance the nation. This has been proven by Japanese society. Mother becomes the pillar in the family, society and the State. If a country wants to be good, then the mother must be good first. Japanese education hands over the responsibility of educating a child's character to a mother, this is also in accordance with Islamic education which educates children from the start of the womb until the child reaches maturity. Education "Kyoiku mama" requires a mother to be an example for her children in character. Islam emphasized to a mother that her child is a trust from Allah who will be directed to goodness by the mother. If good is taught, then the results will be good too. Furthermore, Japanese education "kyoiku mama" believes that the sacrifice of a mother in Japan today will be able to bring change to society and the country. Islam gives a big task to mothers in which mothers are responsible for the environment and society. Mothers are responsible for producing generations that provide the goodness of the world and the hereafter to the surrounding environment. Recommendations This research is limited to finding a common thread in the Japanese educational slogan kouiku mama with Islamic education that has been brought by the Prophet Muhammad. Mother is the focus of this research. In further research, it is expected to find a common thread on the role of fathers in educating children in Japan with those taught by the sunnah of the Prophet Muhammad SAW. Of course, it must be supported by clear references and proper evidence. Conclusion The conclusion obtained in this study is that the Japanese education pattern "kyoiku mama" is in accordance with the pattern of Islamic education. There are several common threads that state the suitability between these two educational patterns including a) the Japanese education pattern "kyoiku mama" and the Islamic education pattern both elevate the status of women, especially in educating the character of a child, b) the Japanese education pattern "kyoiku mama" and patterns Islamic education is equally a mother as a role model in life, and c) the Japanese "kyoiku mama" education pattern and the Islamic education pattern equally give responsibility to a mother in bringing about good change for the environment, society, and Country.
Electron energy-loss spectroscopy of boron-doped layers in amorphous thin film silicon solar cells . INTRODUCTION The roll-to-roll production of n-i-p thin film Si solar cells is a promising way to achieve large volume production of cheap and efficient devices on lightweight, flexible supports, since the layers can be deposited directly onto opaque substrates at low temperature.Both amorphous (a-Si:H) and microcrystalline (lc-Si:H) hydrogenated Si solar cells are currently grown on glass, plastic, and steel foil substrates, both in the laboratory and on industrial scales. 1 For n-i-p solar cells, the Si deposition sequence starts from n-doped Si, followed by the intrinsic (i) and p-doped Si layers.The i-Si:H layer is therefore sandwiched between $10-nm-thick n-and p-doped a-Si:H layers, which need to satisfy the two opposing requirements of high conductivity and low carrier recombination rate.In addition, the p-doped layer should be as transparent as possible to reduce parasitic optical absorption.One approach that can be used to increase its transparency is to alloy it with C to increase its optical band gap. 2 It is also important to confine the dopant to the p layer to limit B contamination of the i-Si layer, which degrades the carrier mobility 3 and the local atomic order 4 and weakens the strength of the electric field in the i-layer near the p/i interface resulting in a lower carrier collection efficiency. 5On flat thin film Si solar cells, Kroll et al. showed a clear correlation between degradation of solar cell performance and B contamination of the i-Si:H layer by using secondary ion mass spectrometry (SIMS). 5The impact of boron-oxygenrelated recombination centres has been studied extensively for B-doped Czochralski-grown Si, 6 as they ultimately limit the carrier lifetime. 7For all of these reasons, it is important to be able to measure the B concentration in p-and i-doped a-Si:H layers with nanometre spatial resolution and to correlate these measurements with solar cell performance. Our preliminary measurements of B concentration using core-loss electron energy-loss spectroscopy (EELS) in the transmission electron microscope (TEM), which are reported elsewhere, 8 required the use of a long acquisition time to detect low B concentrations in a-Si:C:H and a-Si:H layers and careful analysis to extract the B K edge from the Si L 2,3 fine structure.Other attempts in the literature to measure low B concentrations using EELS have reported detection limits for B of 0.2% with 10% accuracy in B-doped C, 9 0.5% in Ni 3 Al (Ref.10), and 1% in Si (Ref.11).In a recent study, Asayama et al. managed to detect 0.2.%B in a p-type Si device using a spherical aberration corrected scanning TEM (STEM). 12The difficulty of such measurements results in part from the fact that the energy-loss near-edge structure (ELNES) from the Si L edge, caused by scattering of innershell electrons to the conduction band, extends from 99 to 300 eV and interferes with the B K edge at 188 eV.In addition, the Si L 2,3 edge cross-section is five times larger than that of the B K edge.When combined with the fact that the B concentration is only a few percent, the Si peak intensity is hundreds of times larger than that of the B K edge. 
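To put these numbers in perspective, the short sketch below, which is not from the paper, combines only the roughly fivefold Si L2,3/B K cross-section ratio and the few-percent B concentrations quoted above to estimate how strongly the Si signal dominates.

```python
# Back-of-the-envelope estimate of the Si L2,3 / B K edge intensity ratio.
# An edge intensity scales roughly as (atomic fraction) x (ionisation cross-section),
# so for a few at.% B in Si the Si signal dominates by about two orders of magnitude.
# Only the ~5x cross-section ratio and the 1-5 at.% B range are taken from the text;
# the function itself is a generic illustration, not the paper's quantification scheme.

def si_to_b_intensity_ratio(c_boron, sigma_si_over_sigma_b=5.0):
    """Approximate Si L2,3 / B K intensity ratio for a B atomic fraction c_boron."""
    return (1.0 - c_boron) * sigma_si_over_sigma_b / c_boron

for c_b in (0.01, 0.02, 0.05):
    print(f"{c_b:.0%} B -> Si L2,3 / B K intensity ratio ~ {si_to_b_intensity_ratio(c_b):.0f}")
```

For 1 at.% B this estimate gives a ratio of roughly 500, consistent with the statement that the Si intensity is hundreds of times larger than that of the B K edge.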
In addition to measurements of B concentration, it is equally important to determine the proportion of B atoms that is electrically active in the p-doped layer in an a-Si:H solar cell. In the free electron model, the volume plasmon energy is proportional to the square root of the valence electron density, while the line-width of the plasmon resonance is inversely proportional to the relaxation time of the plasmon oscillation. 13 For high B concentrations (up to 24%), a volume plasmon energy shift of 0.8 eV has been measured using X-ray photoelectron spectroscopy. 4 In the first part of this paper, we describe the growth of our a-Si:H layers using plasma-enhanced chemical vapour deposition (PECVD). We then present a detailed explanation of the methodology that we use to determine the B concentration with core-loss EELS. Our measured B concentrations are compared with SIMS measurements from the same samples. In the third part of the paper, variations in volume plasmon energy across the doped layers are measured and discussed with reference to their local chemical and electrical properties. EXPERIMENTAL DETAILS Layer deposition was carried out in a Flexicoat300 industrial pilot roll-to-roll system, which is used for PECVD growth of doped and intrinsic Si layers on steel foils that have widths of up to 300 mm using three in-line deposition chambers (Fig. 1). Two of the chambers are equipped with linear symmetric RF (13.56 MHz) sources 14 that are suitable for the deposition of amorphous or micro-crystalline doped Si layers. The intrinsic Si:H absorber layers are deposited using a linear very high frequency (VHF) plasma source (70 MHz). The use of different chambers to deposit the differently-doped layers minimises possible cross-contamination and subsequent contamination of the intrinsic layer. Test samples (10 × 2.5 cm² in size) were fixed onto a custom-made sample holder, which was placed on the steel foil. Real solar cell samples were grown directly on steel foil. Samples were prepared for TEM observation using a standard lift-out procedure in a dual-beam FEI Helios focused ion beam (FIB) workstation. ~200 nm of electron-beam-deposited Pt and ~1 μm of ion-beam-deposited Pt were used to avoid degradation of the samples due to Ga²⁺ implantation. Coarse FIB milling was carried out using a 30 kV ion beam, while final milling was performed using a 1 kV ion beam. Each sample was cleaned at 500 V using a focused Ar beam in a Fischione Nanomill system to remove the Ga-contaminated surface layer.
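The free-electron relation between valence electron density and plasmon energy mentioned in the introduction can be illustrated numerically. The sketch below is not part of the paper's analysis: the 17 eV reference value is simply a round number close to the plasmon energies reported later for these films, and the densities are whatever the textbook free-electron formula implies.

```python
# Minimal sketch of the free-electron relation E_p = hbar*sqrt(n*e^2/(eps0*m0))
# between valence electron density n and volume plasmon energy E_p.
# The relation is standard; the 17 eV reference energy below is an assumed round
# value for illustration, not a measured quantity from this paper.
import numpy as np
from scipy import constants as k

def plasmon_energy_eV(n_per_m3):
    """Free-electron volume plasmon energy (eV) for a valence electron density n (m^-3)."""
    omega_p = np.sqrt(n_per_m3 * k.e**2 / (k.epsilon_0 * k.m_e))
    return k.hbar * omega_p / k.e

def density_from_plasmon_energy(e_p_eV):
    """Valence electron density (m^-3) corresponding to a plasmon energy e_p_eV (eV)."""
    omega_p = e_p_eV * k.e / k.hbar
    return omega_p**2 * k.epsilon_0 * k.m_e / k.e**2

n0 = density_from_plasmon_energy(17.0)
print(f"density for a 17 eV plasmon: {n0:.3e} m^-3")
print(f"round-trip check: {plasmon_energy_eV(n0):.2f} eV")
# E_p scales with sqrt(n), so a 0.8 eV shift at ~17 eV needs roughly 10% more electrons:
print(f"relative density change for +0.8 eV: {density_from_plasmon_energy(17.8) / n0 - 1:.1%}")
```

Under this simple model the 0.8 eV shift reported for 24% B corresponds to a change of about 10% in effective valence electron density, which puts the much smaller shifts expected at ~1 at.% doping into perspective.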
Core-loss EEL spectra were acquired in TEM diffraction mode (image coupling to the spectrometer) at 120 kV in an FEI Tecnai microscope.We chose a collection semi-angle of 10 mrad using an objective aperture and a low camera length to maximise the signal when using a 2 mm Gatan imaging filter (GIF) entrance aperture, while maintaining a reasonable signal-to-noise ratio.For core-loss measurements, the nanobeam mode of the microscope and a small condenser aperture were used to form a 50 nm parallel electron beam for the test samples and to reduce the beam diameter to $3 nm for measurements from the p-doped layer in the real solar cell.For long acquisition times (with a high number of counts), channel-to-channel gain variation in the GIF camera is the dominant source of artefacts in the recorded spectra.We therefore used an iterative averaging procedure, which is described in detail elsewhere, 10 to reduce channel-to-channel gain variation.The sample thickness was measured by applying the "log-ratio method" to the low-loss EELS intensity and found to be $80 nm.In order to check for the effect of plural scattering, the Si ELNES from the $80-nm-thick sample was compared after Fourier deconvolution and with measurements obtained from a $20-nm-thick sample.Apart from a small difference arising from the contribution of the volume plasmon peak due to plural scattering, the same peaks were present in the ELNES in each case.As the contribution from the volume plasmon peak was small and in order to avoid introducing additional noise in the spectra due to deconvolution, all of the results presented below were obtained from $80-nm-thick TEM lamellae without performing any deconvolution. Low-loss EEL spectra were acquired at 120 kV in STEM mode using a dispersion of 0.02 eV/pixel, while simultaneously collecting high-angle angular-dark-field (HAADF) images in an FEI Titan probe-corrected TEM.The typical acquisition time was $1 s for each spectrum and the total number of spectra was $500.Sample drift during acquisition was taken into account by using cross-correlation of images acquired every five spectra.The collection semi-angle of 10 mrad was defined by the objective aperture.The cut-off angle for scattering from volume plasmons in Si is 6.5 mrad (for higher angles, plasmons become highly damped as they can transfer all of their energy to single electrons by the creation of electron-hole pairs 13 ).A 10 mrad collection angle is therefore small enough to avoid too large a contribution to the spectrum from single electron excitations, although contributions from different scattering angles will cause a slight broadening of the plasmon peak and a shift in its energy when compared to the plasmon peak energy expected for completely free electrons. A scanning electron microscope (SEM) image of a typical cross-sectional FIB-prepared solar cell is shown in Fig. 2(a contact.The C concentration in the top layer is expected to be $20 at.%.We refer to this layer as p-doped a-Si:C:H.A bright-field (BF) TEM image of the p-doped a-Si:C:H layer, which is studied in detail below, is shown in Fig. 2(b).The a-Si:H and p-doped a-Si:C:H regions have similar BF contrast and cannot be distinguished from each other in Fig. 
2(b).The total Si layer thickness is $400 nm while the expected thickness of the p-doped a-Si:C:H layer is $15 nm.The $5 nm roughness of the p-doped a-Si:C:H/ITO interface reduces the depth resolution of SIMS measurements of the B concentration.For such a thin layer, a TEM beam diameter that is smaller than the layer thickness must be used, while optimizing the beam current density and acquisition time to limit irradiation damage.Due to these limitations, two additional test samples were prepared to provide complementary EELS measurements.The layer structure in test sample 1 (Fig. 2(c)) is: a-Si:H (i), p-doped a-Si:H with a B concentration similar to that of the p-doped layer in the solar cell (p) and a-Si:H deposited using twice the diborane concentration (p þ ).Test sample 2 is identical to test sample 1, but all of the Si layers are alloyed with C in the same proportion as in the p-doped layer in the real solar cell.The thickness of each layer in the test samples is $200 nm. MEASUREMENT OF B CONCENTRATION USING CORE-LOSS EELS Background-substracted EEL spectra recorded at the position of the Si L 2,3 edge from the three layers with different B concentrations (undoped, lightly doped, and highly doped) in the two test samples (a-Si:C:H and a a-Si:H) are shown in Fig. 3.The shoulder at 284 eV is attributed to the C K edge for the a-Si:C:H sample and is not observed for the a-Si:H sample.Elemental Si and C concentrations were determined from the spectra by subtracting the background from the Si L 2,3 and C K pre-edge regions using standard inverse power laws and integrating the resulting signals over an energy range between 5 and 20 eV beyond the edge onset.Cross-sections were determined using the Hartree-Slater model 15 and are given in Table I.Although the measured concentrations should be independent of the width of the integration window, variations result from errors in background subtraction and in the calculated cross-sections.An average of the C concentrations measured using different integration windows provides a value of 15.2% 6 1% for the a-Si:C:H layers (Table I). The B K edge has a much smaller cross-section than the Si L edge (see Table I) and lies on an oscillating background arising from the fine structure of the Si L 2,3 and L 1 edges, making it difficult to determine the origin of the peaks observed in this energy range.In the present study, interpretation of the recorded spectra was facilitated by comparing results obtained from differently doped specimens and by removing the background under the B K edge in stages.First, the contribution of the structure of the Si L 2,3 edge to the background below the Si L 1 edge was removed from the background-substracted Si L 2,3 edge spectra by using a lognormal function of the form: where E is energy loss and the constants A, r, E 0 are fitted over a chosen energy range using a least-squares fitting method implemented in Gnuplot software. 16A similar approach was used by Poe et al. to remove contributions to the atomic continuum after the Si K and L 2,3 edges to examine mixed Si coordination compounds varying in Si VI :Si IV ratios. 17The resulting background-substracted Si L 1 edge spectra are shown in Fig. 4 for the two test samples.The effect of the presence of C on the local structure of the amorphous material can be seen in the form of differences between Figs. 
4(a) and 4(b), which result from the different atomic coordinations and interatomic distances of Si atoms when they are surrounded by C instead of Si.The Si-Si and Si-C bond lengths have been computed and measured experimentally for a-SiC:H compounds to be 0.23 and 0.19 nm, respectively. 18The B K edge at 188 eV is now more visible for the two highly doped (p þ ) layers.For the lightly doped (p) samples, the B K edge is not as clearly visible on the Si L 1 edge fine structure. In order to complete the separation of the B K edge from the Si fine structure, a conventional power law background subtraction was performed on the spectra shown in Fig. 4 and both the B K edge and the remaining Si fine structure at $225 eV were fitted to log-normal functions (Eq.( 1)). Figure 5 shows the best-fitting functions overlaid on the experimental spectra and compared with the results of realspace ab initio calculations of the B K edge performed using FEFF9 9.05 code 19 for the experimental values of acceleration voltage and collection angle.The calculation was performed for 5 at.% B in a disordered cluster, with core hole effects included by using the random phase approximation and the Hedin-Lundqvist self-energy to take inelastic losses into account.A reasonable match is obtained between the fitted and calculated B K edge shapes, although the fitted log-normal function has a slightly larger width.The concentration of B relative to that of Si was estimated from the areas under the log-normal functions for 5, 10, and 20 eV energy windows from the onset of the B K edges at 188 eV, making use of the Hartree-Slater cross sections 15 given in Table I.The observed variation in B concentration with energy window size is likely to result primarily from differences between the calculated and fitted edge shape, from the use of simple log-normal fitting functions that do not take the fine structure of the edge into account and from errors in background subtraction.In the present study, we simply averaged the results obtained using the different energy window sizes to determine final values for the measured B concentrations of 4.9 6 1% and 3.8 6 1% for the highly doped a-Si:C:H and a-Si:H layers and 1.1 6 0.5% and 0.9 6 0.3% for the lightly doped a-Si:C:H and a-Si:H layers, respectively (Table I).These values are within a factor of two of the SIMS measurements, which are also given in Table I.For the highest B concentrations (>1 at.%), the discrepancy between the two measurement techniques may be explained by matrix effects when high concentrations are measured using SIMS. 20he same procedure was used to measure the B concentration across the p-doped layer of the real solar cell from a linescan of EEL spectra acquired using 3 nm probe and step sizes.Figure 6 shows individual spectra after background subtraction of the Si L 2,3 edge.The C K edge is visible in the C-rich p-doped layer.At the same time, a variation in the fine structure of the Si L 1 edge is visible before the onset of the B K edge.The relative B concentration was extracted from Fig. 6 using a 5 eV energy window and is plotted in Fig. 
7 alongside SIMS measurements obtained using two different ion energies. (The best depth resolution is obtained using the lower ion energy at the expense of increased acquisition time.) The shaded areas show the interfaces between the ITO and the p-doped a-Si:C:H layers on the left side and between the p-doped a-Si:C:H and a-Si:H layers on the right side. Further details about the sources of error for SIMS and core-loss EELS quantification are discussed elsewhere. 2 The results suggest that it is possible to use core-loss EELS to measure the concentration of B in the p-doped layer in real solar cells for a concentration of ~1 at.% with a spatial resolution of ~4 nm. CHARACTERIZATION OF p-DOPED a-Si:H USING VOLUME PLASMON MEASUREMENTS The position and the width of the volume plasmon peak depends on the local chemical and electrical properties of a material. Here, we assess whether variations in plasmon energy with B concentration can be measured in the two test samples. The energy dependence of the inelastically scattered intensity is proportional to Im[−1/ε(E)], where ε is the dielectric function, and can be approximated in the free-electron model 12 by the expression Im[−1/ε(E)] = E ΔE_p E_p² / [(E² − E_p²)² + (E ΔE_p)²] (2), where E_p is the plasmon energy and τ is the plasmon relaxation time. The full width at half maximum of this function is given by ΔE_p = ℏ/τ. In the free electron model, the valence electron density (n) is related to the volume plasmon energy by E_p = ℏ (n e²/(ε₀ m₀))^(1/2) (3), where ℏ, e, ε₀, and m₀ are the reduced Planck constant, electron charge, permittivity of free space and electron mass, respectively. In order to determine values for E_p and τ, we fitted volume plasmon peaks measured using EELS to a fitting function (Eq. (4)) in which the last two terms are included to take into account the unknown background contribution to the EELS signal, using a least-squares fitting method implemented in Gnuplot software. 16 For each recorded spectrum, the fits were used to obtain values for E_p and ΔE_p, as well as the fitting error in each parameter. In order to determine as precise a measurement of E_p as possible, the zero-loss peak (ZLP) was always recorded in the same spectrum as the volume plasmon peak and its position was determined by fitting a Lorentzian function to it. The values of E_p and ΔE_p were typically obtained with precisions of ~0.1%. Examples of volume plasmon spectra measured from the two test samples are shown in Fig. 8 for different B and C concentrations, with experimental data points shown using symbols (only one point out of ten is shown for clarity). The plasmon energies obtained by fitting each spectrum to Eq. (4) are indicated by vertical black lines and are slightly higher in energy than the maximum peak positions, in agreement with theory (Eq. (4)). Plots of measured plasmon energy and peak width derived from linescans of spectra acquired across the p⁺, p, and i layers in the Si:C:H and Si:H test specimens are shown in Fig. 9, alongside SIMS profiles measured from the same layers. The similarity between the plots is remarkable, especially the correlation between the decrease in the B concentration and the plasmon energy in the p⁺ layers. Slight differences are measured at the p/i and i/contact layer interfaces which may be related to charge accumulation. The measured plasmon energies are ~17.3, ~17.23, and ~17.2 eV for the p⁺, p, and i Si:C:H layers (Fig. 9(a)) and ~17.25, ~17.13, and ~17.1 eV for the p⁺, p, and i Si:H layers (Fig.
9(b)), respectively.All of the experimental values are higher than that reported for crystalline Si (16.5 eV).This difference may have resulted from contributions to the spectra from different scattering vectors in the 10 mrad collection angle, or from the fact that H atoms have a lower ionization energy (1312 kJ mol À1 ) than Si (786 to 4355 kJ mol À1 for the 4 outer shell electrons), such that electrons provided by H atoms can contribute to the valence electron cloud.These effects are partly compensated by the decrease in atomic density of a-Si:H by $0.827 compared to that of crystalline Si. 21n the assumption that the volume plasmon energy varies linearly with doping concentration C x in the form 22 Table II gives average values of dE p /dC x measured from the experimental data points shown in Fig. 9 different numbers of valence electrons), (iii) effective electron mass, and/or (iv) hydrogen concentration between the layers.For plasmon oscillations, the electron mass in vacuum typically provides reasonably accurate values of plasmon energy for many solids, 13 suggesting that variations in effective electron mass with B concentration may be neglected.Assuming for the moment that there is no significant change in hydrogen concentration, the effects of changes in valence and density can be considered in the form where V at Si and V at B are the volumes of Si and B atoms derived from their covalent radii (84 and 111 pm, respectively), N val Si and N val B are the valences of Si and electrically activated B atoms (4 and 3, respectively, assuming that no electrically active B atoms are passivated by H atoms) and R el is the fraction of B that is electrically active (see Appendix). Table II also gives values of dE p /dC B calculated on the assumption that R el takes a representative value for 1% B doping of $10%, 23,24 that the covalent radius of a C atom is 70 pm and that the average valence of a-Si:C:H is 4. The calculated dE p /dC B values reflect the lower rate of increase in plasmon peak energy for a-Si:C:H compared to a-Si:H, resulting primarily from the smaller average radius of the C-containing compound.Nevertheless, a-Si:H is a complex system and many approximations and assumption have been made in this calculation.In particular a larger covalent radius may be appropriate in a-Si:H for both Si and B (Ref. 21) and a value for the valence of Si atoms of $2.4 has been reported for a-Si. 25he plasmon peak width DE p is inversely proportional to the plasmon relaxation time (see Eq. ( 2)).The measured values of DE p shown in Fig. 9 increase with decreasing B concentration, taking values between 4.7 and 5.4 eV, compared to a value of 3.7 eV calculated for crystalline Si for a low collection angle. 13The 10 mrad collection angle used in the present study may contribute to the broadening of the plasmon peak.The number of oscillations that occur during the plasmon relaxation time E p /(2pDE p ) is 0.7 for crystalline Si. 13 In our samples, the lower number of oscillations (approximately 0.5-0.6) is attributed to the amorphous structure and the presence of hydrogen, which may act as a scattering centre and contribute to ionized impurity scattering. 
26 find that the relaxation time increases with increasing B concentration.Although this trend is not understood, it suggests that B atoms introduced by doping do not act as strong scattering centres.It has been reported that a direct effect of plasmon scattering is a reduction in mobility 27 (in the doping range 5  10 17 to 5  10 19 cm À3 , which corresponds to the typical B concentration measured in our solar cells).The lower plasmon width in the B-doped layer or in a-Si:H compared to a-Si:C:H may therefore be related to different mobilities. CONCLUSIONS We have demonstrated that core-loss EELS can be used to measure B concentrations in p-doped layers in a-Si:H and a-Si:C:H n-i-p solar cells.A fitting procedure is used to separate the B K edge contribution from the Si fine structure.B concentrations as low as 1 at.% are measured using EELS and compared with SIMS measurements made on the same samples.Low-loss EELS is used to measure changes in volume plasmon peak energy and peak width with B concentration.The plasmon energy is observed to increase by 0.1 eV with increasing B concentration, while the peak width is observed to decrease by $0.2 eV.Possible explanations for these changes are discussed. FIG. 1. Schematic diagram of the PECVD roll-toroll deposition chambers used to grow thin film Si solar cells on metallic foil.Each chamber is dedicated to the deposition of a particular Si layer.From left to right, P-doped, undoped, and B-and C-codoped Si layers. FIG. 2 .FIG. 3 . FIG. 2. (a) Secondary electron SEM image of a FIB-prepared cross-sectional TEM sample of a solar cell.From the bottom to the top: Ag back contact reflector deposited on steel substrate (not shown), ZnO, a-Si:H, ITO, e-beam, and i-beam deposited Pt.(b) Bright-field TEM image of the B-doped a-Si:C:H (p)/ITO interface in the solar cell sample shown in (a).(c) Bright-field TEM image of a-Si:H test sample 1. FIG. 5 . FIG. 5. Black lines: Background-subtracted EEL spectra acquired near the B K edge for three different B concentrations labelled i, p, and p þ in the (a) a-Si:C:H and (b) a-Si:H test samples.Coloured dotted lines: Log-normal fits of the EXELFS peak centred at $225 eV and of the B K edge.Different colours correspond to different B concentrations.Black lines with crosses: calculated B K edge for two different B concentrations.The corresponding B concentrations measured using SIMS are given in Table I.The spectra have been offset from each other vertically for clarity. FIG. 8 . FIG. 9. (a,b) Plasmon peak energy and (c,d) peak width measured across the i, p and p þ layers of the (a,c) a-Si:C:H and (b,d) a-Si:H test samples.The transitions between the differently-doped layers are shown using vertical dashed lines.The black lines show the B doping profiles measured using SIMS. TABLE I . B and C atomic concentrations measured using core-low EELS from the a-Si:H and a-Si:C:H test samples.The hydrogen concentration is not taken into account for the calculation of the atomic concentrations.The concentrations measured using SIMS assume an atomic density of 5  10 22 cm À3 .Crosssections are calculated using the Hartree-Slater model (Ref.15).Calculation errors in cross-sections are estimated to be $20% (Ref.2). þ in the (a) a-Si:C:H and (b) a-Si:H test samples.The spectra have been offset from each other vertically for clarity.The corresponding B concentrations measured using SIMS are given in TableI.
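As a concrete, if simplified, illustration of the plasmon-peak fitting used in the volume-plasmon characterization above, the sketch below fits a Drude-type loss function plus a background to a synthetic spectrum. The paper's fits were performed in Gnuplot on measured spectra; the linear background and every numerical value below are assumptions made only for this example.

```python
# Sketch of plasmon-peak fitting: a Drude-type loss function (cf. Eq. (2)) scaled by
# an amplitude A, plus an assumed linear background, is fitted to a spectrum to
# extract the plasmon energy E_p and width dE_p.  Synthetic data stand in for the
# measured low-loss spectra; none of the numbers are taken from the experiment.
import numpy as np
from scipy.optimize import curve_fit

def drude_plus_background(E, A, E_p, dE_p, b0, b1):
    """Scaled Drude loss function plus a linear background b0 + b1*E."""
    core = E * dE_p * E_p**2 / ((E**2 - E_p**2)**2 + (E * dE_p)**2)
    return A * core + b0 + b1 * E

rng = np.random.default_rng(0)
E = np.arange(5.0, 35.0, 0.02)                                  # energy-loss axis (eV)
true = dict(A=200.0, E_p=17.2, dE_p=5.0, b0=2.0, b1=-0.03)      # synthetic "truth"
y = drude_plus_background(E, **true) + rng.normal(0.0, 0.2, E.size)

p0 = [100.0, 16.0, 4.0, 0.0, 0.0]                               # rough starting guesses
popt, pcov = curve_fit(drude_plus_background, E, y, p0=p0)
perr = np.sqrt(np.diag(pcov))
print(f"E_p  = {popt[1]:.3f} +/- {perr[1]:.3f} eV")
print(f"dE_p = {popt[2]:.3f} +/- {perr[2]:.3f} eV")
```

The fitted E_p and ΔE_p, together with their covariance-based uncertainties, play the same role as the values extracted from each measured spectrum in the analysis above.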
Study of the Electrical Characteristics , ShockWave Pressure Characteristics , and Attenuation Law Based on Pulse Discharge in Water Strong shock waves can be generated by pulse discharge in water. Study of the pressure characteristics and attenuation law of these waves is highly significant to industrial production and national defense construction. In this research, the shock-wave pressures at several sites were measured by experiment under different conditions of hydrostatic pressure, discharge energy, and propagation distance. Moreover, the shock-wave pressure characteristics were analyzed by combining themwith the discharge characteristics in water. An attenuation equation for a shock wave as a function of discharge energy, hydrostatic pressure, and propagation distance was fitted. The experimental results indicated that (1) an increase in hydrostatic pressure had an inhibiting effect on discharge breakdown; (2) the shock-wave peak pressure increased with increasing discharge voltage at 0.5m from the electrode; it increased rapidly at first and then decreased slowly with increasing hydrostatic pressure; and (3) shock-wave attenuation slowed down with increasing breakdown energy and hydrostatic pressure during shock-wave transfer.These experimental results were discussed based on the mechanism described. Introduction Controllable high energy can be released during an instantaneous high-voltage pulse discharge in water [1].The phenomenon is similar to an explosion, has huge engineering potential, and has been widely used in many fields such as resource prospecting, oil extraction, gas processing, and national defense construction [2,3].The breakdown channel of a high-voltage pulse discharge in water generates plasma and rapidly releases a huge amount of energy to form the shock wave [4].The shock-wave pressure and attenuation law during propagation is an important topic of study for understanding high-voltage pulse discharges in water with different discharge energies and hydrostatic pressures.Moreover, this phenomenon is also widely used in engineering.Shock waves in water have been investigated before [5][6][7].However, the pressure load characteristics and attenuation law of shock waves propagating in water with different hydrostatic pressures have not been systematically studied.Therefore, the simultaneous influence of discharge energy and hydrostatic pressure on shock waves needs to be more deeply investigated. Experimental 2.1.Fundamental Physical Process.During the breakdown of a water dielectric by a high-voltage pulse discharge, the liquid phase-gas phase-plasma transformation is finished within a very short time, forming a high-temperature, high-pressure plasma channel.The huge pressure gradient in the channel due to the high pressure inside and the temperature gradient at the plasma boundary leads to rapid outward expansion of the plasma channel, which achieves a high-speed transformation from electrical to mechanical energy [8].Because the compressibility of water around the plasma channel is weak, its mechanical energy is mainly released outward in the form of a wave.The generated high-energy wave is called a shock wave. 
The shock-wave peak pressure depends on various factors including discharge energy, distance to the electrode, and hydrostatic pressure.The shock wave is the carrier that transfers the discharge energy in water, and hence the shockwave peak pressure is directly affected by the discharge energy.Meanwhile, the measured peak pressure is inevitably different at different distances from the wave source due to energy attenuation of the shock wave during its propagation in water.In addition, the damping effect of water is influenced by hydrostatic pressure, which has a direct influence on the formation of the initial shock wave.Moreover, the extent of shock-wave attenuation during propagation varies with the hydrostatic pressure of the water dielectric [9]. The pressure characteristics and attenuation law of shock waves in water are affected by three main factors: discharge energy, hydrostatic pressure, and propagation distance.This relation can be expressed in function form as = ( , , ), where is the shock-wave peak pressure, is the discharge breakdown energy of the water dielectric, is the hydrostatic pressure, and is the distance from the shock wave to the discharge electrode.In this research, the corresponding parameters related to hydrostatic pressure were further introduced into the function expression above based on previous studies of shock waves in water.Shockwave attenuation during propagation was analyzed systematically under different hydrostatic pressure conditions. Experimental Equipment. The experimental equipment was composed of the experimental device and the system for measuring the pulse discharges in water.The experimental device for measuring pulse discharges in water is shown schematically in Figure 1.The self-triggering discharge switch, shown in Figure 2, can effectively reduce errors caused by artificial triggering.The pulse power supply provided DC at high voltage in the 6 KV-15 KV range.The rated capacitance was 60 F, and the energy storage limit was 7000 J.The electrodes were made of coaxial steel tube and copper bar.A schematic diagram and photograph of the electrode structure The appearance of discharge switch The internal structure of discharge switch a conductivity of about 1.3 S/m.The pre-set hydrostatic pressure was adjusted by an external hydraulic pump with a maximum capacity of 12 MPa. The measuring system in this study was composed of seven parts: a Rogowski coil, a P6015A high-voltage probe, a DSO6014A oscilloscope, piezoelectric pressure sensors, a charge amplifier, a signal recorder, and a computer.The sensitivity of the Rogowski coil was 38 KA/V, and it was suitable for measuring pulse current due to its high measurement precision [10].The P6015A high-voltage probe could meet the acquisition accuracy for transient voltage [11]: the maximum input voltage was 20 kV, the bandwidth was 75 MHz, the rise time was 4.0 ns, and the compensation range was 7-49 pF.The sampling rate of the DSO6014A oscilloscope was 4 GSa/a.The piezoelectric pressure sensor was a CY-214 model with a range of 200 MPa.The charge amplifier was a YE5853 model with a bandwidth of 2 Hz-100 KHz.The signal recorder was a YE6231 model with a sampling frequency of 96 KHz. Experimental Scheme. 
The capacitance of the high-voltage pulse power supply was constant at 60 μF. The discharge voltage and hydrostatic pressure were variables in this study. The shock-wave pressures at the six sensor connectors were measured. The effects of discharge energy, hydrostatic pressure, and distance on the pressure characteristics and attenuation of the shock wave were investigated. The whole tube was completely filled with water. The external hydrostatic pressure was classified into seven levels: 0, 1, 2, 3, 4, 6, and 8 MPa. The charging voltage was classified into six levels: 8, 9, 10, 11, 13, and 15 KV. A total of 42 groups of independent experiments were conducted under different hydrostatic pressure and voltage conditions (seven levels for hydrostatic pressure and six levels for voltage). Five discharge tests were performed for each group of experiments. The measurement of discharge energy and shock-wave peak pressures at each site was an important constituent of this study. If the breakdown energy of the water dielectric by high-voltage discharge was directly calculated using the formula E = (1/2)CU² (where E is the discharge energy, C is the capacitance, and U is the charging voltage), the results would be larger. Due to the influence of water conductivity, residual capacitance energy, and circuit energy loss, the actual breakdown energy of the water dielectric was determined by the specific breakdown voltage [12,13]. Therefore, a high-precision current test coil and a high-voltage probe were used in this study to measure the transient current and voltage during the whole breakdown process of the water dielectric. The actual discharge energy involved in breaking down the water dielectric was thus obtained. The shock-wave peak pressure at each site was collected by the sensor. To reduce the interference of electromagnetic waves and stray currents on the sensor during the high-voltage discharge, the sensor base was made of nylon materials to insulate it effectively from the influence of stray currents at the tube wall. A 1 mm thick lead protective cover was used to prevent the electromagnetic waves from interfering with the sensor and the acquisition equipment. The shock-wave pressure data were transmitted to the signal recorder through the charge amplifier and displayed synchronously on the computer. The whole acquisition system ran on DC power to avoid the conduction coupling interference of each part formed by the power supply during the high-voltage discharge [14]. Hydrostatic pressure in the tube was provided by the external hydraulic pump. The precision of this equipment was high, with good pressure stability. The bearing pressure of the whole experimental apparatus could be as high as 12 MPa. Pulse Discharge Characteristics in Water During breakdown of the water dielectric by electrode discharge, a large amount of energy was released within a very short time. In this process, the measurement of transient current and transient voltage could accurately reflect the high-voltage pulse discharge characteristics. The transient current and the transient voltage were measured and collected using an oscilloscope. The charging voltage was 11 KV, and the hydrostatic pressure was 1 MPa. The yellow curve in Figure 4 represents the current waveform, and the green curve in Figure 5 represents the voltage waveform. In addition, the maximum charging voltage, the breakdown voltage, and the breakdown time are also marked.
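The distinction drawn above between the energy stored in the capacitor and the energy actually available at breakdown can be sketched numerically. The 60 μF capacitance and the 8-15 KV charging range are taken from the text; the breakdown voltages below are made-up placeholders, since in the experiment they are read from the measured voltage waveform and depend on hydrostatic pressure.

```python
# Sketch of stored energy E_store = (1/2)*C*U0^2 versus the energy available at
# breakdown, E_b = (1/2)*C*Ub^2, evaluated from the residual (breakdown) voltage Ub.
# C = 60 uF and the charging voltages come from the text; the Ub values are
# illustrative placeholders only.
C = 60e-6  # capacitance of the pulse power supply, farads

def capacitor_energy(u_volts):
    """Energy (J) stored in the capacitor bank at voltage u_volts."""
    return 0.5 * C * u_volts**2

for u0_kv, ub_kv in [(8, 7.0), (11, 9.5), (15, 13.5)]:   # (charging, assumed breakdown) in kV
    e_store = capacitor_energy(u0_kv * 1e3)
    e_break = capacitor_energy(ub_kv * 1e3)
    print(f"U0 = {u0_kv:2d} kV: stored {e_store:6.0f} J, "
          f"available at breakdown {e_break:6.0f} J ({e_break / e_store:.0%})")
```

At 15 KV the stored energy is about 6750 J, consistent with the 7000 J storage limit quoted earlier, and the breakdown energy is always smaller because part of the voltage has already decayed through the conducting water before the arc forms.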
The discharge process can be described as follows: the discharge starts when the capacitor is charged to the rated voltage.The maximum charging voltage ( ) decreases slowly before breakdown of the water dielectric due to the influence of conductivity.The voltage decreases greatly at the moment of breakdown of the water dielectric.Then the voltage decreases as the current increases, and electrical energy is rapidly released into the breakdown channel.The breakdown voltage ( ) is the remanent voltage in capacitor at the moment of breakdown of the water dielectric by arc discharge.This time is referred to as the breakdown time ( ) (the moment of discharge was time 0).Among the high-voltage pulse discharge characteristics, the breakdown voltage ( ) is the major factor determining the breakdown energy ( ) of the water dielectric, and it directly influences shock-wave formation.Hydrostatic pressure in this study was classified into seven levels from 0 MPa to 8 MPa.Different values were obtained by adjusting .The measurement results are shown in Figure 6. The breakdown voltage ( ) was the result measured by experiments.It can be also deduced and verified by electrical formulas.Figure 7 presents the current waveform and breakdown time ( ) under different hydrostatic pressure conditions for a charging voltage of 8 KV. Figure 7 shows that the high-voltage pulse discharge is determined by the differential equation of the RLC circuit [15]: where is the lead inductance, which is a small constant; is the rated capacitance; and = 60 F.The electrode spacing was 5 mm, and the conductivity of tap water was = 1.3 S/m.The equivalent resistance () is preliminarily determined by initial voltage and liquid conductivity and can be expressed as = + , where is the pure resistance of the circuit and is very small, and is the equivalent resistance between the electrodes, which can be expressed by a time-varying second-order homogeneous differential equation.An accurate numerical solution could be obtained, and the calculation results were all within 0.2 Ω [16].The relationship between breakdown voltage and breakdown time could be expressed as [17] Equation ( 2) states that can be obtained from the charging voltage ( ), the breakdown time ( ), and the resistance ().The breakdown voltage ( ) actually measured was compared to the calculated value .It was found that the measured was very close to , with deviations of less than 10%.The experimental results indicated that the breakdown voltage ( ) increased with increasing charging voltage ( ). 
and showed an approximately linear relationship.Moreover, the higher the charging voltage, the smaller the attenuation of the breakdown voltage, as shown in Figure 6.The reason can be explained as follows: with an increase in charging voltage, both breakdown time and breakdown channel resistance decreased, whereas the peak current of the circuit increased.For the experimental electrodes, the relationship between discharge end field strength and electrode end voltage can be expressed as [17] where is the end curvature radius of the electrode; is the end voltage of the electrode, called the charging voltage ( ); is the gap distance; and is the end electric field strength.The end field strength of the electrode ( ) increases with increasing charging voltage ( ).As for the breakdown time of the water dielectric, the qualitative theory and empirical equation [18] (Martin equation) for breakdown field strength in a water dielectric can be referenced: where is the breakdown field strength; is the effective area of the electrode; is the breakdown time; , , and are constants related to the discharge process, and , , and > 0. When the voltage meets the breakdown requirements of a water dielectric, that is, = , the following equation can be obtained by combining (3) and ( 4): It is apparent that the breakdown time () definitely decreases with increasing voltage () when other constants are fixed. The increase in charging voltage ( ) enhances the field strength difference between the two ends of the water gap and accelerates the ionization and gasification velocity of the water dielectric.As a result, the rapid formation of avalanche ionization is expedited, leading to a decrease in breakdown time ( ). Equation (2) shows that the breakdown voltage ( ) increases when the breakdown time ( ) is reduced.Moreover, the higher the charging voltage ( ), the shorter the breakdown time and the less the attenuation from to .These experimental results are consistent with the inference when the hydrostatic pressure was fixed. When the charging voltage ( ) was fixed, the breakdown voltage ( ) showed an obvious decreasing trend with increasing hydrostatic pressure.Meanwhile, consuming electrical energy was increased before discharge breakdown of the water dielectric; therefore the breakdown energy ( ) in the whole breakdown process decreased significantly.The increase in hydrostatic pressure resulted in increases in breakdown channel resistance and breakdown time ( ).The experimental results are shown in Figure 7.The breakdown time increased with increasing hydrostatic pressure.Therefore, the hydrostatic pressure exhibited an inhibiting effect on the discharge breakdown process [19]. 
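The series RLC description of the discharge circuit invoked above can be integrated numerically to visualise the current after breakdown. In the sketch below, only the 60 μF capacitance, an 8 KV charging voltage, and the ~0.2 Ω scale of the equivalent resistance are taken from the text; the lead inductance is a representative placeholder, so the waveform is qualitative rather than a reproduction of the measured one.

```python
# Numerical sketch of the series RLC discharge, L*d2q/dt2 + R*dq/dt + q/C = 0,
# with q(0) = C*U0 and zero initial current.  C, U0 and the ~0.2 ohm resistance
# scale follow the text; the inductance is an assumed order-of-magnitude value.
import numpy as np
from scipy.integrate import solve_ivp

C = 60e-6      # capacitance (F)
U0 = 8e3       # charging voltage (V)
R = 0.2        # total equivalent resistance (ohm), order of magnitude from the text
L = 2e-6       # lead inductance (H), assumed

def rlc(t, y):
    q, i = y                                  # capacitor charge and loop current
    return [i, -(R * i + q / C) / L]

sol = solve_ivp(rlc, (0.0, 200e-6), [C * U0, 0.0], max_step=1e-7)
current = sol.y[1]
k_peak = int(np.argmax(np.abs(current)))
print(f"first current peak ~ {abs(current[k_peak]) / 1e3:.1f} kA "
      f"at t ~ {sol.t[k_peak] * 1e6:.1f} us after breakdown")
```

With these placeholder values the circuit is underdamped, giving a damped oscillatory current with a first peak of a few tens of kiloamperes, which is the kind of waveform the Rogowski coil measurements are meant to capture.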
The attenuation of the breakdown voltage was slow when the hydrostatic pressure was in the range of 0-3 MPa; in this range the influence of hydrostatic pressure on the breakdown process was small. The attenuation increased suddenly when the hydrostatic pressure reached 4 MPa and thereafter tended towards stability. In this study, the experimental equipment was filled with tap water, and there were many small bubbles on the electrode surface and in the water [20]. Because of these bubbles, the field strength required for breakdown at the electrode was not high, and the energy loss before breakdown was therefore small. When the hydrostatic pressure was in the range of 0-3 MPa, the small bubbles in the water were only gradually removed by the hydrostatic pressure; as long as they were present, they counteracted the hindering effect of the hydrostatic pressure on discharge breakdown. As a result, the breakdown voltage decreased only slowly with increasing hydrostatic pressure. When the hydrostatic pressure exceeded a certain value (4 MPa in this study), the original small bubbles in the water had disappeared; the mitigation they had provided against the hindering of breakdown-channel formation was lost, and hence the breakdown voltage decreased greatly. The breakdown energy was the key factor in shock-wave generation and an indispensable parameter in the study of shock-wave features. The breakdown energy input into the discharge gap at the moment of breakdown equals one half of the capacitance multiplied by the square of the breakdown voltage. The results are shown in Figure 8.

Pressure Characteristics and Attenuation Law of Shock Waves. 4.1. Pressure Characteristics of Shock Waves. When a shock wave is generated, energy is instantly injected into the plasma channel at the moment of the high-voltage pulse discharge and then rapidly diffuses in all directions. The surrounding water dielectric is strongly compressed, resulting in rapid increases in pressure, density, and temperature; as a result, the initial shock wave is generated. This study focuses on the relationship between breakdown energy, hydrostatic pressure, and shock-wave peak pressure. The position of sensor number 1, which was nearest to the electrode, was selected as the collection point for the shock-wave pressure characteristics; the distance between sensor number 1 and the electrode was 0.5 m. The shock-wave peak pressure at 0.5 m was measured under the pre-set conditions of charging voltage and hydrostatic pressure. The results are shown in Figure 9, which plots the shock-wave peak pressure against the hydrostatic pressure. Influence of Breakdown Energy. Following the empirical relation proposed by Touya et al., the shock-wave peak pressure can be expressed by equation (6) [21], in terms of the shock-wave peak pressure, the distance from the electrode to the sensor (in mm), the breakdown voltage during discharge, and a shock-wave transfer constant. Equation (6) states that a higher energy leads to a higher shock-wave peak pressure when the distance is held constant. For example, when the hydrostatic pressure was 0 MPa, the corresponding breakdown energy values started from 1696.9 J. When the charging voltage was fixed, the shock-wave peak pressure first increased and then decreased as the hydrostatic pressure increased from 0 MPa to 8 MPa. Within this range, the shock-wave peak pressure increased rapidly as the hydrostatic pressure increased from 0 MPa to 1 MPa and then increased gently from 1 to 3 MPa. The shock-wave peak pressure reached its maximum when the hydrostatic pressure was 3 MPa; after this, it decreased gradually as the hydrostatic pressure rose from 3 to 8 MPa.
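The energy bookkeeping just described can be made concrete with a short calculation: the energy remaining in the capacitor at breakdown, one half of the capacitance times the breakdown voltage squared, followed by an assumed Touya-type power law for the peak pressure. Only the 60 μF capacitance is taken from the paper; the breakdown voltage, prefactor, and exponent below are placeholders chosen for illustration, since the constants of equation (6) are not given in the extracted text.

```python
C = 60e-6       # rated capacitance, F (from the text)
U_b = 7.52e3    # illustrative breakdown voltage, V (assumed)
d = 0.5         # sensor distance from the electrode, m

E_b = 0.5 * C * U_b**2                         # breakdown energy, J
print(f"breakdown energy E_b = {E_b:.1f} J")   # ~1.7 kJ, same order as the 1696.9 J quoted above

# Assumed Touya-type form P = k * E_b**alpha / d; k and alpha are placeholders,
# not the constants of equation (6).
k, alpha = 0.9, 0.5
P = k * E_b**alpha / d
print(f"illustrative peak pressure at {d} m: {P:.1f} (arbitrary units)")
```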
The major reason for this rise-and-fall of the peak pressure with hydrostatic pressure was the hydrostatic pressure itself. The increase in hydrostatic pressure influences the shock wave in two ways. (1) Influence on the discharge breakdown: as previously mentioned, the increase in hydrostatic pressure resulted in an inhibiting effect of the water dielectric on plasma channel formation; consequently, the breakdown time increased and the breakdown energy decreased, which suppressed the shock-wave peak pressure. (2) Influence on transfer: because the distance between sensor number 1 and the electrode was 0.5 m, the shock-wave transfer itself was affected by hydrostatic pressure. For weak shock waves (peak pressure less than 100 MPa), the transfer process and characteristics are similar to those of sound waves in water. At room temperature, the velocity and peak pressure of shock waves at the same distance from the source also increase with increasing water pressure, and shock-wave attenuation is weakened by the increase in water density. As a result, the increase in hydrostatic pressure had a promoting effect on shock-wave transfer. The shock-wave peak pressure reached its maximum at a hydrostatic pressure of 3 MPa in this study. The shock-wave peak pressure showed an increasing trend when the hydrostatic pressure was in the range of 0-3 MPa; this range was defined as the increasing range. As the hydrostatic pressure increased from 4 MPa to 8 MPa, the shock-wave peak pressure showed a decreasing trend, and this range was therefore called the decreasing range. In the increasing range, the breakdown effect was not greatly affected by hydrostatic pressure, owing to the low water pressure and the presence of small bubbles, and the breakdown energy decreased only slightly. The increase in hydrostatic pressure enhanced energy transfer, and the inhibiting effect of hydrostatic pressure was smaller than the transfer-promoting effect; as a result, the shock-wave peak pressure increased with increasing hydrostatic pressure. When the hydrostatic pressure increased from 0 to 1 MPa (i.e., from no water pressure to some water pressure), the shock-wave peak pressure increased greatly. It was obvious that the transfer-promoting effect played a dominant role, whereas the inhibiting effect of hydrostatic pressure could be ignored in such low-pressure environments. When the hydrostatic pressure exceeded a critical value (about 3-4 MPa in this study), the original small bubbles in the water disappeared. Breaking down the water dielectric was no longer easy, and this range was therefore called the decreasing range. The breakdown time and the energy leakage increased, inhibiting breakdown, and as a result the breakdown energy decreased greatly. Moreover, the higher the hydrostatic pressure, the more obvious the inhibiting effect became. In this process, although the increase in hydrostatic pressure could still enhance shock-wave transfer, the promoting effect was limited; the inhibiting effect of hydrostatic pressure was clearly greater than the transfer-promoting effect. Therefore, the shock-wave peak pressure decreased gradually with increasing hydrostatic pressure. Attenuation Law of Shock Waves during Propagation.
Shock-wave attenuation in water is caused by two main factors: (1) the shock heating and damping effect of the shock-wave energy on the water, which makes the shock-wave strength decay exponentially during transfer; and (2) the unloading effect of the rarefaction wave emitted from the bubble surface during the expansion and contraction of the pulsating bubble, which shapes the shock-wave tail. Because the expansion velocity of the bubble pulsation in the high-voltage discharge was very slow, it had little influence on shock-wave attenuation, and bubble action was therefore not considered here. In this study, shock-wave attenuation during propagation was investigated under different discharge energy and hydrostatic pressure conditions. The empirical equation (6) basically reflects the propagation law of shock waves in a single hydrostatic-pressure environment. However, it has certain limitations because the influence of hydrostatic pressure on shock-wave transfer is not included. According to the shock-wave theory proposed by Chapman [22], an approximate formula (7) for the propagation of shock waves in water can be written in terms of the shock-wave peak pressure, the distance from the electrode to the sensor (in mm), and the breakdown voltage during discharge. The leading coefficient in (7) depends on the hydrostatic pressure, and the two remaining shock-wave transfer coefficients are also affected by the hydrostatic pressure; of these, one is mainly associated with the breakdown energy, whereas the other is mainly associated with the propagation distance. Fitted results for hydrostatic pressures of 0, 1, 2, 3, 4, 6, and 8 MPa are shown in Figure 10. The figure could not be read clearly after error bars were added, because of severe overlapping; moreover, the error bars had little influence on the analysis in this study, so they were removed from Figure 10. The values of the three coefficients in (7) were fitted using the experimental data, and an obvious change law with increasing hydrostatic pressure could be found; their values under different hydrostatic pressure conditions are shown in Figure 11, and their fitted dependence on the hydrostatic pressure is summarized in (8). Substituting these function expressions into (7) yields a function (9) relating the shock-wave peak pressure to the breakdown energy, the hydrostatic pressure, and the propagation distance. The approximate formula (9) is an approximate fitting equation for the propagation of shock waves in water. In this study, two groups of experimental parameter values were selected to verify (9): hydrostatic pressure 5 MPa with charging voltage 12 kV, and hydrostatic pressure 9 MPa with charging voltage 7 kV. The data for the first test (5 MPa, 12 kV, breakdown energy 3020 J) lay within the original scope of the experimental parameters, whereas the data for the second test (9 MPa, 7 kV, 970 J) lay outside that scope. The experimental values and the values calculated according to (9) were compared, with the results shown in Figure 12. The experimental and calculated values in Figure 12 were close to each other. Moreover, the attenuation trend of the shock-wave peak pressure with increasing propagation distance was similar to results reported earlier. Compared with the empirical equation (6) for shock-wave peak pressure used in previous studies, the influence of hydrostatic pressure on shock-wave attenuation during propagation is fully considered in (9), so the fitted results were closer to actual conditions. The fitting range was also enlarged from the original low hydrostatic pressure (within 0.1 MPa) to high hydrostatic pressure (within 10 MPa).
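Because the exact functional form of the Chapman-type formula (7) is not preserved in the extracted text, the sketch below assumes a generic power law, P = k·E^a / d^b, and shows how the coefficients k, a, b could be fitted to peak-pressure measurements with scipy. The data arrays are invented placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed power-law form for the peak pressure versus breakdown energy E (J)
# and propagation distance d (m); k, a, b stand in for the coefficients of (7).
def peak_pressure(X, k, a, b):
    E, d = X
    return k * E**a / d**b

# Placeholder "measurements" (not the paper's data): two energies, three distances.
rng = np.random.default_rng(0)
E = np.array([1700.0, 1700.0, 1700.0, 3000.0, 3000.0, 3000.0])
d = np.array([0.5, 1.0, 2.0, 0.5, 1.0, 2.0])
P_obs = peak_pressure((E, d), 0.8, 0.55, 1.1) * (1 + 0.03 * rng.standard_normal(6))

(k_fit, a_fit, b_fit), _ = curve_fit(peak_pressure, (E, d), P_obs, p0=(1.0, 0.5, 1.0))
print(f"fitted k = {k_fit:.3f}, a = {a_fit:.3f}, b = {b_fit:.3f}")
```

Repeating such a fit at each hydrostatic pressure level, and then expressing the resulting coefficients as functions of the pressure, mirrors the two-stage procedure that leads from (7) to (8) and (9) in the text.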
According to the experimental results and the approximate propagation formula for shock waves in water, the attenuation of shock waves formed by high-voltage pulse discharge in water was found to conform to the following laws. The higher the breakdown energy of the shock wave formed, the lower the shock-wave attenuation during propagation. This clearly demonstrates the influence of shock-wave energy on shock-wave pressure during propagation: the shock-wave energy can make up for the loss of wave-head energy, and higher shock-wave energy is therefore advantageous for maintaining high shock-wave pressures during propagation. This behaviour is consistent with the underwater explosion theory proposed by Cole [23]. The attenuation of the shock wave slowed down gradually with increasing hydrostatic pressure. As the hydrostatic pressure increased to a certain value, the attenuation trend of the shock wave gradually leveled off, and the attenuation rate was further reduced once the distance between the shock wave and the electrode exceeded a certain value. High hydrostatic pressure was therefore advantageous to shock-wave propagation and effectively enlarged the scope of influence of the shock wave. This phenomenon is approximated by the underwater explosion shock-wave theory proposed by Kochetkov and Pinaev [24]. The results were consistent with the theoretical analysis of the influence of hydrostatic pressure on the shock-wave pressure characteristics at 0.5 m, as described earlier.

Conclusions. (1) The breakdown of a water dielectric by a high-voltage pulse discharge was found to be similar to an explosion process and is affected by the voltage and the hydrostatic pressure. If the discharge voltage was increased, the breakdown delay in the discharge process decreased, the resistance of the plasma channel decreased, the energy input into the channel increased, and the peak pressure of the formed shock wave increased. If the hydrostatic pressure was increased, formation of the plasma channel was restricted by the external water pressure; in addition, the breakdown delay and the channel resistance increased, hindering the discharge breakdown process. (2) The shock-wave peak pressure at 0.5 m was directly affected by the discharge breakdown energy of the water dielectric: the higher the energy, the higher the shock-wave peak pressure. When the hydrostatic pressure was increased from 0 to 8 MPa, the shock-wave peak pressure first increased and then decreased; it increased when the hydrostatic pressure was in the range of 0-3 MPa, reached its maximum value at 3 MPa, and then decreased gradually in the range of 3-8 MPa. (3) During the shock-wave transfer process, the higher the breakdown energy, the lower the shock-wave attenuation. The shock-wave attenuation slowed down gradually with increasing hydrostatic pressure, which was advantageous to stable shock-wave transfer, and with increasing propagation distance the stabilizing effect of hydrostatic pressure on the shock wave became more and more obvious. An approximate formula (equation (9)) for the propagation of shock waves in water under different hydrostatic pressures was fitted based on a large body of data, quantifying the relationship between shock-wave attenuation in water and the energy, the hydrostatic pressure, and the propagation distance.

Figure 7: Plot of the current waveform and breakdown time with a charging voltage of 8 kV and different hydrostatic pressures. Figure 9: Plot of the shock-wave peak pressure at 0.5 m.
Figure 12: Comparison between new experimental values and calculated values (shock-wave peak pressure versus distance d (m), for P w = 5 MPa with U m = 12 kV and P w = 9 MPa with U m = 7 kV).
2019-04-21T13:12:30.499Z
2016-06-07T00:00:00.000
{ "year": 2016, "sha1": "09d7eca0db373a3d1d6c3b78499f3aac611586ca", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/sv/2016/6412309.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "09d7eca0db373a3d1d6c3b78499f3aac611586ca", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
119320508
pes2o/s2orc
v3-fos-license
CLT for Lipschitz-Killing curvatures of excursion sets of Gaussian fields Our interest in this paper is to explore limit theorems for various geometric functionals of excursion sets of isotropic Gaussian random fields. In the past, limit theorems have been proven for various geometric functionals of excursion sets/sojourn times. Most recently a CLT for Euler-Poincar\'e characteristic of the excursions set of a Gaussian random field has been proven under appropriate conditions. In this paper, we shall obtain a central limit theorem for some global geometric functionals, called the Lipschitz-Killing curvatures of excursion sets of Gaussian random fields in an appropriate setting. Introduction and main result There has been a recent surge in interest in understanding the geometry of random sets. In particular, there have been many works on limit theorems of geometric functionals of random sets coming from discrete type models arising from various point processes (see [6], and references therein), or from models of smooth random fields [7], [8], [13], [16], [18], [24], [28]. The object of this paper is to go further, and provide asymptotic distributions for some global geometric characteristics of the excursion sets of random fields as the parameter space is allowed to grow to infinity. More precisely, let f be a random field defined on R d , and let T be a d-dimensional box [−T, T ] d We shall be considering the restriction of f to the subset T , and accordingly define for k = 0, . . . , d, where |T | denotes the d-dimensional volume of T and, by T → R d , we mean T → ∞. Remark 1. 1 We note here that the specific case corresponding to k = d has already been studied in [24], which is a generalization to higher dimensional setting of known results about limit theorems for sojourn times of Gaussian processes. Another interesting case, namely the Euler-Poincaré characteristic (case k = 0), has been studied in [8], whereas a more general result, a CLT for the Euler integral, was obtained in [1]. We shall adopt the now standard approach of projecting Gaussian functionals of interest onto the Itô-Wiener chaos, then use the Breuer-Major type of theorem to conclude our main result. This approach has been developed in [16] to obtain CLT for general level functionals of f, ∂f, ∂ 2 f in dimension 1, then extended to dimension 2 (see [13] for a general review on the topic). As applications, they got back CLTs for the number of crossings of f , a result first obtained by Slud with an alternative method (see [26], [27]), for the number of local maxima ( [15]), for the sojourn time of f in some interval ( [13]), and also for the length of a level curve of a 2-dimensional Gaussian field ( [16]). Note that in these papers, the last step of the method was to approximate f with an m-dependent process ( [5]) in order to conclude the CLT. In fact, this step can be removed, and simplified using what is now called the Stein-Malliavin method to conclude Breuer-Major types of theorem, as documented in [21]. Based on this general approach, CLTs have been proved recently when considering a ddimensional Gaussian random field f by Pham [24] for the sojourn time of f ; by Estrade and León [8] for the Euler-Poincaré characteristic (EPC) of the excursion set of f , and by Adler and Naitzat [1] for Euler integrals over excursion sets of f . Note that EPC shares strikingly similar integral representation as the number of level crossings in terms of functional of f but for dimension d. 
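As a small numerical illustration of the chaos-projection approach described above, the sketch below computes the first few Hermite coefficients of the indicator functional 1{Z > u} of a standard Gaussian variable by quadrature and checks them against the classical closed form. It is a toy example written in Python, not a computation taken from the paper.

```python
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite polynomials
from scipy.stats import norm
from math import factorial

u = 1.0                                  # threshold of the functional 1{Z > u}
z = np.linspace(-8.0, 8.0, 20001)
weight = norm.pdf(z)
indicator = (z > u).astype(float)

# q-th Hermite coefficient: c_q = E[1{Z > u} He_q(Z)] / q!
for q in range(5):
    basis = np.zeros(q + 1); basis[q] = 1.0
    c_q = np.trapz(indicator * He.hermeval(z, basis) * weight, z) / factorial(q)
    print(f"c_{q} = {c_q:+.6f}")

# Closed form for comparison: c_0 = 1 - Phi(u) and c_q = He_{q-1}(u) phi(u) / q!.
print("check c_1 =", norm.pdf(u), " check c_2 =", u * norm.pdf(u) / 2)
```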
Largely, the sketch of the proof for CLTs and the main technical steps remain the same as in [16] when dealing with level functionals of a d-dimensional Gaussian field f , with d > 2; the difficulty lies in finding a way to avoid explicit computations. Hence the main and crucial contribution in [8] has been to come up with a neat trick to circumvent this difficulty, proving that the order of the variance of the EPC for f restricted to any subspace of T is less than |T |, hence is negligible in the limit as T grows to R d . We are going to build on those works, in order to obtain a CLT for all LKCs of excursion sets. The difficulty here is to develop similar techniques when working on A u (f ; T ∩ V * ) for any k-dimensional affine subspace V * of R d . The structure of this paper is as follows. Throughout this paper, we work with isotropic Gaussian random fields. We begin in Section 2 with setting the notation, and the necessary background for the analysis to follow in later sections. In Section 2.1, we recall the basics of the expansions of Gaussian functionals using the multiple Wiener-Itô integrals. Next, in Section 2.2, we define the Lipschitz-Killing curvatures, and also state the Crofton formula that provides a relationship between various LKCs; it is going to be a crucial element in the proof of our main result. Section 2.3 is devoted to discuss the integral representation of Euler-Poincaré characteristic of excursion sets of any random field via the expectation metatheorem. Finally, precise setup for the problems, and the assumptions, in particular on the covariance structure of the random field f , is listed in Section 2.4. In Section 3, we develop the proof of the main result, Theorem 1.1, using the standard sketch given in three main steps. First we prove that the functional of interest is square-integrable (Section 3.1) and obtain its Hermite expansion in Section 3.2. Then we prove that the limiting variance is bounded away from zero and infinity in Section 3.3. Finally, in Section 3.4, we give an extension of Breuer-Major theorem to affine Grassmannian case to conclude the Gaussianity of the limiting distribution. Section 4 concludes with a discussion and a multivariate CLT for EPCs. Formally, let Z be a m-dimensional standard Gaussian random vector, and L 2 (Z) be the set of all real square integrable functionals of Z. In short, giving a Hermite expansion is a way to approximate elements from L 2 (Z) by a series of Hermite polynomials. More precisely, for Then For more details regarding Hermite expansions, and their applications to study functionals of Gaussian random fields, we refer the reader e.g. to [4], [5], [21]. The above can also be written in a more abstract setting of multiple Wiener integrals, for which we begin with an orthonormal system . Then (see [23]) where I q denotes multiple Wiener integral. A decomposition, similar to (3), holds true for all square integrable functionals of W , and is called the Itô-Wiener chaos. We refer the reader to [23] for complete details. Lipschitz Killing curvatures and the Crofton formula There are a number of ways to define Lipschitz-Killing curvatures, but perhaps the easiest is via the so-called Weyl's tube formula (see [11], [31] for the first hand account of this formula). In order to state the tube formula, let M be an m-dimensional manifold with positive reach (see [2]) embedded in R n which is endowed with the canonical Riemannian structure on R n . 
Then, writing · as the standard Euclidean norm on R n , the tube of radius ρ around M is defined as Then according to Weyl's tube formula (see [2]), the Lebesgue volume of so constructed tube, for small enough ρ, is given by where ω n−j is the volume of the (n − j)-dimensional unit ball in R n−j , and L j (M ) is the j-th LKC of M . Although, it may appear from the definition above that the L j depend on the embedding of M in R n , in fact, the L j (M ) are intrinsic, and so are independent of the ambient space. Apart from their appearance in the tube formula (6), there are, at least, two more ways in which to define the LKCs (see [2]). Borrowing the notations from [2], let Graff(d, k) be the affine Grassmannian of all kdimensional affine subspaces of R d , and Gr(d, k) be the set of all k-dimensional linear subspaces of R d . Let M be a compact subset of R d and V * ∈ Graff(d, k). Then writing and setting λ d k to be the appropriate, normalized measure on Graff(d, k) (cf. [2]), and also we have the Crofton formula: whenever M is tame and a Whitney stratified space (see [2]). Setting j = 0 in the above equation (7) gives back the Hadwiger formula which we shall use to generate all the LKCs given the Euler-Poincaré characteristic of all the slices M V * . Another interesting case is when we set j = k in (7); we obtain where |M V * | is the k-dimensional Hausdorff measure of the set M V * . Euler-Poincaré characteristic and other LKCs of excursion sets Let T be a compact, tame and Whitney stratified subset of R d . For any fixed V * ∈ Graff(d, k), set ∂ l T V * as the l-dimensional boundary of T V * . Assume f be a smooth Gaussian random field, then using the standard Morse theory (see [2,Chapter 9]), we can write whenever T is tame and a Whitney stratified space (see [2]), where, with ∇ J f and ∇ 2 J f representing restrictions of the usual gradient ∇f and Hessian ∇ 2 f onto J ∈ ∂ l T V * . Applying Theorem 11.2.3 of [2], the above equation can formally be rewritten as almost surely and in L 2 , where δ is the Dirac delta at 0 defined on R d , interpreted as usual by approximating δ, as ε → 0, by the Gaussian density of a d-vector with independent components mean 0 and variance ε, or by the function (2ε) −d 1I [−ε;ε] d (see e.g. [2] for a.s., and [14] or [8] for L 2 convergence). Hence the way of obtaining a Hermite expansion of L d−k will go through a limiting process. However, this process of approximation being clearly spelled out in many of previous works going as far back as [5], we shall omit this step in the rest of the paper, and skip to the limit. We shall now combine equations (8) and (10) to express all other LKCs in terms of the Euler-Poincaré characteristic of A u (f ; T V * ). Formally, Remark 2.1 -Parametrization of Graff(d, k) Note that Graff(d, k) can be parametrized as Gr(d, k) × R d−k . Furthermore, we shall identify Gr(d, k) with the set of all k × d matrices whose rows are orthonormal vectors in R d , modulo left multiplication by a k × k orthogonal matrix (see [25]). Writing V as the matrix whose rows are k-orthonormal vectors spanning the linear space obtained by the parallel translate of V * , Note: For any V * ∈ Graff(d, k), we shall denote V for the matrix whose rows are korthonormal vectors spanning the linear space obtained by the parallel translate of V * , and we shall use the same V to denote the element in Gr(d, k) that corresponds to the k dimensional linear space spanned by the rows of the matrix V . 
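A rough numerical feel for the central object above — the Euler-Poincaré characteristic of an excursion set — can be obtained on a grid. The sketch below simulates a smooth, approximately isotropic Gaussian field by smoothing white noise, thresholds it, and computes the Euler characteristic of the resulting binary image from pixel counts (a standard 4-connectivity formula). This is a toy illustration only and not part of the paper's argument.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def euler_characteristic(binary):
    """Euler number of a 2-D binary image under the 4-connectivity convention:
    #pixels - #adjacent pairs + #fully occupied 2x2 blocks."""
    b = binary.astype(int)
    pairs = (b[:, :-1] & b[:, 1:]).sum() + (b[:-1, :] & b[1:, :]).sum()
    blocks = (b[:-1, :-1] & b[:-1, 1:] & b[1:, :-1] & b[1:, 1:]).sum()
    return int(b.sum() - pairs + blocks)

# Smoothed white noise as a stand-in for a smooth isotropic Gaussian field,
# rescaled to unit variance.
rng = np.random.default_rng(0)
f = gaussian_filter(rng.standard_normal((600, 600)), sigma=6, mode="wrap")
f /= f.std()

for u in (-1.0, 0.0, 1.0, 2.0):
    print(f"u = {u:+.1f}   EPC of the excursion set ~ {euler_characteristic(f > u)}")
```

At high thresholds the excursion set consists mostly of small disjoint components, so the estimate is positive, while near u = 0 components and holes roughly balance.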
Setup for the problem and assumptions In this paper, we consider f a mean zero, isotropic, real valued Gaussian random field defined on R d with C 3 trajectories. The assumption of isotropy means that the covariance of the Gaussian random field satisfies for some function r : R + → R. Without loss of generality, we shall assume r(0) = 1. We denote the partial derivatives of order n of any function g defined on R d as We introduce the gradient ∇f (x) and Hessian ∇ 2 f (x) of f , and recall that due to isotropy ∇f (x) and (f (x), ∇ 2 f (x)) are independent for every fixed x (in fact, stationarity suffices to conclude the same). Thus, the covariance function of (∇f (x), f (x), ∇ 2 f (x)) can be expressed as a block diagonal matrix for each fixed x. We denote the covariance matrix of where Σ 1 and Σ 2 are the covariance matrices of ∇f , and (f, Simple linear algebraic considerations imply that there exists a D × D matrix Λ such that Σ = ΛΛ T . Then we define a new field Z = (Z(x), x ∈ R d ) by Let us denote its covariance function by Note here that we have implicitly used the fact that various derivatives of a stationary Gaussian random field are themselves stationary Gaussian random fields (see [2,Chapter 5]). We can also write Z as We need more assumptions on f to ensure that various LKCs of the excursion set A u (f ; T ) of f over a threshold u, are indeed square integrable, and that they satisfy a CLT as T → R d . The required assumptions are rather standard when looking for CLT of non linear functionals of stationary Gaussian random fields, such as number of crossings [13,16], curve length [16], EPC [1,8], sojourn time [24], etc. (H1) Geman type condition: We shall assume that the covariance function r ∈ C 4 (T ), and that the function (H1) is simply the higher dimensional analog of Geman's condition ( [9]), which is needed to prove that the functional of interest is in L 2 . When d = 1, it is known to be a necessary and sufficient condition to obtain the L 2 convergence of the number of crossings of any threshold (see [17]). Observe that this condition (H1) is satisfied whenever the underlying random field is 'smooth' enough, with C 3 sample paths 3 . Hence, for simplicity, we will assume now on that f is C 3 . (H2) Arcones type condition: For the covariance function γ(·) of the joint field (Z(x)) x∈R d , we assume that there exists an integrable ψ on R d satisfying This condition is crucial in ensuring the finiteness of the limiting variance of the considered functionals. As already noted in [16] and [8], it implies in particular the existence of the spectral density, and that r ∈ L q (R d ), q ≥ 1. (H3) The spectral density, denoted by h, of the covariance function corresponding to the field f satisfies h(0) > 0. This condition is needed to ensure that the asymptotic variance obtained in the CLT is non zero. Proof of Theorem 1.1 Recall that the key argument in [8] to prove the CLT for the Euler-Poincaré characteristics of the excursion set is to consider only the highest dimensional term in (10), dropping all lower dimensional terms, and proving later that the contribution from the lower dimensional terms is negligible under the volume scaling. We will also use the same argument. Let us begin with where We shall consider only L d−k,k (A u (f ; T )), and prove that, after appropriate normalization, it exhibits a central limit theorem. Thereafter, the same arguments can be pieced together to conclude that, under the same scaling, L d−k,l (A u (f ; T )) converge to 0 whenever l < k. 
As spelt out in the introduction, our proof has three major steps: 1. to show that the functional of interest is square integrable, and thereby obtain its Hermite type expansion; 2. to prove that the limiting variance is bounded away from zero and infinity; 3. to use the Stein-Malliavin method for Breuer-Major type functionals to conclude to the Gaussianity of the limiting distribution. We shall provide details of the aforementioned steps in the following subsections. Square integrability Let us recall (13), and write Applying Jensen's inequality, we have We shall now focus on obtaining an appropriate upper bound for the integrand in the above expression, which in turn shall imply the square integrability of L d−k,k (A u (f ; T )). Using standard fare, notice that φ k (∂ k T V * ) can be bounded above by the cardinality of the set {t : V ∇f = 0} that we denote by N u (T V * ). Then, as usual, we compute the second factorial moment of N u (T ) and prove that it is finite to conclude the square integrability. ). Using stationarity, we can reduce the above integral to Next, using the Cauchy-Schwarz inequality and stationarity gives Invoking similar delicate analysis as in the appendix of [8], we can conclude that there exists a constant C 1 (independent of T V * ) such that Next, notice that where C 2 is a constant independent of V . Then, combining equations (23) and (24) in (22) provides for some finite positive constant Further, the above integral can be bounded from above by replacing the domain of integration by the set ) represents a d-dimensional ball of radius r, and the constant c d can be chosen appropriately so as to encompass the set T V . Then, observe that the latter simplified integral does not depend on the choice of V . Therefore choosing V as the span of any k coordinate axes, we obtain where the equality is a result of a simple polar coordinate transformation and κ is appropriate universal constant. Therefore, for some positive constant C 4 independent of the choice of V * . Finally, using the standard Gaussian kinematic fundamental formula (see [2]) for the mean of N u (T V * ) and the above computations together with equation (21) and Crofton formula, we can conclude that for some large, but finite and positive constant C. Note that this upper bound is not optimal, but still suffices to achieve the goal of square integrability to obtain a Hermite type expansion of the functionals of interest. Using the Hermite type expansion, we shall obtain much tighter bounds later in Section 3.3. Hermite expansion For x ∈ T , recall (see Section 2.4) that we can factorize Σ as Σ = ΛΛ T , such that Λ has a block diagonal form where Λ 1 is the formal square root of Σ 1 and Λ 2 is a lower triangular matrix such that Λ 2 Λ T 2 = Σ 2 , respectively. Using equation (13) and standard methods as in [8], we now obtain a Hermite expansion for φ k (∂ k T V * ). Define Clearly, for each fixed space point x, the functions G 1 and G 2 are independent. We shall obtain Hermite expansions for these two functions separately. Formally, for n ∈ N D , D being defined in (15), set n = (n 1 , n 2 ) ∈ N d × N D−d , then the square integrability implies that we have the following where the Hermite coefficients are given by writing ϕ D for the standard normal density in D-dimensions. Remark 3.1 It follows from the discussion of [2, Section 5.7], that the distribution of V ∇f (x), V ∇ 2 f (x)V T does not depend on the space point x (due to stationarity) and on V (due to isotropy). 
Therefore, the coefficients c 1 (n 1 , V, Λ 1 ) and c 2 (n 2 , u, V, Λ 2 ) do not depend on V , which will help simplifying the proofs. Computing c 1 Observe that First, note that the integral is to be interpreted as a limit of integral of an appropriate approximation of δ. Secondly, by Remark 3.1, we can choose a V which suits our purpose. In particular, one may define and thus define c 1 (n 1 , V, Λ 1 ) as an L 2 -limit of Noticing that the variance of ∇ i f (x) does not depend on the index i due to isotropy, we conclude that . Therefore, Equivalently, where λ k/2 ε k ϕ k ( √ λεV y 1 ) converges to the desired Dirac delta. As pointed earlier, the above computation is invariant of the choice of V , so we shall choose V to be the space spanned by (e 1 , . . . , e k ) where {e i } d i=1 is the canonical basis of R d . Thereafter, taking limit as ε → 0, we obtain However, in order to obtain estimates for the limiting variance, we shall need bounds on c 1 (n 1 , V, Λ 1 ). Using the usual technique as sketched in [14], we obtain where we have used the following inequality: sup [29]). or, equivalently, introducing Z 2 as a (D − d)-dimensional standard normal variable. Next, using Cauchy-Schwarz inequality, we can conclude that Again using the invariance of c 2 (n 2 , u, V, Λ 2 ) with respect to V , we can choose V to be the line span of (e 1 , . . . , e k ), where Then, using Wick's formula, we can obtain an upper bound for On the other hand, P[f (x) > u] can be bounded above (and below) by the standard Mill's ratio, implying there exists K 2,u ∈ (0, ∞) such that c 2 (n 2 , u, V, Λ 2 ) ≤ K 2,u . (34) Remark 3.2 Now that we have seen precise expressions for the Hermite coefficients c 1 (n 1 , V, Λ 1 ) and c 2 (n 2 , u, V, Λ 2 ), and we understand that these coefficients do not depend on the choice of V , therefore, we shall replace V by its dimension k in the above notations. In particular, we shall now redefine and c(n, u, k, Λ) With these notations, and armed with the fact that E (φ k (∂ k T V * )) 2 < ∞, we can conclude that the following infinite expansion holds in L 2 where J q (φ k (∂ k T V * )) is the projection of φ k (∂ k T V * ) onto the q-th chaos. In addition, we have the following expansion for L d−k,k (A u (f ; T )). as the projection of φ k (∂ k T V * ) onto the first Q orders of the Hermite expansion given in (36). Then we can write Writing · 2 for L 2 norm, we have, Next, using computations similar to those in Section 3.1, we can conclude that there exists a finite, positive We can then conclude, via the dominated convergence theorem, that (ii) As a consequence of [30, Lemma 3.2], we note that for any H n (Z(x)) H m (Z(y)) dx dy = 0, whenever |n| = |m|, which in turn implies that the expansion in (37) is indeed orthogonal. Variance bounds Let us define the appropriately normalized quantities of interest and We want to ensure that the variance of L # d−k,k (A u (f ; T )) converges to a finite positive quantity as T → R d , and that the variance of L # d−k,k (A u (f ; T )) for each l = 0, . . . , (k − 1) can be made as small as we wish, by choosing appropriately large set T . Proposition 3.2 With the above notation, the variance of L # d−k is given by The asymptotic variance of L # d−k (T ), as T → R d , is finite, non zero, and can be expressed as Using the Hermite expansion of L d−k,k (A u (f ; T )) and the orthogonality of the chaos expansion (see Remark 3.3 (ii)), we can formally express the variance of L d−k,k (A u (f ; T )) as The sketch and main arguments (e.g. 
Arcones bound) to prove Proposition 3.2 are given in [16], with an extra step for the term o(1) which follows from [8]. The main difficulty relies then, once again, in the fact that we do not integrate simply on a d-dimensional box, but on Grassmanians, which requires tricks to circumvent the difficulty of computations. Proof of Proposition 3.2. First let us show that var L # d−k,k (T ) < ∞. We have, using (20), then (36), Notice that since Z(x) and Z(y), individually, are standard Gaussian vectors, then using Mehler's formula (or equivalently, the diagram formula), we have that for |n| = |m| = q, where the inequality is a result of the observation that for any x ∈ T , we have the inclusion (x − T V * ) ⊂ 2T W * , for some W * ∈ Graff(d, k). Before proceeding any further, we may observe the following. Lemma 3.1 Let θ be a nonnegative real valued, integrable function defined on R d , then Proof: The double integral in question can be bounded from above by first replacing the integral over ∂ k (2T V * ) by integral over V * . Then, since Graff(d, k) is isometric to Therefore the above integral can be bounded above by where σ d k is the invariant measure on the Grassmannian Gr(d, k) such that which proves the assertion of the lemma. ✷ In view of Lemma 3.1, an upper bound for A(n, m, u, k, T ) can be obtained as where in the second integral we have used the Crofton formula (9). Further, under hypothesis (H2), and for |n| = |m| = q, there exists a constant C * such that Therefore we obtain that, for |n| = |m| = q, Next, we prove, asuming Arcones condition (H2), that |T | −1 A(n, m, u, k, T ) converges as T → R d . We shall check that it is Cauchy in the parameter T , the edge length of T . Let us take boxes T 1 = [−T 1 , T 1 ] d and T 1,2 = [−(T 1 + T 2 ), T 1 + T 2 ] d , and prove that A(n, m, u, k, T 1 ) Clearly, A(n, m, u, k, T 1,2 ) |T 1,2 | − A(n, m, u, k, T 1 ) Clearly, the coefficient in II can be bounded uniformly as a result of previous computations, and the volume terms converge to zero as T 1 increases to infinity. For part I, notice that the difference |A(n, m, u, k, T 1,2 ) − A(n, m, u, k, T 1 )| can be shown to be of the same order as The coefficient of the integral above, when compared with |T | −1 1,2 , converges to one. However, since the domain of integration escapes to infinity, the integral converges to zero due to integrability of ψ q . Hence, we can conclude that the sequence |T | −1 A(n, m, u, k, T ) is Cauchy in the variable T , meaning that, for |n| = |m|, |T | −1 A(n, m, u, k, T ) → A(n, m, u, k) as T → R d (or, equivalently, as T → ∞) where the limit A(n, m, u, k), using the arguments of Lemma 3.1, can be identified as which, in turn implies that var Graff(d,k) Finiteness of the limiting variance We shall proceed as usual (see [14] or [16]). Introducing Π Q L # d,d−k (A u (f ; T )) as the projection of L # d,d−k (A u (f ; T )) onto the first Q chaos, we shall show that and conclude the finiteness of the limiting variance by a simple application of Fatou's lemma. Let us begin with observing that L d,d−k (A u (f ; T )) is an additive set functional. In particular, the set T can be written, as in [8] as a union of disjoint unit cuboids (w.l.o.g. let T be integer). Therefore, L d,d−k (A u (f ; T )) can be written as a sum of a stationary sequence of random variables where these random variables are an evaluation of L d,d−k (A u (f ; ·) on [0, 1) d , and its various integer shifts. 
Next invoking stationarity of the field (∇f, ∇ 2 f, f ) (and Z), we know that the variance of the sum of a stationary sequence is of the order of the cardinality of the sum if the covariance decays at an appropriate rate. Using this precise argument, and following the computations of [8], we can conclude (45). In following the arguments of [8], it is important to note that our estimates for the coefficients in the Hermite expansion match with those in [8]. Now we shall show that the variance corresponding to lower dimensional faces of T V * , is indeed o(1) for large T as expressed in Proposition 3.2. Recall the decomposition of L d−k,k from equation (19). Then, It suffices to show that var (L d−k,l (A u (f ; T ))) = o(|T |) for each l = 0, . . . , (k − 1), in order to conclude the second part of the assertion in equation (41). Let us define In view of ∂ l T = V * ∈Graff(d,k) ∂ l T V * , and the above computations leading to A(n, m, u, k), we note that var(R(d, k, l, T )) can be shown to be O(|∂ l T |), or equivalently O(T l ) under the assumption (H2), implying that the lower dimensional faces, asymptotically, do not contribute to the variance of L # d−k (A u (f ; T )). Nondegeneracy of the limit Finally, it remains to show that lim Using the orthogonality of chaos, it suffices to show that V k 1 > 0. First, we shall simplify the expression for V k 1 (T ) by introducing the canonical basis (e i ) 1≤i≤D of R D in (43), and writing c(e i , u, k, Λ) c(e j , u, k, Λ) A(e i , e j , u, k, T ). for canonical basis of dimension d, (D − d) respectively, and observing that c 1 (e i1 , k, Λ 1 ) = 0 (by (31)), the limiting variance corresponding to the first chaos, again using equation (31) for precise expression of c 1 (0, k, Λ 1 ), is given by with A(e i2 , e j2 , u, k) as defined in (44), given by where γ e i2 ,e j2 denotes the covariance function corresponding to the pair of indices which correspond to the position of 1's in (0, e i2 ) and (0, e j2 ), respectively, where 0 is a d-dimensional row vector of zeros. We shall estimate separately the two terms appearing above. Let us begin with c 2 (e D2 , u, k, Λ 2 ) given in (33). We have Let us consider the lower triangular matrix Λ 2 such that its first element (Λ 2 ) 11 equals 1 (as in [8]), i.e. of the form matrix, γ T a 1 × (D − d − 1) matrix, and l > 0. With the above notation, we can write where M(Ly * 2 ) is the symmetric matrix obtained by appropriately arranging the elements of the vector Ly * 2 . We can certainly think of the map y * 2 → M(Ly * 2 ) as a linear map, therefore, there exists a b ij such that Again recalling that c 2 does not depend on the choice of V , we shall fix the matrix V as [I k ; 0], where I k is k × k identity matrix and 0 is a k × (d − k) matrix of zeros. Then, where the right side is the notation for the top left k × k minor of M(Ly * 2 ). This latter argument (51) is key, since the next computations will then be similar as those done in a d-dimensional box ( [8]). We now give a brief sketch of the major steps involved to provide an overview of the full computation. 
We have Subsequently, using arguments similar to those in Lemma A.2 of [8] together with isotropy, we can obtain a Hermite expansion for the determinant as follows Combining (49), (50) and (52), and using that yϕ(y) = −ϕ ′ (y) to compute the integrand on y 2D , we obtain the following (for more details, we refer the reader to the proof of Lemma 2.2 of [8]) Since we can write M(Lγ) = −λI k with λ = −r ii (0), then Moreover, as in [8], we can write where we recall that h(0) is the spectral density of the field f evaluated at 0. Finally, putting together the estimates obtained in (53) and (54) in the following from which we deduce that V k 1 > 0, hence the second part of Proposition 3.2. ✷ Extension of Breuer-Major theorem to affine Grassmannian case Here we just give a sketchy recall of the literature on CLTs of Breuer-Major type, that can be found in [21], [22]. In 1983, Breuer-Major provided a CLT for a 1-dimensional centered stationary Gaussian sequence indexed by Z ν for ν ≥ 1, satisfying some condition on its correlation function. This result was first extended by Giraitis and Surgailis [10] when considering a continuous time setting, then by Arcones [3] with a powerful result holding for vector valued random sequences. The proof, in the discrete case, is based on the method of cumulants and diagram formulae. Estrade and Léon rewrote it explicitly (see [8] where V k q is defined in Proposition 3.2. Indeed, we have (28) and (29). Considering the projection onto the first Q chaos, Π Q L # d−k,k (A u (f ; T )) , defined in (45), we can write, as in the proof of Theorem 2.2 in [22] (or in [8]), where I q (f ) denotes the multiple Wiener-Itô integral (of order q) of f with respect to W , and where b k m are such that the mapping m → b k m is symmetric on {1, · · · , D} q , and we have again used isotropy to observe that b k m depends on V * only through its dimension, which is k. Moreover, the functions (u x,j ) 1≤j≤D are orthogonal in L 2 (R d ) such that for the field Z(x) defined in (16), where W is the complex Brownian measure on R d . Note that, in writing (55), we have used the Fubini theorem to interchange the Wiener-Itô integral and the integral over the space ∪ In order to prove the CLT of Π Q L # d−k,k (A u (f ; T )) , it is enough to check that, for 1 ≤ p, q ≤ Q, (see [21] or [22], and for the notation, [8]) where V k q is defined in Proposition 3.2, and D denotes the Malliavin derivative. Standard analysis as in [22] can be invoked to conclude that it suffices to check that, for p ≤ q, which holds since, on one hand, for the case p = q we have ||g T k,q || 2 H q = V k q (T ) which is shown to converge to V k q in Proposition 3.2. On the other hand, the e-th contraction of g T k,p satisfies, for e < p, with some constant C, and under (H2), As in [8], we note that ψ e (t 3 − t 4 )ψ p−e (t 1 − t 3 ) ≤ ψ p (t 3 − t 4 ) + ψ p (t 1 − t 3 ), and by Lemma 3.1, we have which matches the estimates of [8], and thus we can follow the rest of the arguments verbatim to conclude that for some finite, combinatorial constant C(k), we have This concludes the proof of Proposition 3.3. ✷ Collating Propositions 3.2 and 3.3 leads to the main result, that is the following CLT where σ 2 d−k (u) is given by q≥1 V k q in (42). Remark 3.4 The assumption of isotropy was crucial to circumvent a direct computation of Hermite coefficients of the LKCs, providing bounds independent of the choice of V . 
Nevertheless the CLT should hold true under the assumption of stationarity together with hypotheses (H1), (H2) and (H3). Discussion Extension to general parameter spaces: Notice that the only place where we required the box type shape of the parameter space is when we get an upper bound on the limiting variance of L # d−k,k (A u (f ; T )). However, this can be overcome by a limiting procedure. Let us partition the space R d into small cuboids of volume η. We can identify these small cuboids by the centre of the cuboids. Let C η T be the set of cuboids which completely lie in the set T , and B η T be the cuboids which have non empty intersection with the set T and the complement of T . Denoting P i,η for the elements of the partition of R d into cuboids of volume η, we have Notice that using stationarity and the decay of covariance function γ, as in [8], we can conclude that var L # d,d−k (A u (f ; T ), 1) = O(|C η T |), where |C η T | is the cumulative volume of al cuboids which constitute C η T . Next, observe that |C η T | → |T | as η → 0. It implies that the contribution by the boundary terms to the variance is o(1), and thus can be ignored, which eventually means that the asymptotic Gaussianity can be proved by following the same methods as sketched out in this paper, when considering a d-dimensional compact, convex, symmetric 4 subset of R d , as parameter space T . Joint convergence of the various LKCs: We note here that using similar ideas, one can prove the multivariate case for different values of the threshold u. One of the important questions to look forward to, is the joint distribution of various LKCs evaluated at a fixed threshold. Although we believe the joint convergence can be proven, getting meaningful estimates on limiting covariances is likely to be challenging. Notice that those finite dimensional distributions given in Corollary 4.1 might help to obtain the CLT for general LKCs in an alternative way. Indeed, if we may ensure the tightness, then applying the Hadwiger formula (8) allows to conclude the CLT of L k (A u (f ; T )). Nevertheless, proving the tightness on such a space is still an open problem.
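For readers less familiar with the Breuer-Major mechanism invoked in Section 3.4, the following one-dimensional toy simulation illustrates the classical statement behind it: for a stationary Gaussian sequence with summable covariances, normalized sums of a fixed Hermite polynomial of the sequence have a finite limiting variance and are approximately Gaussian. It illustrates only the classical theorem, not the Grassmannian extension developed above, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_gaussian(n, rho):
    """Stationary Gaussian AR(1) sequence with unit marginal variance."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for t in range(1, n):
        x[t] = rho * x[t - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()
    return x

def normalized_h2_sum(n, rho):
    x = ar1_gaussian(n, rho)
    return (x**2 - 1.0).sum() / np.sqrt(n)     # He_2(x) = x^2 - 1

samples = np.array([normalized_h2_sum(2000, rho=0.5) for _ in range(1000)])
centred = samples - samples.mean()
print("mean     :", samples.mean())                          # close to 0
print("variance :", samples.var())                           # finite Breuer-Major limit
print("skewness :", (centred**3).mean() / samples.std()**3)  # close to 0 for a Gaussian limit
```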
2017-04-03T06:31:06.000Z
2016-07-26T00:00:00.000
{ "year": 2016, "sha1": "ebc7b1a9106f8d251cc40064c85475e698fe29eb", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ebc7b1a9106f8d251cc40064c85475e698fe29eb", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
237460925
pes2o/s2orc
v3-fos-license
Can Multistep Nonparametric Regressions Beat Historical Average in Predicting Excess Stock Returns? Several economic and financial variables are said to have predictive power over excess stock returns. Empirically there is little consensus among academics, whether these variables have predictive power or not. Results are often sensitive to the econometric model of choice. The econometric models can produce biased results due to the high degree of persistence in predictive variables. Apart from high persistence, the relationship between stock return and the predictive variable may also be misspecified in the model. In order to address possible non-linearities and endogeneity between the residuals and persistent independent variables in predictive regressions, multi-step non-parametric and semiparametric regressions are explored in this paper. In these regressions, the conditional mean and the residuals are estimated separately and then added to obtain the predicted excess stock returns. Goyal and Welch's (2008) predictive variables are used to predict excess S&P 500 returns. The predictive performance of both in-sample and out-of-sample of the two proposed models are compared with the historical average, Ordinary Least Squares (OLS) and non-parametric regressions. The performance of the models is evaluated using Root Mean Squared Errors (RMSEs). The explored models, particularly the two-step nonparametric model, outperform the compared models in-sample. Out-of-sample several variables are found to have predictive ability. Introduction This paper explores two multi-step non-parametric and semi-parametric methods, which estimate the conditional mean and the residuals separately. Preliminary work done in this area involved using OLS regression of returns on lagged instrument variables that have predictive power over stock returns. While this is not the first attempt to apply non-parametric to predict excess stock returns, see Jin et. al, (2013), Lee et. al, (2014), and Chen & Hong (2016), the models explored in this paper have not been applied before. Prior to the late twentieth century, the consensus in the finance literature was that excess stock returns were entirely unpredictable (Fama, 1970), attributing to the efficient market hypothesis. However, towards the end of the century, numerous studies came out that believed otherwise; several variables were found to have predictive power over excess stock return. Fama and French (1988a) and Poterba and Summers (1988) find that the statistical significance of their univariate model using only past returns improves greatly when predictive variables are added to the model. Among many economic variables that are found to have predictive powers, the most notable are short term interest rates (Fama & Schwert, 1977), yield spreads (Campbell J. Y., 1987), stock market volatility (Goyal & Santa-Clara, 2003;Yin, 2019), book-to-market ratios (Ponti and Schall, 1998), price-earnings ratios (Campbell and Shiller 1988), and dividend-price ratio (Campbell and Shiller, 1988;Fama and French, 1988b;Lettau and Van Nieuwerburgh, 2008). Li and Tsiakas (2017) find excess return to be predictable out-of-sample when many of these economic variables are used in a kitchen sink regression with shrinkage. 
Given the noisy nature of stock returns, a sizable portion of the series tends to remain unpredictable; however, based on in-sample tests, there now seems to be a consensus among financial economists that the series does contain a significant predictable component (Campbell, 2000). Using bivariate predictive regressions, Goyal and Welch (2008) show that these predicting variables perform poorly in out-of-sample forecasts in comparison with the historical average excess stock return. Campbell and Thompson (2008), on the other hand, use a priori knowledge about the regression parameters to impose sign restrictions on them, and show that many predictive variables then have better out-of-sample performance than the historical average return. Baltas and Karyampas (2018) attribute the sensitivity of the predictive ability to stages in the business cycle, and Tsiakas, Li and Zhang (2020) find certain variables to have predictive power during expansions and others during recessions. The controversy surrounding the out-of-sample performance of the predictive variables casts doubt over their predictive ability, and the possibility that the contradicting results are due to model misspecification poses an even more serious concern. The non-robust results of return predictability may stem from the statistical tests performed (Lamoureux & Zhou, 1996). Using a linear model when the true data generation process is non-linear may seriously undermine forecasts. Chen and Hong (2016) point out that a linear model might not be appropriate to capture the movements in stock returns and suggest using non-parametric regressions, which can capture the linearities and non-linearities in the data without imposing parametric restrictions. According to Chen and Hong (2016), the restrictions imposed by Campbell and Thompson are one way of introducing non-linearity into the model, and like the latter they find predictive variables to outperform the historical average in a non-parametric setting. Parametric and non-parametric forecast combination models reach a similar conclusion (Elliott et al., 2013; Jin et al., 2013). Another plausible reason for the contradicting results on the out-of-sample predictive ability of these variables is non-stationarity in the explanatory variables. Roll (2002) argues that, under rational expectations, if the innovations are independently and identically distributed then the expectation about a future quantity must follow a random walk. Stock prices are based on expectations about future quantities, and explanatory variables like the dividend yield and the book-to-market ratio are in turn functions of stock prices; these explanatory variables must therefore also follow a random walk. An unbalanced predictive regression of stationary stock returns on a non-stationary dividend yield may lead one to conclude that the dividend yield has no predictive power. Structural breaks might also be present in the data: for instance, Fama and French (2001) have pointed out a dramatic fall in the proportion of firms paying dividends in the late 1970s. If one is not careful, such structural breaks might be incorrectly categorized as non-stationarity. Apart from the term spread prior to 1952 and the dividend yield in the period 1926 to 1994, Torous, Valkanov and Yan (2004) find the presence of a unit root in all popular predictive variables. Using international data, Torous, Valkanov and Yan (2004) show that when the dividend-to-price ratio is stationary it has predictive power, and not when it is non-stationary.
Torous, Valkanov and Yan (2004) find the presence of a unit root in almost all commonly used predictive variables, within a 95% confidence interval. In pre-1926 and post-1994 data, Torous, Valkanov and Yan's (2004) tests indicate the presence of a unit root in the dividend yield, and when the dividend yield from those sub-periods is used to predict excess stock returns, the predictive power is lost. Thus, the presence of a unit root in predictive variables might explain why in certain cases they are found to have predictive power and not in other cases. Due to the possibility of a nonlinear relationship between excess stock returns and predictive variables, and non-stationarities in the predictive variables, this paper explores two multi-step non-parametric and semi-parametric methods, which estimate the conditional mean and the residuals separately. The motivation is to evaluate whether such augmented non-parametric regressions can predict excess stock returns in-sample and out-of-sample. The empirical performance of the models proposed in this paper is compared with the historical mean model, the simple OLS model, and local constant and local linear non-parametric models, on the basis of root mean squared (forecast) errors. The analysis is performed using Goyal and Welch's (2008) original data up to 2005 and using the extended data up to 2019. The results should be relevant to practitioners and academics attempting similar models to predict excess stock returns and help inform their decisions to proceed. Several methods have been explored to correct the small-sample bias that persistent regressors induce in predictive regressions. Stambaugh (1999), for instance, uses the analytical expression of the bias in the univariate linear model, popularly known as Stambaugh's bias, and corrects the biased estimates accordingly. The analytical expression of the bias derived by Stambaugh (1999) holds only when the predictive variable is stationary and under normality; both stationarity of predictive variables and normality of error terms are strong assumptions in models of excess returns (Roll, 2002). Amihud and Hurvich (2004) propose using a two-step augmented regression where the conditional mean and residuals are estimated separately using linear regression. The work proposed in this paper follows Amihud and Hurvich's (2004) two-step augmented regression, where the parametric models are replaced with non-parametric and semi-parametric counterparts. The paper is organized as follows: section 2 presents the estimation of the two multi-step non-parametric and semi-parametric regressions explored, along with the other models used for comparison, section 3 shares the empirical results, and section 4 concludes.

OLS

Preliminary studies use linear regression to predict excess returns using other financial variables, and their lags, that tend to move with excess returns; such a model is shown by (1), where r_t is the excess return and x_{t-1} is a lagged explanatory variable:

r_t = α + β x_{t-1} + u_t.    (1)

The parameters of the simple OLS regression are estimated by (2), where the t-th rows of the matrix X and the vector R are (1, x_{t-1}) and (r_t), respectively, and the predicted return, r̂_{t,OLS}, is given by (3):

(α̂, β̂)' = (X'X)^(-1) X'R,    (2)

r̂_{t,OLS} = α̂ + β̂ x_{t-1}.    (3)

OLS estimates are unbiased if all the information in x_{t-1} has been used to predict r_t. As most financial variables are highly persistent, there is information in the lags of x_{t-1} that is not independent of u_t. For instance, if the predicting variable x_{t-1} follows an AR(1) process like (4), then E(x_{t-1} | u_t) ≠ 0:

x_t = θ + ρ x_{t-1} + v_t.    (4)
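A minimal sketch of the two benchmark models above — the OLS predictive regression (1)-(3) and the historical-average forecast (6) — might look as follows; the series are simulated placeholders rather than the S&P 500 data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-ins for excess returns r_t and a persistent predictor x_{t-1}.
T, rho = 200, 0.95
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + rng.standard_normal()
r = 0.02 + 0.10 * x[:-1] + 0.5 * rng.standard_normal(T - 1)   # r_t regressed on x_{t-1}

# (1)-(3): OLS predictive regression and its fitted values.
X = np.column_stack([np.ones(T - 1), x[:-1]])
beta, *_ = np.linalg.lstsq(X, r, rcond=None)
r_hat_ols = X @ beta

# (6): historical-average forecast of r_t, the mean of past realized returns.
r_hat_ha = np.array([r[:t].mean() for t in range(1, T - 1)])

rmse = lambda e: float(np.sqrt(np.mean(np.asarray(e) ** 2)))
print("in-sample OLS RMSE      :", rmse(r - r_hat_ols))
print("historical-average RMSE :", rmse(r[1:] - r_hat_ha))
```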
If x_{t-1} is persistent, the error terms in (1) and (4) are not independent of each other and can be related through (5), u_t = xi*v_t + e_t, where xi is non-zero and the e_t are i.i.d. errors independent of v_t and its lags. Thus, a simple OLS regression with an autoregressive predicting variable will result in biased estimates.

Historical Average (HA)

Goyal and Welch (2008) compare the simple OLS predicted returns with the historical average (HA) returns shown in (6), where the predicted return is the average of the past realized returns.

Nonparametric (NP)

Instead of assuming the data-generating process to be the linear model in (1), the functional form can be expressed as m(x_{t-1}) and estimated with a local constant non-parametric model as shown in (7). For a discrete random variable x_{t-1} there are n* observations in its neighborhood, say the points x; m(x_{t-1}) is the average of the r_t's corresponding to those x's (Pagan & Ullah, 1999). The window width h determines the size of the neighborhood of x_{t-1} used to estimate m(x_{t-1}), as shown in (8), where psi_{t-1} = (x - x_{t-1})/h. A kernel function K can be used for smoothing, as illustrated in (9). While the local constant estimator minimizes the sum of [r_t - m]^2 K(psi_{t-1}) with respect to m, the local linear estimator minimizes the sum of [r_t - a - b(x_{t-1} - x)]^2 K(psi_{t-1}) with respect to a and b. Although the nonparametric regression addresses the specification bias stemming from the choice of functional form between r_t and x_{t-1}, it does not take into account the predictive bias stemming from a highly autoregressive x_{t-1}. This paper explores two new multistep nonparametric and semiparametric models to address that predictive regression bias.

Model 1: Multistep Semiparametric Model (Multistep SP)

In the multistep semi-parametric model, excess stock returns are predicted using a combination of linear and non-linear models. Any linear relationship between the excess stock return and the predictive variable is first captured using the OLS regression (1). The linear prediction is then re-scaled for additional nonlinearities. Any remaining non-linearities, and the endogeneity between x_{t-1} and u_t, are then addressed by nonparametrically estimating the residuals of (1), u_t, using the residuals of the AR(1) process of x_{t-1}, v_t. After running the OLS regressions (1) and (4), the residuals are saved and used in a nonparametric regression as shown in (10). The estimated values u_{t,SP} from (10) are then used to update equation (1) as illustrated in (11). The predicted excess stock return r_{t,SP} is the sum of the predicted excess return from the OLS model (1) and the predicted residual from (10).

Model 2: Multistep Nonparametric Model (Multistep NP)

The multistep nonparametric model is similar to the previous model, except that the linear regressions (1) and (4) are replaced with nonparametric regressions. Step 1: excess stock returns are regressed on the predictive variable using a nonparametric regression as in (12), and the residuals u_{t,NP} are saved. Step 2: the predictive variable is regressed nonparametrically on its own lag, as the nonparametric analogue of (4), and the residuals v_{t,NP} are saved (13). Step 3: the residuals u_{t,NP} are regressed nonparametrically on v_{t,NP} (14). Step 4: excess stock returns are predicted as the sum of the predicted values of (12) and (14). An across-the-board non-parametric model addresses not only any nonlinear relationship between the excess stock return and the predictive variable, but also any nonlinear relationship the predictive variable may have with its own past.
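The following sketch illustrates one way Model 1 could be implemented, under the assumption that step one is a plain OLS predictive regression and that the residual link is estimated with a Gaussian-kernel local constant (Nadaraya-Watson) smoother. The bandwidth rule, residual alignment and variable names are this sketch's own choices, not the paper's.

import numpy as np

def ols_fit(y, x):
    # OLS of y on a constant and x; returns coefficients and residuals
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

def nw_smoother(y, x, grid, h):
    # local constant (Nadaraya-Watson) estimate of E[y|x] at each point in `grid`
    out = np.empty_like(grid, dtype=float)
    for i, x0 in enumerate(grid):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
        out[i] = np.sum(w * y) / np.sum(w)
    return out

def multistep_sp_fit(r, x_lag):
    # Step 1: linear predictive regression r_t = a + b x_{t-1} + u_t
    beta_r, u_hat = ols_fit(r, x_lag)
    # Step 2: AR(1) for the predictor; residuals v_hat (first value padded for alignment)
    beta_x, v_hat = ols_fit(x_lag[1:], x_lag[:-1])
    v_hat = np.concatenate([[0.0], v_hat])
    # Step 3: nonparametric regression of u_hat on v_hat
    h = 1.06 * np.std(v_hat) * len(v_hat) ** (-0.2)   # rule-of-thumb bandwidth
    u_fit = nw_smoother(u_hat, v_hat, v_hat, h)
    # Step 4: combined prediction = linear fit + nonparametric residual fit
    return (beta_r[0] + beta_r[1] * x_lag) + u_fit

# Hypothetical usage with simulated data:
rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(200)) * 0.1
r = 0.02 + 0.1 * x[:-1] + rng.standard_normal(199) * 0.05
print(multistep_sp_fit(r, x[:-1])[:5])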
r_{t,NPP} = m(x_{t-1}) + m_2(v_{t-1,NP})    (15), that is, r_{t,NPP} = r_{t,NP} + u_{t,NPP}, the sum of the nonparametric prediction from (12) and the predicted residual from (14).

In the next section, the in-sample and out-of-sample predictive performance of the two proposed models is compared with the historical average, OLS and nonparametric regressions, for the predictive variables used in Goyal and Welch (2008) and Campbell and Thompson (2008).

Empirical Results

Annual S&P 500 Index returns with dividends, in excess of the risk-free return, are predicted using the historical average in (6), the OLS regression model in (1), the nonparametric regression (NP) in (7), and the proposed multistep semi-parametric (Multistep SP) and nonparametric (Multistep NP) models. Data are collected from Amit Goyal's website. (Table notes: bold typeface in each row indicates the model with the lowest RMSE when compared to 4 decimal places; Start reports the start year of the sample; rho is the one-lag autocorrelation of the independent variable; the dependent variable is the risk premium with dividends.)

The out-of-sample root mean squared forecast errors (RMSFE) of the aforementioned models for the original data up to 2005 are presented in Table 3. An expanding window is used for estimation, with the first sample using 20 years of data; the estimated model is used to forecast the one-year-ahead excess S&P 500 return. Bold typeface indicates the model with the lowest RMSFE for the respective predictive variable. The historical average model outperforms the other models in the out-of-sample analysis in half of the cases. For the other half of the variables studied, the predictive models were able to out-predict the historical average in terms of lower forecast errors. Out of sample, local constant regressions tend to produce lower forecast errors than the corresponding local linear models. The nonparametric and semiparametric models that outperform the historical average in-sample but not out-of-sample likely suffer from overfitting. Although no model consistently outperforms the others studied, the analysis does indicate which model is better suited to the variable in question. It is not unusual to expect that each of these variables has a unique relationship with, or possibly influence on, stock returns, and one particular model may not be suitable for all.

The last three rows present results for the dividend yield, the earnings-price ratio and the book-to-market ratio, for samples starting in 1928. It can be seen that the results are also sensitive to the starting year. The earnings-to-price ratio does not appear to have predictive ability based on the models tested when the sample starts from 1873. However, changing the start year to 1928 changes the predictive performance of the models, and all the studied models are then able to outperform the historical average. Measures such as the RMSFE can be swayed by extremely large forecast errors, even if they are rare.

The out-of-sample analysis extended to 2019 is presented in Table 4. In the extended data, the gains from the non-parametric and semiparametric models are reduced and the historical average tends to dominate for most variables. However, the dividend yield spread, the book-to-market ratio, the investment-to-capital ratio and percent equity issuing continue to show predictive power in the extended data. Local linear models tend to do better in-sample than local constant models, whereas out of sample local constant produces lower forecast errors. (Table notes: bold typeface in each row indicates the model with the lowest RMSFE when compared to 5 decimal places; Start reports the start year of the sample; an expanding window is used for estimation, with the first sample using 20 years of data; the dependent variable is the risk premium with dividends.)
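For concreteness, the out-of-sample design described above (an expanding window with a 20-observation initial sample, one-step-ahead forecasts, and RMSFE compared against the historical-average benchmark) can be sketched as follows. The data here are simulated placeholders, not the Goyal-Welch series, and the predictive model is a simple OLS regression rather than the full set of models studied in the paper.

import numpy as np

def expanding_rmsfe(r, x, min_obs=20):
    # returns (RMSFE of the OLS predictive regression, RMSFE of the historical average)
    e_ols, e_ha = [], []
    for t in range(min_obs, len(r)):
        r_train, x_train = r[:t], x[:t]
        # OLS of r_s on x_{s-1} within the training window
        X = np.column_stack([np.ones(t - 1), x_train[:-1]])
        beta, *_ = np.linalg.lstsq(X, r_train[1:], rcond=None)
        e_ols.append(r[t] - (beta[0] + beta[1] * x[t - 1]))
        e_ha.append(r[t] - r_train.mean())          # historical-average forecast
    return np.sqrt(np.mean(np.square(e_ols))), np.sqrt(np.mean(np.square(e_ha)))

rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(120)) * 0.1        # hypothetical predictor
r = 0.03 + 0.05 * np.roll(x, 1) + rng.standard_normal(120) * 0.1
rmsfe_model, rmsfe_ha = expanding_rmsfe(r[1:], x[1:], min_obs=20)
print(f"RMSFE model: {rmsfe_model:.4f}  RMSFE historical average: {rmsfe_ha:.4f}")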
Conclusion

The predictability of stock returns is an elusive subject, and whether certain variables have predictive power over stock returns continues to hold the interest of many academics and practitioners. The presence of high autocorrelation in the predictive variables and possible non-linearities in their relationship with stock returns further complicate the matter. In order to address the possible non-linearity, and the endogeneity between the residuals caused by the persistent independent variables in the predictive regression, multistep semiparametric and non-parametric methods are explored, in which the conditional mean and the residuals are estimated separately and added to obtain the predicted excess stock return. Using Goyal and Welch's (2008) predictive variables, the proposed models, particularly the multistep nonparametric model, produce better in-sample estimates of the excess S&P 500 return than the historical average and the OLS regression. Out of sample the results are mixed: while for many variables the historical average dominates in terms of producing lower forecast errors, there are several variables for which the proposed models predict excess stock returns better than the historical average. Future research in this area can focus on studying individual variables and their relationship with excess stock returns to find the most suitable forecasting model. Different estimation and forecast windows may also provide forecasting opportunities. In order to reduce the overfitting often encountered in non-parametric regression, possible regularization parameters can be explored.
2021-09-09T20:48:09.312Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "a17eaea839aed91f222d53ab32ca6ef6940802cd", "oa_license": "CCBY", "oa_url": "https://www.sciedu.ca/journal/index.php/ijfr/article/download/20611/12798", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "dce3fdc800c2ebc253907e97f74d249cce97b16d", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
124716572
pes2o/s2orc
v3-fos-license
DYNAMIC DAMPERS OF PRESSURE FLUCTUATION IN PUMPING SYSTEMS

O.V. Korolov, Zhou Huiyu. Dynamic dampers of pressure fluctuations in pumping systems. The inertial part of any device or machine (for example, a pump), suspended or mounted on an elastic frame and subjected to an exciting force acting at a constant frequency, may be prone to oscillations, especially near the resonance region. To eliminate such oscillations, a dynamic vibration damper can be used. Aim: the aim of this work is an analytical study of various dynamic dampers for reducing pressure fluctuations in pumping systems. Materials and methods: a comparative analysis of operating efficiency was carried out for dynamic dampers of two types, hydraulic and mechanical. Results: a method for calculating dynamic dampers of fluid pressure fluctuations in pumps, of both hydraulic and mechanical type, is presented. The calculation algorithms have been brought to engineering applications and introduced into the production process. The calculations show that the use of mechanical dynamic vibration dampers is advisable for high-frequency pumps; at the same time, with a 6-fold increase in pump operating frequency, the damper dimensions are reduced by a factor of 3.5. Keywords: dynamic vibration damper, damper, oscillations, exciting force.

Introduction. The inertial part of any device or piece of equipment (e.g., a pump), hung or mounted on a resilient frame and subjected to a disturbing force acting at a constant frequency, may be prone to fluctuations, especially near the resonance region. With regard to pumping systems, we can confidently assert that such fluctuations lead to increased vibration of pipelines, reduce the service life of the pumping system as a whole, and introduce a significant error into the measurement of the flow rate delivered by such a pump. Such fluctuations can be eliminated in one of two ways: either by removing the disturbing force, which is impossible when the source of the disturbance is the pump system itself, or by moving the system out of the resonance region by changing the inertial and resilient components of the system. However, in the case of the suction conduit of a piston pump connected to a large-volume supply tank, this approach is unworkable. In this case, one can resort to the dynamic damper (DD), invented in 1909 by Frahm.

The basic dynamic damper diagram is shown in Fig. 1. A periodic disturbing force P0*sin(wt) acts on the inertial part of mass M contained in the mechanism. The resilient component of the system is summarized by a spring of stiffness K. The dynamic damper is an oscillating system with a mass m much smaller than that of the pump and a spring of stiffness k. As can be seen from the figure, the DD is attached to the mass M. The condition for the DD to work is the equality of the natural frequency sqrt(k/m) of the attached damper and the frequency w of the disturbing force. In this case, the whole system operates in such a way that the mass M does not oscillate, while the oscillating system with mass m and spring stiffness k oscillates so that the elastic force of its spring is equal in magnitude and opposite in direction to the disturbing force P0*sin(wt).

Full proofs of the statements presented hereinafter are given, for example, in [1, 2]. We introduce the following dimensionless parameters: sqrt(K/M), the natural frequency of the main system, 1/s; mu = m/M, the ratio of the mass of the dynamic damper to the mass of the main system.
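Before turning to the dimensionless amplitude expressions (1) and (2), the same two-mass model can be checked numerically in dimensional form. The sketch below evaluates the classical steady-state amplitudes of an undamped Frahm absorber (a textbook result, not the paper's own derivation) and confirms that the main mass stops oscillating when the disturbing frequency equals sqrt(k/m). All numerical values are hypothetical.

import numpy as np

def absorber_amplitudes(M, K, m, k, P0, omega):
    # steady-state amplitudes (X_M, X_m) of main mass and damper mass under P0*sin(omega*t)
    D = (K + k - M * omega**2) * (k - m * omega**2) - k**2
    X_M = P0 * (k - m * omega**2) / D
    X_m = P0 * k / D
    return X_M, X_m

M, K = 100.0, 4.0e4          # main mass [kg] and main stiffness [N/m] (hypothetical)
m, k = 10.0, 4.0e3           # damper mass and stiffness, tuned so sqrt(k/m) = sqrt(K/M) = 20 rad/s
P0 = 50.0                    # amplitude of the disturbing force [N]

for omega in (15.0, 20.0, 25.0):
    X_M, X_m = absorber_amplitudes(M, K, m, k, P0, omega)
    print(f"omega={omega:5.1f} rad/s  X_M={X_M: .5f} m  X_m={X_m: .5f} m")
# At omega = sqrt(k/m) = 20 rad/s the main-mass amplitude X_M is zero and X_m = -P0/k.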
From these values one can obtain the dependences for the relative oscillation amplitudes of the pump mass (1) and the DD mass (2), where w is the frequency of the disturbing force, rad/s (1/s).

Fig. 1. Schematic diagram of the dynamic damper

As can be seen from (1) and (2), when the frequency of the disturbing force w coincides with the natural frequency of the DD, w_d, the oscillation amplitude of the mass M goes to zero (x_M tends to 0), and the oscillation amplitude of the DD is, accordingly, proportional to the ratio of the spring stiffness coefficients of the main and auxiliary masses.

The main disadvantage of dynamic damper designs is their narrow working range, which limits their use mainly to systems with a disturbing force of constant frequency, such as synchronous machines. Recently, however, the range of DD applicability has been expanded by solving the problem of DD operation in systems with a disturbing force of variable frequency. For this purpose, the DD is manufactured so that its natural frequency w_d can be adjusted and tuned into resonance with the disturbing force. Most often this is achieved by changing the elasticity of the DD springs using various kinds of design tools. The range of frequency adjustment is wide enough to ensure that the system can be tuned and retuned.

The aim of the research is an analytical study of various dynamic dampers for reducing pressure fluctuations in pumping systems.

Materials and Methods. When applying a DD to reduce pressure fluctuations in a system, such as the suction line of a piston pump, it is first necessary to determine what constitutes the mass M and what is meant by the spring of stiffness K. By the mass M we mean the mass of liquid in the pipeline. The spring stiffness coefficient K corresponds to the resilience of the compressible medium in the connected supply tank. The gas cap-damper mounted at the pump inlet acts as the DD, i.e., k is the resilience coefficient of the gas in the connected gas cap-damper and m is the mass of the liquid contained between the pump inlet and the damper location. The disturbing force P0*sin(wt) is applied to both masses M and m, which, however, does not invalidate the general considerations on the operation of the DD. The introduction of such an analogy greatly simplifies the analysis of the damper operation under these conditions and shows the important role not only of the resonant volumes of the tanks included in the work, but also of the mass of liquid separated in the pipeline by the installed damper.

As applied to the suction conduit, the masses M and m can be written as M = rho_f*L*F and m = rho_f*l*F, where L and l are the full length of the pipeline and the length of its section from the pump inlet to the damper location, respectively; F is the cross-sectional flow area of the pipeline; rho_f is the fluid density. The natural frequency of the vibration damper under these conditions is calculated as w_d = sqrt(n*P0*F / (rho_f*l*V_d)), where P0 is the pressure in the damper; n is the adiabatic index; F is the cross-sectional flow area of the pipeline; V_d is the volume of gas in the damper (compressible volume).

Let us consider two types of pumps: a single-piston pump and a three-piston pump. Since the frequency of the disturbance under these conditions is w_I = 2*pi*f_I = 10*pi for the single-piston pump and w_III = 60*pi for the three-piston pump, the conditions for calculating the vibration damper frequency follow accordingly. Hence, the structural characteristics of the "gas cap" are defined, respectively, for the two cases.
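A small numerical sketch of the gas-cap sizing implied by the relation above: assuming the adiabatic gas-spring stiffness k = n*P0*F^2/V_d and the liquid-slug mass m = rho_f*l*F, the gas volume required to tune the damper to a target frequency follows directly. The numerical values (pressure, pipe size, length l) are placeholders for illustration, not the paper's data.

import math

def gas_cap_volume(omega_target, P0, n, F, rho_f, ell):
    # gas volume V_d [m^3] that tunes the gas-cap damper to omega_target [rad/s],
    # assuming k = n*P0*F**2/V_d (adiabatic gas spring) and m = rho_f*ell*F (liquid slug)
    return n * P0 * F / (rho_f * ell * omega_target**2)

P0 = 1.0e5            # gas pressure in the damper [Pa]        (hypothetical)
n = 1.4               # adiabatic index for air
D_p = 0.042           # inner pipe diameter [m] for a 45x1.5 pipe
F = math.pi * D_p**2 / 4
rho_f = 1000.0        # water density [kg/m^3]
ell = 0.05            # pipe length between pump inlet and damper [m] (hypothetical)

for omega in (10 * math.pi, 60 * math.pi):   # the two excitation frequencies quoted above
    V_d = gas_cap_volume(omega, P0, n, F, rho_f, ell)
    print(f"omega = {omega:7.2f} rad/s  ->  V_d = {V_d * 1e6:8.1f} cm^3")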
It should be noted that the choice of the damper installation location, i.e. the length l, is limited by the condition m much smaller than M, and therefore the damper must be installed as close as possible to the pump inlet (the limit is usually expressed in terms of D_p, the diameter of the pipe). Accordingly, the parameters of the gas chamber of a damper made of 70x7.0 pipe and installed on a 45x1.5 pipeline can be determined.

Installation of a DD of mechanical type can also serve as a means of combating significant pressure fluctuations in the suction line; its diagram is shown in Fig. 2. The DD presented in Fig. 2 is a resonance absorber, i.e., it works only in the region of resonant frequencies and frequencies close to them. When the disturbance frequency becomes significantly different from the resonant one, the liquid oscillations in the pumping system become similar to the oscillations in a system without an installed DD. To avoid such effects, the spring elements must be installed in such a way that their elastic properties can be changed by changing the degree of tension (which changes the resonant frequency of the damper).

The best design for this purpose is a series installation of elastic-inertial elements (piston-spring) in one housing. In the case of a large number of components, this vibrational chain acts as a mechanical low-pass filter, i.e., it does not transmit disturbances with frequency w greater than or equal to 2*sqrt(k/m). As a model system we consider a DD comprising three oscillators with masses m, 2m and 3m. The DD oscillation frequency in this case can be calculated as w_s = 0.283*sqrt(k/m). According to [2], this reduces the natural oscillation frequency of the DD by approximately a factor of 3 in comparison with the structure shown in Fig. 2, which is essential for quenching low-frequency oscillations.

Fig. 2. Schematic diagram of the DD with mechanical elastic elements

We calculate the parameters of a DD installed in a horizontal pipeline of 45x1.5. The initial data for the calculation and the spring stiffnesses in the system are taken in accordance with [4]. By varying the number of coils in the spring, we determine the necessary length of the pistons. The number of wire coils is taken as 10 for the single-piston pump and 2 for the three-piston pump. The calculations make it possible to construct a dynamic vibration damper both for the single-piston pump and for the three-piston pump.
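As a cross-check on the three-oscillator chain mentioned above, the natural frequencies of a spring-mass chain can be obtained from a generalized eigenvalue calculation. The sketch below assumes a fixed-free chain with equal spring stiffness k between the wall and mass m, between m and 2m, and between 2m and 3m; under that assumed layout the lowest frequency comes out near 0.28*sqrt(k/m), consistent with the 0.283*sqrt(k/m) quoted in the text. The layout and values are this sketch's assumptions, not taken from the paper.

import numpy as np

def chain_frequencies(masses, k):
    # natural frequencies (rad/s) of a fixed-free spring-mass chain with equal stiffness k
    n = len(masses)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] = 2 * k if i < n - 1 else k       # last mass has a spring on one side only
        if i > 0:
            K[i, i - 1] = K[i - 1, i] = -k
    M = np.diag(masses)
    # generalized eigenvalue problem K*phi = w^2 * M * phi
    eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(np.real(eigvals)))

m, k = 1.0, 1.0
print(chain_frequencies([m, 2 * m, 3 * m], k))    # lowest value is close to 0.28*sqrt(k/m)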
Conclusions. Methods for calculating dynamic dampers of fluid pressure oscillations in pumps, of both hydraulic and mechanical type, have been presented. The calculation algorithms have been brought to engineering applications and implemented in the production process at the Kislorodmash and Krioprom plants (Odessa, Ukraine).

It is shown that pressure variations in the suction pipe can be reduced not only by installing a "gas cap" damper, but also by a dynamic damper of mechanical type. The calculations show that the use of a mechanical DD is appropriate for high-frequency pumps; with a 6-fold increase in pump operating frequency, the damper dimensions are reduced by a factor of 3.5. The presented calculation method will allow the range of applicability of DDs in hydraulic systems to be expanded.

Notation: w_I, rad/s, the oscillation frequency of the piston pump group for the single-piston system; w_III = 30 rad/s, the oscillation frequency of the piston pump group for the three-piston system; the number of spring coils. The mass of the piston is determined according to [4]; taking the hole area for the passage f from the condition d_p/D_p = 0.7, an expression for the mass of the piston is obtained, and from it the expression for the natural frequency of the DD oscillations. Taking into account that the frequencies of the perturbing forces for the two types of pumps are 5*pi and 30*pi rad/s, respectively, the dimensions of the springs and pistons included in the DD are defined.
2018-12-06T21:51:47.853Z
2016-04-27T00:00:00.000
{ "year": 2016, "sha1": "2e80cc8b6ef2030c440fddd4b5605beb1f35fa31", "oa_license": "CCBY", "oa_url": "http://pratsi.opu.ua/app/webroot/articles/1463076736.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2e80cc8b6ef2030c440fddd4b5605beb1f35fa31", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Materials Science" ] }
262577212
pes2o/s2orc
v3-fos-license
Nature of senior high school chemistry students' alternative conceptions in organic qualitative analysis

ABSTRACT

INTRODUCTION

Chemistry fundamentally deals with the study of matter and the changes it undergoes (Ebbing & Gammon, 2005). Chemical analysis is a vital tool in learning other chemistry-related concepts in the areas of medicine, the chemical industry, government, and academic laboratories throughout the world due to its interdisciplinary nature. Learning chemical analysis in chemistry falls within the area of analytical chemistry. Analytical chemistry consists of a set of powerful ideas and methods that are useful in all fields of science, medicine, and engineering (Skoog et al., 2014). The scope of analytical chemistry continues to be vital and to evolve due to its enormous applications in other scientific fields such as biology, materials science, ecology, medicine, and forensic science. For instance, analytical concepts are employed to determine the identity and amount of major, minor and trace elements in substances; concentrations of oxygen and carbon (IV) oxide (CO2) are determined to diagnose and treat many illnesses; and quantities of hydrocarbons (compounds containing carbon and hydrogen only), oxides of nitrogen (such as nitrogen (II) oxide (NO) and nitrogen (IV) oxide (NO2)), and carbon (II) oxide (CO) present in exhaust gases from automobiles are determined using chemical analysis (Atkins & Carey, 1990; Fessenden & Fessenden, 1994).

Analytical chemistry is broadly classified into quantitative analysis and qualitative analysis (Ministry of Education [MOE], 2010). Quantitative analysis deals with the estimation of the constituents of a substance, whereas qualitative analysis deals with the identification and detection of the constituents of a substance or mixture of substances in solution (Dash, 2011). Qualitative analysis is further categorized into inorganic qualitative analysis and organic qualitative analysis (OQA). Inorganic qualitative analysis considers the identification of inorganic ions (cations and anions) and gases, while OQA deals with the detection of functional groups in organic compounds (Fieser & Williamson, 1992).
OQA is a concept in organic chemistry that helps students understand the fundamental concepts of the structure and reactivity of organic compounds (Adu-Gyamfi & Anim-Eduful, 2022). Learning the chemical properties and reactions of organic compounds is abstract, difficult, and complicated for students (Vishnoi, 2009). Notwithstanding, students with a deep understanding of elemental chemical analysis are able to identify functional groups such as alkenes, alkynes, alkanols, alkanoic (carboxylic) acids, alkylalkanoates (esters), alkanals, alkanones, and amides (MOE, 2010) in solutions. Again, chemical analysis in OQA does not only help students improve their experimental techniques in organic chemistry, but also inculcates in students the spirit of deductive reasoning, thus enabling them to apply theoretical knowledge to practical problems in their daily lives. To Fieser and Williamson (1992), OQA offers students the opportunity to identify unknown chemicals in substances, including toxic substances. Identification of a functional group in organic chemistry occurs when there is a chemical reaction between organic solutions and suitable oxidizing and reducing reagents such as acidified potassium heptaoxodichromate (VI), brown bromine solution, ammoniacal silver nitrate (AgNO3/NH3), sodium trioxocarbonate (IV) (Na2CO3) and acidified potassium tetraoxomanganate (VII) (Atkins & Carey, 1990; Ebbing & Gammon, 2005; Fieser & Williamson, 1992). A functional group is a specific combination of bonded atoms that reacts in a characteristic way for easy identification (Silberberg, 2000). Functional group detection is usually the more appropriate way of identifying and recognizing functional groups in organic compounds through chemical reactions and mechanisms. Vishnoi (2009) reports that the three most important and common problems students encounter in OQA are the separation of mixtures of organic compounds, the identification of organic compounds, and the preparation of organic compounds (in this study, we considered the identification of organic functional groups in compounds).

Organic chemistry, which is the study of carbon-containing compounds (Ebbing & Gammon, 2005; Fessenden & Fessenden, 1994; Fieser & Williamson, 1992), consists of many concepts (MOE, 2010) such as the nomenclature of structures, physical and chemical properties, separation and purification of compounds, chemical reactions and mechanisms, and the detection of functional groups (MOE, 2010, pp. 46-51). Researchers have revealed that students have difficulties with many of these concepts in organic chemistry.
For instance, difficulties have been reported with the nomenclature of organic compounds using the IUPAC system (Adu-Gyamfi et al., 2013, 2017), organic reactions and mechanisms (Bhattacharyya & Bodner, 2005; Ferguson & Bodner, 2008; Graulich, 2015; Tang et al., 2010; Wasacz, 2010), and functional group detection (Adu-Gyamfi & Anim-Eduful, 2022; Anim-Eduful & Adu-Gyamfi, 2021, 2022b). For example, we reported in our study (Anim-Eduful & Adu-Gyamfi, 2022b) that senior high school (SHS) chemistry students demonstrated conceptual difficulties at a level of no scientific understanding in the detection of organic functional groups such as hydrocarbons (alkenes, alkynes, and benzene), alkanols, alkanoic (carboxylic) acids, alkylalkanoates (esters), alkanals (aldehydes), alkanones (ketones) and amides, using a two-tier diagnostic test. The study further revealed that students' conceptual difficulties were categorized as factual difficulties, and alternative conceptions were envisaged in all the functional groups. However, that study did not report on the nature of the alternative conceptions held by the students.

In Ghana, chemical analysis is one of the important areas of chemistry introduced to students at SHS level (MOE, 2010). One of the major objectives of the chemistry curriculum is for chemistry students not only to acquire a deep understanding of the chemical analysis of compounds to stimulate their analytical thinking, but also to demonstrate knowledge of characteristic tests for functional groups (MOE, 2010, p. vii). This, perhaps, necessitated the developers of the Ghanaian chemistry curriculum to recommend that students acquire analytical skills to help them appreciate and conceptualize the chemical analysis of compounds in solutions. Students' deep knowledge of chemical analysis in functional group detection enhances their conceptual understanding of, and helps improve their scientific reasoning in, other chemistry concepts (MOE, 2010). However, notwithstanding the importance of chemical analysis to students, empirical studies (Adu-Gyamfi & Anim-Eduful, 2022; Anim-Eduful & Adu-Gyamfi, 2022a, 2022b) have reported students' difficulties in learning OQA concepts.
Many studies have revealed students' difficulties in organic chemistry concepts (Adu-Gyamfi & Asaki, 2022, 2023;Adu-Gyamfi et al., 2013, 2017;Bhattacharyya & Bodner, 2005;Childs & Sheehan, 2009;Ferguson & Bodner, 2008;Graulich, 2015;Wasacz, 2010).However, very few focused on OQA concept (Adu-Gyamfi & Anim-Eduful, 2022;Anim-Eduful & Adu-Gyamfi, 2021, 2022a, 2022b) in organic chemistry.An examination of literature on students' difficulties in learning OQA have shown that all these studies mainly focused on investigating students' level of understanding and their conceptual difficulties (that is, their factual difficulties and their alternative conceptions) in OQA.However, none of these studies has focused on the nature of students' alternative conceptions in OQA.For instance, Anim-Eduful and Adu-Gyamfi (2022b) investigated students' conceptual difficulties using a two-tier diagnostic test, which consisted of an answertier (A-tier) containing four options with three distractions and one correct answer.The answer tier sought for students' content knowledge.The second tier of the instrument was of an open-ended type that sought students' explanations (reasons-tier, R-tier) to the selected answers in the A-tier.In that study, we classified students to have alternative conceptions on the basis that either students scored both tiers incorrectly (no scientific understanding) or scored any of the two-tier correctly (partial scientific understanding), but students who scored both tier correctly were classified to have full scientific understanding. As science educators, we sought to investigate further about the nature of students alternative conceptions known which has not been reported in the previous studies (Adu-Gyamfi & Anim-Eduful, 2022;Anim-Eduful & Adu-Gyamfi, 2021, 2022b) in OQA is what is missing (the gap) in the literature about students conceptual difficulties (alternative conceptions) in OQA.There is a need to further investigate students' alternative conceptions to help determine whether their alternative conceptions were genuine or were due to a lack of knowledge in OQA.Again, with respect to students classified to have full scientific understanding (correctly scored both tier), probably their understanding could be due to guessing of answers but not entirely complete understanding of the concept or could also be based on genuine understanding of OQA concepts.These are the gaps previous studies could not report hence, this current study seeks to fill. 
Previous studies on OQA could not account for the nature of students' alternative conceptions purposely due to the type of instrument (two-tier diagnostic test) used.This could be the main limitation of two-tier diagnostic test resulting in a research gap in the literature about the nature of alternative conceptions in OQA held by students.It will be, therefore, appropriate for us to investigate the nature of students' alternative conceptions in OQA using a more robust instrument.For the nature of students' alternative conception in OQA to be investigated and help fill the gap in the literature, a more robust diagnostic test instrument could be developed and tested to overcome these limitations of two-tier instrument.Sreenivasulu and Subramaniam, (2013) viewed learning as a process that results in a conceptual change.Learning occurs when learners are able to organize and integrate new knowledge acquired into their pre-existing knowledge (Sreenivasulu & Subramaniam, 2013).Students' cognitive structures (misconceptions), which are contrary to scientifically accepted explanations by the scientific community are resistant to conceptual change.Thus, students with alternative conceptions in chemistry concepts could have difficulties learning meaningfully and also understanding the taught concepts (Caleon & Subramaniam, 2010;Mutlu & Sesen, 2016;Palmer, 2001;Treagust, 1995).Students' alternative conceptions greatly interfere with their conceptual understanding of science concepts (Sreenivasulu & Subramaniam, 2013).It is necessary for science educators and researchers to diagnose students' alternative conceptions (misconceptions) to help educators develop and use more efficient and effective instructional teaching strategies that stimulate conceptual change in students in order for them [learners] to facilitate their comprehension of these science concepts.Students can achieve meaningful learning when a multiple choice diagnostic instrument such as the four-tier diagnostic test (Caleon & Subramaniam, 2010;Hoe & Subramaniam, 2016) is used for students' conceptual difficulties to be diagnosed. The limitations of the two-tier diagnostic test instrument made it difficult for previous studies in OQA (Adu-Gyamfi & Anim-Eduful, 2022;Anim-Eduful & Adu-Gyamfi, 2021, 2022b) to determine the nature of students' alternative conceptions.The limitation of the two-tier could be addressed significantly by incorporating confidence rating into both tiers (A-and Rtier) to become a four-tier multiple-choice diagnostic test instrument (Caleon & Subramaniam, 2010).The four-tier diagnostic test items consisted of answer tier (A-tier) for students' content knowledge and a reason tier (R-tier) for students explanation knowledge making it a two-tier and confidence rating at each tier making it a four-tier diagnostic test (Caleon & Subramaniam, 2010).The four-tier makes available confidence ratings to students to rate their level of confidence in their selected responses in the A-and R-tier, which measures the accuracy and precision of their selected options in both tier.The confidence ratings ranged from Just guessing (1) to absolutely confident (6). 
Myriads of studies (Caleon & Subramaniam, 2010;Hoe & Subramaniam, 2016;Onder-Celikkanli & Tan, 2022;Sreenivasulu & Subramaniam, 2013) have revealed the effectiveness of four-tier diagnostic test in investigating students' alternative conceptions and the nature of those alternative conceptions in other science-related concepts.For instance, a study conducted in Singapore by Sreenivasulu and Subramaniam (2013) explored 296 undergraduate chemistry students' understanding of thermodynamics using four-tier diagnostic instrument.Findings of the study revealed that students harbored as many as 34 alternative conceptions in thermodynamics concepts and the strength of these alternative conceptions held by students were made known.Sreenivasulu and Subramaniam (2013), therefore, suggested that not only do four-tier diagnostic instrument help diagnose students' alternative conceptions in thermodynamics concepts in physics but also has the potential of determining the nature of students' alternative conceptions. Similarly, in Singapore, Hoe and Subramaniam (2016) explored the alternative conceptions held by grade 9 students in acid-base concepts in chemistry using the four-tier diagnostic instrument.The study revealed that grade 9 students harbored 30 alternative conceptions in acid-base concepts such as properties of acids and bases, strengths of acids and bases, pH, neutralization reactions, indicators and sub-microscopic views of acids and bases.Hoe and Subramaniam (2016) concluded in their study that, the fourtier diagnostic instrument is effective in determining the strength of students' alternative conceptions in acids and bases.Subsequently, a more recent study conducted in Turkey by Onder-Celikkanli and Tan (2022) investigated tenth-grade students' misconceptions about electric charge imbalance using a four-tier diagnostic misconceptions test was administered to 402 students.Findings of the study suggested that the four-tier diagnostic instrument helped determine misconceptions harbored by students in their learning of electric charge imbalance in physics.Due to the effectiveness of the four-tier diagnostic test instrument in diagnosing misconceptions held by students, Onder-Celikkanli and Tan (2022) recommended the use of such an instrument.This is because these tiers help identify what students know (either they know by guessing or are genuine) and what they do not know.Studies above have shown the effectiveness of the fourtier diagnostic test instrument in investigating the nature of students' alternative conceptions of science concepts of which chemistry is not an exception.It is, therefore, appropriate to investigate the nature of chemistry students' alternative conceptions in OQA using a four-tier diagnostic test instrument. 
In Ghana, the West African Examinations Council (WAEC) chemistry chief examiner's reports (WAEC, 2015, 2016, 2017, 2018, 2019, 2020) have also reported on Ghanaian SHS chemistry students' difficulties in answering standardized test items on OQA (functional group detection) during their examinations. However, the chief examiners' reports have not indicated the nature of students' alternative conceptions in learning OQA concepts; that is, whether students' alternative conceptions in OQA are significant or otherwise, and whether these alternative conceptions are due to students' lack of knowledge or lack of understanding of the concepts. Studies in OQA (Adu-Gyamfi & Anim-Eduful, 2022; Anim-Eduful & Adu-Gyamfi, 2021, 2022b) were silent on the nature of the alternative conceptions shown by students. This is to say that these studies could not report whether students' correct answers provided as explanations of the concepts (suggesting understanding) were due to correct reasoning or due to guessing. Consequently, there is a lack of evidence in the literature to show whether students' incorrect answers (misconceptions) were due to wrong reasoning or to a lack of knowledge of the concepts rather than a lack of understanding of the concepts. Hence the need for this study.

As evidence abounds in the literature about the effectiveness of a four-tier diagnostic test instrument in investigating the nature of students' alternative conceptions in other science-related concepts (Hoe & Subramaniam, 2016; Onder-Celikkanli & Tan, 2022; Sreenivasulu & Subramaniam, 2013) but not in OQA, it is appropriate to investigate the nature of students' alternative conceptions in OQA using a four-tier diagnostic test instrument. This study will, to a large extent, contribute to the body of knowledge, as the findings will expand the boundary of the existing literature on students' alternative conceptions in OQA and also contribute to the existing body of knowledge in organic chemistry by accounting for the nature of students' alternative conceptions in OQA. Again, the findings of this study will inform policy and decision-making regarding the teaching and learning of OQA, and of organic chemistry as a whole, at the high school level and beyond.

Research Design

This study employed a cross-sectional survey design (Creswell, 2014). This design helped to collect quantitative data from a sampled population within the same period.

Sample & Sampling Procedures

This study was carried out in the Cape Coast Metropolis in the Central Region of Ghana. There were 10 SHSs in the Cape Coast Metropolis in the 2022/2023 academic year. The target population for this study was all SHS 3 students offering chemistry as an elective subject in the 2022/2023 academic year in all ten schools in the Metropolis. This was because organic chemistry is taught in SHS 2 (MOE, 2010); hence, SHS 3 chemistry students had studied the concept in form 2 and had covered enough of the concepts. Thus, SHS 3 students were in a better position to help obtain the data required for this study than those in SHS 1 and SHS 2. Schools in the Cape Coast Metropolis were selected for this study because all three categories of schools (category A, category B, and category C) (MOE, 2010) in Ghana are present within the Metropolis, and students in these categories of schools possess characteristics similar to those of students in similar schools in the other 15 regions of Ghana.
The ten schools were stratified into three strata as category A, category B, and category C (MOE, 2010).There were five category A schools, two category B and three category C schools.Two schools each were randomly selected from two categories (category A and category C) and the two category B schools were purposively selected.In all, six schools out of the ten schools were selected to participate in the study.This was to ensure that every chemistry student in the ten schools had equal chance of being selected to participate in the study.At the time of data collection, only three schools (one each from the three categories) had covered enough in OQA, thus, 345 SHS3 students from three schools within the metropolis participated in the study.This implies that the other three schools had not covered enough content in OQA required of them to respond to the test items appropriately and hence could not participate in the study. Research Instrument The instrument used in this study was an achievement test in the form of a diagnostic test (a four-tier-multiple-choice diagnostic test).The diagnostic test based on functional group detection such as hydrocarbons (alkanes, alkenes, alkynes, and benzene), alkanols, alkanoic acids, aldehydes, ketones, and amides consisting of 17 items was adapted from (Adu-Gyamfi & Anim-Eduful, 2022) dubbed organic qualitative analysis diagnostic test (OQADT).The instrument, which was two-tier (content knowledge (A-tier) and open-ended (R-tier) originally, was modified to suit the current study.The modification was done in two phases.In the first phase, the open-ended part (reason-tier, R-tier) was developed by studying available alternative conceptions reported in the literature (Adu-Gyamfi & Anim-Eduful, 2022; Anim-Eduful & Adu-Gyamfi, 2021, 2022b).These alternative conceptions were used as distractors in the options part of the R-tier.At this point, the instrument had become a complete two-tier with four options for both the A-tier and R-tier.That is, content knowledge (A-tier) and explanation knowledge (R-tier) with four options under each tier.During the second phase, confidence ratings were incorporated at each tier making it a complete four-tier instrument; four-tier organic qualitative analysis test (FTOQAT).The intent of the four-tier multiplechoice diagnostic test was to help measure nature of students' alternative conceptions in OQA.The confidence ratings were incorporated to help measure the certainty level of students' answer selection.That is, to determine both students' correct content conceptions on OQA and whether their reasons were genuine and not guessing, likewise their incorrect responses.In all, the developed FTOQAT had 17 test items. Purpose of Study The purpose of this study was to investigate the nature of students' alternative conceptions using a four-tier diagnostic test instrument.That is, to determine whether students are able to segregate their mistakes resulting from lack of knowledge from mistakes due to genuine alternative conceptions or able to distinguish correct answers based on guessing from correct answers based on genuine understanding.Based on this purpose, the study sought to answer the question: What is the nature of students' alternative conception in OQA? 
Validity & Reliability of Research Instrument

To ensure the face validity of the four-tier diagnostic test instrument, it was shown to two experienced colleague chemistry teachers who were examiners, and to a science educator, for expert advice. Their input helped to fine-tune the instrument before it was pilot-tested. Thereafter, the instrument was pilot-tested with 69 SHS 3 chemistry students from two schools in the Abura-Asebu-Kwamankese District, a district in the Central Region of Ghana. Students in the pilot-tested schools had characteristics similar to those who participated in the main study. The purpose of the pilot-testing was to help determine the difficulty level of the test items and also to establish the reliability coefficient of the instrument. After the pilot-testing, four items (6, 9, 15, and 17) were deleted because they measured the same functional groups as other items. In all, 13 items remained after the deletion. Thereafter, the Kuder-Richardson 21 (KR-21) reliability coefficient was calculated to determine the internal consistency of the instrument. The instrument was reliable, as the calculated KR-21 value was .81.

Data Collection Procedure

Before the data collection, we had a brief discussion with the teachers teaching the third-year students to ascertain whether they had covered the concepts of OQA in organic chemistry. Thereafter, we briefed the students about the relevance of the study and the need to participate in it as they prepared to write their final examination conducted by WAEC. Permission was sought from the authorities of the participating schools for smooth data collection. This was to help have the data collected without any difficulties and also to ensure cooperation among participants. In all, we spent two weeks collecting data from 345 SHS 3 chemistry students selected from three schools in the Cape Coast Metropolis.

Data Processing & Analysis

Data collected on every item of the FTOQAT were analysed according to Caleon and Subramaniam (2010) using descriptive statistics (percentages, frequencies, standard deviations, and means). The answer tier and its corresponding reason tier were scored separately: '0' for each incorrect and '1' for each correct response. Again, a value of '1' was assigned when both tiers (A- and R-tier) were correct and '0' otherwise (Caleon & Subramaniam, 2010). Several variables were calculated from students' confidence ratings for both tiers, namely the mean values of students' confidence ratings for the answer tier, the reason tier, and the test items overall: the overall mean confidence (CF); the mean confidence when correct answers were provided (CFC); and the mean confidence when wrong answers were provided (CFW). The confidence discrimination quotient (CDQ) was calculated as CDQ = (CFC - CFW) / (standard deviation of confidence). The CDQ indicates whether students discriminate between what they know and what they do not know.
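As an illustration of the quantities just described, the following sketch computes the KR-21 coefficient and the per-item confidence variables (CF, CFC, CFW, CDQ). The scores and confidence ratings below are invented examples, not the study's data.

import numpy as np

def kr21(total_scores, n_items):
    # Kuder-Richardson 21 reliability from students' total scores on n_items dichotomous items
    s = np.asarray(total_scores, dtype=float)
    mean, var = s.mean(), s.var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - mean * (n_items - mean) / (n_items * var))

def confidence_variables(correct, confidence):
    # returns (CF, CFC, CFW, CDQ) for one item from 0/1 correctness and 1-6 confidence ratings
    c = np.asarray(correct, dtype=bool)
    conf = np.asarray(confidence, dtype=float)
    cf = conf.mean()                      # overall mean confidence
    cfc = conf[c].mean()                  # mean confidence when the answer is correct
    cfw = conf[~c].mean()                 # mean confidence when the answer is wrong
    cdq = (cfc - cfw) / conf.std(ddof=1)  # confidence discrimination quotient
    return cf, cfc, cfw, cdq

# Hypothetical pilot data: total scores of 10 students on the 13-item instrument
print("KR-21 =", round(kr21([11, 9, 12, 5, 8, 10, 7, 12, 6, 10], 13), 2))
# Hypothetical single-item data: correctness and 1-6 confidence ratings of 10 students
print(confidence_variables([1, 0, 0, 1, 0, 1, 0, 0, 1, 0],
                           [5, 4, 4, 3, 5, 6, 4, 3, 5, 4]))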
Confidence ratings of students' alternative conceptions were further classified as follows. A significant alternative conception refers to a particular option, or A-R combination of options, chosen by 10% of the sample above the percentage of students who would select that option or A-R combination by chance. Significant alternative conceptions were further categorized into two types: spurious and genuine.

1. A spurious alternative conception is a significant alternative conception expressed by students with low confidence ratings, below 3.50. It is due to students' lack of knowledge or to guessing.

2. A genuine alternative conception is a significant alternative conception expressed with confidence, associated with a mean confidence rating above 3.50. It indicates that students' alternative conceptions were due to a lack of understanding of the concepts (A-tier) and the application of wrong reasoning (R-tier). Genuine alternative conceptions were categorized further into two types: moderate and strong. A moderate alternative conception is a genuine alternative conception expressed with a medium level of mean confidence, between 3.50 and 4.00, whereas a strong alternative conception is a genuine alternative conception expressed with a high level of mean confidence, 4.00 and above.

Table 1 shows a summary of the categorization of students' alternative conceptions, as adopted from Caleon and Subramaniam (2010).

RESULTS & DISCUSSION

This study sought to investigate the nature of students' alternative conceptions in OQA using four-tier diagnostic test items. To achieve the purpose of the study, mean confidence ratings for the A-tier, the R-tier, and both tiers were calculated. The mean values of the confidence ratings for all tiers helped to answer the research question raised. Even though students' responses to the items measuring their content knowledge (A-tier) and explanation knowledge (R-tier) revealed that students harbored alternative conceptions, those alternative conceptions are not reported here, because students' alternative conceptions in OQA had already been reported in the authors' previous studies. Generally, all the alternative conceptions harbored by students in OQA in this study were significant; that is, the options for either the answer-tier or the reason-tier, or the answer-tier and reason-tier (A-R) combinations, were chosen by 10% of the student sample above the percentage of students who would have selected that particular option or A-R combination by chance. Of the 13 items on which students harbored significant alternative conceptions, only two items (3 and 8) were spurious (that is, had mean confidence values of less than 3.50), and the remaining 11 were genuine (mean values greater than 3.50). This implies that the alternative conceptions exhibited by students on the two items (3 and 8) were due to students' lack of knowledge of OQA. Although these alternative conceptions were significant, they were classified as spurious because they resulted from a lack of knowledge of the concepts, not from alternative conceptions existing within students' cognitive structures due to a lack of understanding. Such alternative conceptions could be subjected to conceptual change with an effective conceptual-change teaching strategy. The alternative conceptions expressed by students on the other eleven items were classified as genuine. Alternative conceptions classified as genuine indicate that those conceptions were due to a lack of understanding of the concepts and the application of wrong reasoning to a correctly answered A-tier. This implies that students' alternative conceptions in 11 items were due to students' lack of understanding of the concepts (Table 1).
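For reference, the categorization applied in this section can be expressed compactly as follows. The sketch assumes a four-option tier (chance rate 0.25) and approximates the confidence attached to an alternative conception by the mean confidence of the students who chose the incorrect option; the thresholds follow the text, and the response data are invented for illustration.

import numpy as np

def categorize(picked_option, confidence, chance_rate=0.25):
    # categorize the alternative conception carried by one distractor of one item
    picked = np.asarray(picked_option, dtype=bool)     # True if the student chose the distractor
    conf = np.asarray(confidence, dtype=float)
    share = picked.mean()
    if share < chance_rate + 0.10:                     # not chosen 10% above chance
        return "not significant"
    mean_conf = conf[picked].mean()
    if mean_conf < 3.50:
        return "significant, spurious (lack of knowledge / guessing)"
    if mean_conf < 4.00:
        return "significant, genuine - moderate"
    return "significant, genuine - strong"

# Hypothetical data: 20 students, whether each picked a given distractor, and their confidence ratings
picked = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
conf = [4, 5, 2, 4, 5, 3, 4, 4, 5, 2, 4, 3, 5, 4, 2, 4, 3, 5, 4, 2]
print(categorize(picked, conf))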
As seen in Table 1, three items (items 1, 2, and 4) of the eleven genuine alternative conceptions were strong alternative conceptions (M > 3.99). This implies that students had a mean confidence rating above 3.99 on those three items, while the eight moderate alternative conceptions had mean confidence ratings between 3.50 and 3.99. That students exhibited genuine alternative conceptions on eleven of the thirteen test items on OQA indicates that students' conceptual difficulties in learning OQA concepts were accompanied by alternative conceptions. This means that even when students correctly answered the content knowledge (A-tier), they selected wrong reasoning to justify their correct answers. This shows that students have little awareness of their conceptual difficulties, implying that they do not know that they do not understand the concepts of OQA.

It is worth noting that students' alternative conceptions in OQA were significant (M > 3.50), and could thus be deep-rooted within their cognitive structures. With students' alternative conceptions being significant in all the items, this could impede their further learning of chemistry concepts related to OQA. It can be seen in Table 1 that the nature of students' alternative conceptions was significant, as all the mean values for students' alternative conceptions were greater than 3.50.

The four-tier diagnostic test, with confidence ratings embedded in a two-tier (answer-tier and reason-tier) instrument, was able to measure how confident students truly were about their answer selections in both the A- and R-tiers. The confidence ratings for each of the 13 test items are summarized in Table 2. The average mean confidence (CF) for the A-tier and the R-tier was 3.75 and 3.66 respectively, whereas that for both tiers combined was 3.61. The mean CFC for the A-tier was 3.78, while the CFW was 3.70. These CFC and CFW values for the A-tier indicate what confidence ratings students assign to their selections, whether the selection is scored correctly or otherwise.

CFC and CFW values below 4.0 for any tier imply that students were unable to assign the highest confidence rating when a test item was answered correctly, and also failed to assign the lowest confidence rating when a test item was wrongly answered. As seen in Table 2, only three A-tier items (items 1, 2, and 3) had CFC values above 4.0, and the remaining ten items had values below 4.0. This implies that for those ten items, although students' scores were correct, they failed to assign the highest confidence rating to the certainty of their responses. This suggests that even when students conceptually understood the concepts and scored correctly, they still had low confidence in the accuracy of their responses. This is interesting, and it could be that students do not know that they conceptually understand OQA concepts. With regard to the three items, students confidently assigned the highest confidence ratings to correctly scored items.
On the other hand, three items (1, 5, and 11) had CFW value to be less than 4.0, and the remaining ten of the items value were above 4.0.This suggests that although students' scores were wrong, yet they failed to assign lowest possible confidence rating.This suggest that while students exhibit alternative conceptions in those items, they still assigned high confident rating indicating that students were oblivious of their difficulties in OQA (that is, students do not know that they do not conceptually understand the concepts in OQA).This makes students alternative conceptions to be due to their lack of understanding of the concepts but not due to students lack of knowledge. For the R-tier, only two items (1 and 2) had CFC values to be above 4.0, and the remaining eleven items had CFC values to be below 4.0.This seems to suggest that when students' scores in the explanation knowledge (R-tier) were wrong, they were unable to assign the lowest confidence rating but rather assigned high confidence rating.This seems to suggest that more of the students' correct responses were due to a lack of understanding of concepts but unlikely to be due to a lack of knowledge of the concepts. As seen in Table 2, CFW for four items (1, 2, 7, and 11) on the reason tier had their values to be above 4.0 and the remaining nine items had CFW values to be below 4.0.This implies that even when students were wrong in their explanation, continued not to assign the lowest possible confident ratings.Once again, students' responses in justifying their A-tier were purely alternative conceptions due to their lack of understanding of the concepts in OQA but not due to lack of knowledge.This means students express high confidence ratings for their wrong explanations when their [students] explanations presented are scientifically unaccepted. Furthermore, on both tiers, three items (1, 2, and 7) had CFC values to be higher than 4.0 and ten of the items CFC values were below 4.0.This implies that students assigned highest confidence ratings to their incorrect responses.Little over half (53.8%) of the students' responses had CFW value to be below 4.0.This indicates that students assigned lowest confidence ratings when their scores were incorrect.This seems to suggest that majority of the students were able to know that they had conceptual difficulties in OQA concepts.For instance, students' responses regarding A-tier had four items (4, 5, 6, and 11); that of R-tier had nine items (2, 4, 5, 6, 7, 8, 10, 11, and 13) and eight items (1, 2, 4, 5, 6, 8, 10, and 11) for both-tier had negative CDQ values.This implies that A-tier recorded the lowest number of items students showing highest discriminating power (were able to discriminate what they know from what they do not know).Students' responses to both-tier items recorded the lower discriminating power of eight items (were able to discriminate what they know from what they do not know) with students' response to the R-tier recording the highest number of items students showed lowest discriminating power (were unable to discriminate what they know from what they do not know).This seems to suggest that many students failed to discriminate well between what they know and what they do not know with more seen in the R-tier followed by both-tier with A-tier being the least. 
The findings of this study suggest that the alternative conceptions harbored by students in OQA were significant (spurious and genuine). This implies that students' alternative conceptions were largely due to a lack of understanding of the concepts and not due to a lack of knowledge. This study has not only confirmed previous studies (Adu-Gyamfi & Anim-Eduful, 2022; Anim-Eduful & Adu-Gyamfi, 2021, 2022b) showing that students have conceptual difficulties accompanied by alternative conceptions in OQA, but has also shown the nature of students' alternative conceptions to be significant and genuine, that is, due to their lack of understanding of the concepts. Students' alternative conceptions were not only due to a lack of understanding of the concepts but also due to the application of wrong reasoning in their quest to explain or provide reasons to justify their content knowledge. With a few of the students' alternative conceptions being genuine, this could perhaps also be due to students being oblivious of the difficult nature of organic functional group detection concepts. Again, a few of the students' alternative conceptions being spurious indicates that students' difficulties in learning OQA could be due to a lack of knowledge of the concepts or to guessing, and not necessarily due to their lack of understanding of the concepts.

This study has shown that not all incorrect answer responses from students are genuine alternative conceptions, as some could also be mistakes in students' selection of answer options. This was seen as students assigned high confidence ratings to their incorrect answers. In the same vein, this study has shown that not all correct answers from students are due to conceptual understanding of the concepts; some could be due to guessing that yields correct answers. This was envisaged in the study as students assigned the lowest confidence ratings when they had scored correctly. These findings seem to suggest that students were uncertain of their understanding of OQA concepts, as they failed to assign high confidence ratings to their selected options when their responses were correct. Not only were students oblivious to the difficult nature of OQA concepts, but they also doubted their understanding of the concepts, with little or no confidence. Students doubting their understanding could also be due to their weak content knowledge in OQA. Students gave low confidence ratings with correct responses, especially to both the A-tier and R-tier. This could be the reason students failed to assign the highest confidence rating even when their responses to the concepts were correct.
Furthermore, this study has shown that students exhibited low discrimination power on most of the items; that is, they were unable to discriminate well between what they know and what they do not know. Discrimination power was lowest for the explanation of concepts (R-tier), followed by both tiers taken together, whereas the answer tier (A-tier) showed high discrimination power. This implies that, for the A-tier, students were able to distinguish between what they know and what they do not know. A likely reason is that students were better at answering questions requiring declarative learning ('what is') than questions requiring the explanatory responses of explicative learning ('why'). The findings therefore suggest that students have more conceptual difficulty assigning reasons or explanations to a particular phenomenon than indicating what the phenomenon is, and this could be one reason students harbored alternative conceptions in OQA (Adu-Gyamfi & Anim-Eduful, 2022; Anim-Eduful & Adu-Gyamfi, 2021, 2022b). Furthermore, the study has shown that students' lack of conceptual understanding of OQA could influence their learning of chemical analysis not only within OQA but also in other chemistry topics such as organic reactions; this could be why students exhibit difficulties in understanding organic reactions and mechanisms across all functional groups. That students had greater conceptual difficulties in explanation knowledge (R-tier) than in content knowledge (A-tier) indicates that they struggle more to provide scientifically accepted explanations when justifying and explaining chemical analysis phenomena, and that they engage more readily in declarative learning ('what is') than in explicative learning ('why'). With regard to the effectiveness of the four-tier diagnostic test, the findings show that it is not only effective in investigating students' alternative conceptions (Caleon & Subramaniam, 2010; Sreenivasulu & Subramaniam, 2013) but also effective and efficient in investigating the nature of those alternative conceptions. The four-tier diagnostic test instrument was also effective in determining students' confidence ratings, that is, the certainty of their selected responses.
CONCLUSIONS This study sought to investigate the nature of students' alternative conceptions in OQA using a four-tier diagnostic test instrument. The study revealed the nature of students' alternative conceptions of OQA to be significant and genuine, indicating that these alternative conceptions were due to students' lack of understanding rather than a lack of knowledge of OQA concepts. On one hand, students assigned low confidence ratings when they scored items correctly; on the other hand, they assigned high confidence ratings when their scores were incorrect. These findings indicate that students were largely oblivious to the difficult nature of OQA. Again, the study revealed that students performed better on the A-tier than on the R-tier; that is, they scored more correct answers on the A-tier than on the R-tier and assigned higher confidence ratings to A-tier responses when they were correct than when they were incorrect. Additionally, students exhibited low discrimination power on most of the items, indicating an inability to discriminate between what they know and what they do not know on both the A-tier and the R-tier, but more so on the latter.
Consequently, this study has shown that not all alternative conceptions exhibited and harbored by students are due to a lack of understanding of the concepts; some could also be due to a lack of knowledge of the concepts. Students answer questions that require declarative learning ('what is') much better than questions that require the explanatory responses of explicative learning ('why'). In conclusion, the four-tier diagnostic test was found in this study to be not only effective in diagnosing students' alternative conceptions but also efficient in investigating the nature of students' alternative conceptions in OQA.
Recommendations
This study investigated the nature of SHS students' alternative conceptions in OQA using a four-tier diagnostic test instrument. However, it did not use an intervention to help improve students' conceptual understanding of OQA. Studies in OQA, including the present one, have employed either a mixed-method or a quantitative research approach using paper-based tests to diagnose students' alternative conceptions. However, no study on OQA has employed a qualitative research approach to obtain an in-depth understanding of students' perspectives without the limitation of writing. Hence, further studies should employ a qualitative research approach to explore students' conceptual understanding of OQA.
Table 1. Categorization of students' alternative conceptions.
Table 2. Proportion of students' relevant confidence variables per question (n=345).
CDQ values were calculated to help determine students' discrimination power, that is, how well they are able to discriminate what they know from what they do not know. Students exhibit low discriminating power when the CDQ value for an item is negative, implying that they fail to discriminate well between what they know and what they do not know. Conversely, students exhibit high discriminating power when the CDQ values for items are positive, implying that they discriminate well between what they know and what they do not know.
2023-09-26T15:02:12.361Z
2023-09-24T00:00:00.000
{ "year": 2023, "sha1": "6d5e57c9c424919d8dcebdac76d624eb3d57cde4", "oa_license": "CCBY", "oa_url": "https://www.aquademia-journal.com/download/nature-of-senior-high-school-chemistry-students-alternative-conceptions-in-organic-qualitative-13711.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "63f1d3fb0741eb657fd17fa5ceeeeb2437cabb23", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
254017499
pes2o/s2orc
v3-fos-license
Observation of e+e− → pp̄pn̄π− + c.c.: Using data taken at 29 center-of-mass energies between 4.16 and 4.70 GeV with the BESIII detector at the Beijing Electron Positron Collider, corresponding to a total integrated luminosity of approximately 18.8 fb−1, the process e+e− → pp̄pn̄π− + c.c. is observed for the first time with a statistical significance of 11.5σ. The average Born cross sections in the energy ranges of (4.160, 4.380) GeV, (4.400, 4.600) GeV and (4.610, 4.700) GeV are measured to be (21.5±5.7±1.2) fb, (46.3±10.6±2.5) fb and (59.0±9.4±3.2) fb, respectively, where the first uncertainties are statistical and the second are systematic. The line shapes of the p̄n and pp̄π− invariant mass spectra are consistent with phase space distributions, indicating that no hexaquark or di-baryon state is observed.
INTRODUCTION One of the most fundamental questions in hadron physics is related to the mechanism of color confinement in Quantum Chromodynamics (QCD). Color-singlet hadronic configurations of quarks and gluons can form bound states or resonances. Besides the well-known combinations of qq̄ for mesons and qqq for baryons, other combinations, such as qq̄g for hybrid states [1], multi-gluons for glueball states [2], qq̄qq̄ for tetraquark states [3], qqqqq̄ for pentaquark states [4] and hexaquark states (qqqqqq), are also allowed by QCD. Di-baryon and hexaquark states have been searched for in a range of nucleon-nucleon scattering reactions. Recently, an isoscalar resonant structure was observed in the isoscalar two-pion fusion process pn → dπ0π0 [5] by the WASA Collaboration and was later confirmed in the other two-pion fusion processes pn → dπ+π− [6] and pp → dπ+π0 [7], and in the two-pion non-fusion processes pn → ppπ0π− [8] and pn → pnπ0π0 [9]. This state was denoted d*(2380) following the convention used for nucleon excitations. These observations indicate the possibility of the existence of hexaquark and di-baryon configurations. In 2021, the BESIII Collaboration reported a search for hexaquark and di-baryon states in the invariant mass spectra of two baryons in the process e+e− → 2(pp̄) [10], and no significant signal was observed. Analyzing data sets corresponding to a total integrated luminosity of approximately 18.8 fb−1 taken at center-of-mass energies √s between 4.16 and 4.70 GeV with the BESIII detector, we present in this paper the first measurement of the cross section of the process e+e− → pp̄pn̄π− + c.c.. We search for the d*(2380) and other possible hexaquark or di-baryon states using the data samples with energies above 4.60 GeV, where the p̄n mass range around 2.4 GeV/c², in which the d*(2380) might contribute, is kinematically accessible. Throughout this paper, charge conjugation is always implied unless explicitly stated, as when discussing systematic uncertainties.
THE BESIII DETECTOR AND DATA SAMPLES The BESIII detector [11] records symmetric e+e− collisions provided by the BEPCII storage ring [12], which operates in the center-of-mass energy range from 2.0 to 4.95 GeV. BESIII has collected large data samples in this energy region [13]. The cylindrical core of the BESIII detector covers 93% of the full solid angle and consists of a helium-based multilayer drift chamber (MDC), a plastic scintillator time-of-flight system (TOF), and a CsI(Tl) electromagnetic calorimeter (EMC), which are all enclosed in a superconducting solenoidal magnet providing a 1.0 T magnetic field.
The solenoid is supported by an octagonal flux-return yoke with resistive plate counter muon identification modules interleaved with steel. The charged-particle momentum resolution at 1 GeV/c is 0.5%, and the specific energy loss (dE/dx) resolution is 6% for electrons from Bhabha scattering. The EMC measures photon energies with a resolution of 2.5% (5%) at 1 GeV in the barrel (end cap) region. The time resolution in the TOF barrel region is 68 ps, while that in the end cap region is 110 ps. The end cap TOF system was upgraded in 2015 using multi-gap resistive plate chamber technology, providing a time resolution of 60 ps [14]. The data sets were collected at 29 center-of-mass energies between 4.16 and 4.70 GeV. The nominal energies of the data sets from 4.16 to 4.60 GeV are measured by the di-muon process e+e− → (γ_ISR/FSR)µ+µ− [15,16], where the subscript ISR/FSR stands for the initial-state or final-state radiation process, respectively. The data sets from 4.61 to 4.70 GeV are calibrated by the process e+e− → Λ_c^+ Λ̄_c^- [17]. The integrated luminosity L_int is determined using large-angle Bhabha scattering events [17,18]. The total integrated luminosity of all data sets is approximately 18.8 fb−1. The response of the BESIII detector is modeled with Monte Carlo (MC) simulations using the software framework boost [19] based on geant4 [20], which includes the geometry and material description of the BESIII detector, the detector response and digitization models, as well as a database that keeps track of the running conditions and the detector performance. Large MC simulated event samples are used to optimize the selection criteria, evaluate the signal efficiency, and estimate background contributions. Inclusive MC simulation samples are generated at different center-of-mass energies to study potential background reactions. These samples consist of open charm processes, the ISR production of vector charmonium and charmonium-like states, and the continuum processes incorporated in kkmc [21]. The known decay modes are modeled with evtgen [22] using branching fractions taken from the Particle Data Group (PDG) [23], and the remaining unknown decays of the charmonium states are simulated with lundcharm [24]. Final-state radiation from charged final-state particles is incorporated with photos [25]. The signal MC simulation sample of e+e− → pp̄pn̄π− at each energy point is generated with the events being uniformly distributed in phase space.
Since the neutron cannot be well reconstructed with the BESIII detector, the signal process is identified via the mass recoiling against the reconstructed charged particles, defined as $M_{\rm rec}=\sqrt{(E_{e^+e^-}-E_{p\bar{p}p\pi^-})^2-|\vec{P}_{e^+e^-}-\vec{P}_{p\bar{p}p\pi^-}|^2}$, where E_{e+e−} and P_{e+e−} are the center-of-mass energy and the momentum of the e+e− system, respectively, and E_{pp̄pπ−} and P_{pp̄pπ−} are the total reconstructed energy and total momentum of the pp̄pπ− system, respectively. Events with M_rec greater than 0.8 GeV/c² are kept for further analysis. Studies based on the inclusive MC simulation samples [26] show that no peaking background events survive the selection criteria. To further suppress background events, two additional selection criteria are imposed on the accepted candidate events. First, the invariant mass M_{pπ−} of the reconstructed pπ− system is required to be outside the Λ signal region, i.e. |M_{pπ−} − 1.115| > 0.010 GeV/c², to remove possible background associated with Λ decays. Here, 1.115 GeV/c² is the known Λ mass [29], and 0.010 GeV/c² corresponds to about three times the mass resolution. Second, the invariant mass of the pp̄p system (M_{pp̄p}) must be less than 3.6 GeV/c², since the remaining neutron and pion must also be accommodated in the event. The M_rec distribution of the accepted candidates from the combined data sets after the above selection criteria is displayed in Fig. 1, where a significant neutron signal is observed. The signal yield is determined by a maximum likelihood fit to this distribution. In the fit, the signal is represented by the luminosity-weighted MC-simulated shape convolved with a Gaussian function, and the remaining background is described by a linear function. From the fit, the signal yield is determined to be 123 ± 14. The statistical significance of the signal is determined to be 11.5σ, evaluated from the likelihood-ratio statistic −2 ln(L_0/L_max), where L_max is the maximum likelihood of the nominal fit and L_0 is the likelihood of the fit without the signal component; the change in the number of degrees of freedom is 1. The neutron signal region is defined as M_rec ∈ (0.925, 0.968) GeV/c² and the corresponding sideband regions are defined as M_rec ∈ (0.857, 0.900) ∪ (0.990, 1.033) GeV/c². Figure 2 shows comparisons of the momentum and polar angle distributions of the neutron for the accepted candidate events between data and signal MC simulation samples, where the data distribution is from the combined data sets and the MC simulation distribution has been weighted by the signal yields in data. The invariant mass of any two or three particles, as well as the momentum and cosθ distributions of the other final-state particles, have also been examined. The agreement between data and MC simulation allows the detection efficiency to be determined with the signal MC simulation events generated uniformly in the five-body phase space. To search for hexaquark and di-baryon states, the p̄n invariant mass spectrum is examined. Figure 3 shows the pp̄π− and p̄n invariant mass spectra of the candidate events for the reaction e+e− → pp̄pn̄π−. In the fit to M_{p̄n}, the signal is represented by the luminosity-weighted phase space MC simulation shape and the remaining combinatorial background is described by a linear function. The goodness-of-fit is χ²/ndf = 2.10/2, where ndf is the number of degrees of freedom. Compared to the phase space hypothesis, no obvious structure is observed.
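The recoil-mass quantity defined above is a straightforward four-vector computation. The sketch below is a hypothetical illustration, not BESIII analysis code: the four-momenta of the reconstructed p, p̄, p and π− are invented, and a head-on collision with zero net initial momentum is assumed.

```python
# Minimal sketch (invented inputs): recoil mass of the undetected (anti)neutron,
# M_rec^2 = (E_cm - E_reco)^2 - |P_cm - P_reco|^2, four-vectors as (E, px, py, pz) in GeV.
import math

def recoil_mass(p4_initial, p4_reconstructed):
    dE = p4_initial[0] - p4_reconstructed[0]
    dp = [a - b for a, b in zip(p4_initial[1:], p4_reconstructed[1:])]
    m2 = dE**2 - sum(c**2 for c in dp)
    return math.sqrt(m2) if m2 > 0 else float("nan")

def add(*p4s):
    return [sum(c) for c in zip(*p4s)]

# e+e- system at sqrt(s) = 4.64 GeV, zero net momentum assumed.
p4_ee = [4.64, 0.0, 0.0, 0.0]
# Invented four-momenta of the reconstructed p, pbar, p and pi- (GeV).
p4_tracks = [
    [1.25, 0.40, -0.30, 0.70],   # p
    [1.10, -0.45, 0.20, -0.35],  # pbar
    [1.05, 0.10, 0.35, -0.25],   # p
    [0.30, -0.05, -0.20, 0.10],  # pi-
]
m_rec = recoil_mass(p4_ee, add(*p4_tracks))
print(f"M_rec = {m_rec:.3f} GeV/c^2")  # candidates kept if M_rec > 0.8 GeV/c^2
```

For the example numbers this returns M_rec of about 0.92 GeV/c², i.e. near the neutron mass, which is the kind of candidate that would populate the signal window (0.925, 0.968) GeV/c² or its lower sideband.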
AVERAGE CROSS SECTIONS In each data set, only a few events have been observed in the neutron signal region, with a statistical significance of less than 3σ. To obtain significant neutron signals, the data sets are therefore combined into three sub-samples in the energy ranges of (4.160, 4.380), (4.400, 4.600) and (4.610, 4.700) GeV for further analysis. The average observed cross section for e+e− → pp̄pn̄π− is calculated by $\sigma^{\rm obs}_{j}=N^{\rm sig}_{j}\big/\sum_{i}\mathcal{L}_{i}\,\epsilon_{i}$ (Eq. 2), where N^sig_j is the number of signal events from the j-th combined data set, L_i and ε_i are the integrated luminosity and efficiency of data set i, respectively, and i runs over the energy points in the j-th sub-sample. The detection efficiency is corrected by the PID and tracking efficiency correction factors, f_PID and f_trk, which are determined to be 0.92 and 0.98, respectively, by weighting the differences between data and MC simulation efficiencies in different momentum ranges. Inserting the numbers listed in Table 1 into Eq. 2 yields average observed cross sections of (19.4±5.1±1.0) fb, (42.8±9.8±2.3) fb and (54.2±8.6±2.9) fb for the three sub-samples, respectively, where the first uncertainties are statistical and the second are systematic. To measure the average Born cross section of e+e− → pp̄pn̄π−, a lineshape similar to that of e+e− → 2(pp̄) [10] is assumed to determine the ISR and vacuum polarization corrections. The obtained Born cross sections are then used as input in the generator and the cross section measurements are iterated with the updated detection efficiencies. This process is repeated until the (1+δγ)·ε values become stable at all energies, i.e. the difference of (1+δγ)·ε between the last two iterations is less than 4%. Figure 4 shows the obtained average Born cross sections in the defined sub-samples. The average Born cross sections are calculated with Eq. 3, and the results are (21.5±5.7±1.2) fb, (46.3±10.6±2.5) fb and (59.0±9.4±3.2) fb for the three sub-samples, respectively, where the first uncertainties are statistical and the second are systematic. Two different functions are used to compare the trend of the average Born cross section to a reaction where a similar behaviour is expected. The first is a simple five-body energy-dependent phase space lineshape [10,27] and the second is an exponential function [10,28]; both are shown in Figure 4. The exponential function has two free parameters, p_0 and p_1, and a threshold mass M_th = 3m_p + m_n + m_π−, where m_p, m_n, and m_π− are the known masses of the p, n, and π− taken from the PDG [29]. It is similar to the function used for the cross section lineshape of e+e− → 2(pp̄) in Ref. [10], as the two reactions are similar, with one of the p̄ exchanged for n̄π−. It should be noted, however, that the two functions in Figure 4 are not fit results but are drawn with arbitrary scale factors for comparison, since a quantitative fit is not possible with the limited statistics. The systematic uncertainties in the cross section measurements are discussed in the next section.
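As a numerical illustration of the averaging in Eqs. 2 and 3, the sketch below combines per-energy-point inputs into one sub-sample cross section. All per-point numbers (luminosities, efficiencies, signal yield, ISR and vacuum-polarization factors) are invented placeholders rather than the values of Table 1; only the correction factors f_PID = 0.92 and f_trk = 0.98 are taken from the text, and the Born-level structure (dividing out (1+δγ) and the vacuum-polarization factor) is our assumption about what Eq. 3 contains.

```python
# Minimal sketch (invented inputs): average cross section for one sub-sample,
# sigma_j = N_sig_j / sum_i [ L_i * eps_i * f_PID * f_trk * correction_i ],
# where correction_i = 1 for the observed cross section and
# correction_i = (1 + delta_gamma_i) * VP_i for the assumed Born-level form.
F_PID, F_TRK = 0.92, 0.98          # efficiency correction factors quoted in the text

# One dict per energy point in the sub-sample (all numbers are placeholders):
# luminosity in pb^-1, efficiency, ISR factor (1+delta_gamma), vacuum-polarization factor.
points = [
    {"lumi_pb": 1090.0, "eff": 0.082, "isr": 0.92, "vp": 1.05},
    {"lumi_pb": 526.7,  "eff": 0.085, "isr": 0.94, "vp": 1.05},
    {"lumi_pb": 3189.0, "eff": 0.088, "isr": 0.95, "vp": 1.05},
]
n_sig = 20.0                        # signal yield in this sub-sample (invented)

def average_xsec_fb(n_sig, points, born=True):
    denom = 0.0
    for p in points:
        factor = p["isr"] * p["vp"] if born else 1.0
        denom += p["lumi_pb"] * p["eff"] * F_PID * F_TRK * factor
    return n_sig / denom * 1e3      # pb^-1 denominator -> pb, converted to fb

print(f"observed: {average_xsec_fb(n_sig, points, born=False):.1f} fb")
print(f"Born:     {average_xsec_fb(n_sig, points, born=True):.1f} fb")
```

In the real analysis this calculation is iterated, because the efficiencies and (1+δγ) factors depend on the assumed lineshape, which is in turn updated with the measured cross sections.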
SYSTEMATIC UNCERTAINTY In the cross section measurements, the systematic uncertainties mainly come from the integrated luminosity, the tracking efficiency, the PID efficiency, the ISR correction, the M_rec fit, and the veto of background events associated with Λ decays. The integrated luminosity of each data set is measured with large-angle Bhabha scattering events, and the uncertainty of the measurement is 1.0% [18], dominated by the precision of the MC generator used for the efficiency correction. The tracking and PID efficiencies have been studied with high-purity control samples of J/ψ → pp̄π+π− and ψ(3686) → π+π−J/ψ, J/ψ → pp̄ decays [30,31]. The differences of the tracking and PID efficiencies between data and MC simulation in different transverse momentum and total momentum ranges are obtained separately. The averaged differences for the tracking (PID) efficiencies are corrected by the factors f_trk (f_PID) as mentioned in Sec. 4. The uncertainties of the tracking and PID efficiencies are reweighted by the p/p̄ and π+/π− momenta of the signal MC simulation events. The reweighted uncertainties for the tracking (PID) efficiencies, 0.1% (0.3%) per p, 0.1% (0.4%) per p̄, 1.0% (0.5%) per π+ and 0.8% (0.4%) per π−, are assigned as systematic uncertainties. Adding them linearly gives total systematic uncertainties due to the tracking and PID efficiencies of 1.1% and 1.6% for the process e+e− → pp̄pn̄π−, and 1.3% and 1.9% for the process e+e− → p̄pp̄nπ+, respectively. The input Born cross sections in the generator are iterated until the (1+δγ)·ε values converge. The largest difference of (1+δγ)·ε between the last two iterations over all energy points, 3.2%, is taken as the corresponding systematic uncertainty. Three different tests were performed to estimate the uncertainty associated with the M_rec fit: the fit range is increased or decreased by 5 MeV/c², the background shape is replaced with a second-order Chebychev polynomial function, and the signal shape is replaced with an MC simulation-derived shape convolved with a Gaussian function. The quadrature sum of these changes, 3.6%, is taken as the relevant uncertainty. The systematic uncertainty due to the veto of Λ background events is estimated by changing the Λ veto mass window from ±3σ to ±5σ, where σ is the invariant mass resolution, 3 MeV/c². The change of the measured cross section, 0.03%, is assigned as the uncertainty. Adding the above systematic uncertainties, summarized in Table 2, in quadrature yields total systematic uncertainties of 5.3% and 5.4% for the processes e+e− → pp̄pn̄π− and e+e− → p̄pp̄nπ+, respectively. The average systematic uncertainty, 5.35%, is taken as the total systematic uncertainty of the cross section measurement for the process e+e− → pp̄pn̄π− + c.c..
SUMMARY By using the data sets taken at center-of-mass energies between 4.16 and 4.70 GeV, the process e+e− → pp̄pn̄π− + c.c. has been observed for the first time with a statistical significance of 11.5σ. The average Born cross sections in the three energy ranges of (4.160, 4.380), (4.400, 4.600) and (4.610, 4.700) GeV are measured to be (21.5±5.7±1.2) fb, (46.3±10.6±2.5) fb and (59.0±9.4±3.2) fb, respectively, where the first uncertainties are statistical and the second systematic. The Born cross section close to threshold is larger than would be expected from five-body phase space. The lineshape of the average Born cross sections for the process e+e− → pp̄pn̄π− + c.c. shows a behaviour similar to that of the process e+e− → 2(pp̄). The shapes of the invariant-mass spectra of p̄n and pp̄π− are in good agreement with the phase-space distributions, indicating that no hexaquark or di-baryon state is observed with the current data sample size. The BESIII collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support.
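Returning to the systematic uncertainties quoted above, the quadrature combination is easy to verify; the short check below reproduces the 5.3% and 5.4% totals from the individual contributions listed in the text. The grouping into shared and per-mode terms is ours.

```python
# Quick check: total systematic uncertainty as the quadrature sum of the
# individual contributions quoted in the text (values in %).
from math import sqrt

common = [1.0, 3.2, 3.6, 0.03]                 # luminosity, ISR correction, M_rec fit, Lambda veto
per_mode = {
    "e+e- -> p pbar p nbar pi-": [1.1, 1.6],   # tracking, PID
    "e+e- -> pbar p pbar n pi+": [1.3, 1.9],
}
for mode, extra in per_mode.items():
    total = sqrt(sum(x * x for x in common + extra))
    print(f"{mode}: {total:.1f}%")             # prints ~5.3% and ~5.4%, as quoted
```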
2022-11-28T06:42:08.600Z
2022-11-24T00:00:00.000
{ "year": 2023, "sha1": "0bc46997145193030c1c5ffb0a53c6afa28e84c4", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1674-1137/acb6eb/pdf", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "cd6c30beaf7fab89908a6da5a22a23cc77e9fc26", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
271258994
pes2o/s2orc
v3-fos-license
Community Readiness in Emergency First Aid for Victims of Traffic Accidents. A traffic accident is an emergency, life-threatening condition that requires immediate help. It is important for everyone to be ready to give first aid quickly and precisely so that the chain of care for traffic accident victims can proceed properly and correctly. This research aims to determine the level of community readiness to give emergency first aid to victims of traffic accidents on Jalan Raya Gapura, Sumenep Regency. The study uses a descriptive design. The population and sample consist of the entire community living along Jalan Raya Gapura, Braji Village, Sumenep Regency, comprising 30 people, selected with a non-probability, total sampling technique. The research instrument was a multiple-choice questionnaire with 10 questions, and the data were processed by editing, coding, scoring and tabulating. Nearly half of the respondents were in the unprepared category (14; 47%), a third were in the quite ready category (10; 33%), and a small proportion were in the ready category (6; 20%). Based on these results, most of the community on Jalan Raya Gapura, Sumenep Regency, showed a low level of readiness for emergency first aid for victims of traffic accidents.
INTRODUCTION An accident is an emergency condition that can happen to anyone, anytime, and anywhere, and the victim needs quick or immediate help because the condition is life threatening [1]. A traffic accident is an unwanted event that occurs when a vehicle collides with another object; it can injure the victim and even lead to death [2]. First aid is therefore the action taken for victims of traffic accidents to prevent more severe injuries before they are treated directly by medical personnel. The attitude the community must show is responsiveness when a traffic accident occurs; in addition, community members acting as first helpers must be able to help appropriately so as not to worsen the victim's condition [3]. However, not many people understand first aid for accidents. In 2020, the OECD/WHO recorded around 1.25 million deaths and 20-50 million injuries occurring annually due to traffic accidents, of which 90% of cases occur in middle and low income countries [4]. The Central Bureau of Statistics of the Republic of Indonesia (BPS) recorded 116,411 accident cases in Indonesia in 2019, in which 25,671 victims died, 12,475 suffered serious injuries, and 137,342 suffered minor injuries [5]. In 2021 the number of traffic accidents recorded in Indonesia was 5,350 cases, in which 452 people died, 6,390 suffered minor injuries, and 6 suffered serious injuries, with material losses of Rp.
2,393,687.00 [6]. The Sumenep Police Headquarters (Head of the Traffic Accident Unit of the Sumenep Police) recorded 322 traffic accidents in Sumenep Regency in 2022, in which 63 people died, 13 people were seriously injured, and 440 people were slightly injured, with losses of IDR 1,152,850,000. As for traffic accidents on Jalan Raya Gapura, Sumenep Regency, in 2022 there were 12 accidents, in which 3 people died and 140 people were lightly injured (Head of the Traffic Accident Unit of the Sumenep Police). Traffic accidents commonly cause musculoskeletal injuries, conditions that can interfere with the function of ligaments, tendons, muscles, bones, joints, and even nerves. Treatment or help is therefore needed quickly and precisely; if handled incorrectly or inaccurately, the injury can lead to further complications such as nerve and blood vessel damage, infection, and further soft tissue damage [7]. Errors in first aid are related not only to technical matters but also to the accuracy of action and the speed or readiness of help, which together determine the level of success of first aid for traffic accident victims. Incorrect first aid will cause further injury to victims and can increase difficulties during subsequent treatment at the hospital. In addition, mistakes during initial aid can injure the victim's cervical spine, so that the possibility of death is even greater. So far, not many people have realized how important the initial rescue of traffic accident victims is in terms of the initial handling time. In order for the chain of handling traffic accident victims to run properly and correctly, every member of the public should have knowledge of and readiness for handling emergencies. Community knowledge of first aid, and of the importance of readiness and accuracy in responding to traffic accidents, is very important for everyone to have; the researchers were therefore interested in further researching the level of community readiness in emergency first aid for traffic accidents on Jalan Raya Gapura, Sumenep Regency.
METHOD This study uses a descriptive research design. The variable of this research is the community's readiness for emergency first aid for victims of traffic accidents on Jalan Raya Gapura, Sumenep Regency. All members of the population were taken as samples, namely 30 people who live along the road on Jalan Raya Gapura, Sumenep Regency, with a non-probability, total sampling technique. Table 1 shows that, based on gender, all respondents were male (100%) and none were female (0%). Table 3 shows that, based on their last education, a small proportion had primary and junior secondary education (3%), and almost half had senior high school and undergraduate education (47%). Table 4 shows that, based on occupation, a small number did not work (3%), a small number were farmers (13%), most worked as entrepreneurs (53%), and almost a third worked as civil servants/military/police (30%). Table 5 shows the readiness of the community in first aid for traffic accident victims: almost half of the respondents were in the unprepared category (47%), a third were in the quite ready category (33%), and only a small portion were in the ready category (20%).
The results of this study indicate that, regarding the community's readiness to give first aid to traffic accident victims on Jalan Raya Gapura, Braji Village, Sumenep Regency, most of the respondents are categorized as unprepared. From the questionnaire results, most people's attitude when giving first aid in the event of a traffic accident is panic and even fear of helping accident victims; some simply help without first paying attention to the victim's condition and provide assistance that does not match the condition the victim is experiencing. Errors in performing first aid are related not only to technical matters but also to the accuracy of action and the speed or readiness of help, which determine the level of success of first aid for traffic accident victims; assistance must be given not just quickly but also precisely, so as not to add more serious injuries to victims and so that the victim handling chain can run well. Many victims of traffic accidents do not get proper first aid because of the low level of public knowledge about first aid. Knowledge has a big influence on whether someone acts well and correctly, and many factors can influence one's knowledge, such as educational and sociocultural factors [8,9]. The results of this study also indicate that most people already understand the next steps to be taken after providing first aid at the scene, namely evacuating or referring victims to health care facilities such as health centers and hospitals. Almost all respondents already understood the purpose of carrying out an evacuation, namely to provide more intensive or more competent care for victims. Evacuation, or the transfer of victims, is a method used to move victims to a safer place; moving the victim helps the process of handling the victim, and the evacuation must not add new injuries. The principles of victim evacuation must be observed, for example that a victim is referred only when in a stable condition and without causing any new injuries [10].
RESULTS Most of the people along Jalan Raya Gapura, Braji Village, Sumenep Regency, are less prepared to provide emergency first aid for traffic accident victims. Nurses can empower the community to improve first aid skills for accident victims. The research instrument used a multiple-choice questionnaire; validity and reliability tests were carried out for this instrument, and 10 of the 17 questions met the requirements. Data analysis uses percentage calculations. The research was conducted on February 17, 2023.
Table 2. Characteristics of respondents based on age.
Table 3. Characteristics of respondents based on recent education.
Table 4. Characteristics of respondents based on work.
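The percentage-based categorization described in the abstract and method can be sketched as follows. The scoring rule (one point per correct item) and the category cutoffs used below are assumptions for illustration, not thresholds reported by the study, and the respondent scores are invented.

```python
# Minimal sketch (assumed scoring and cutoffs): tabulating readiness categories from
# questionnaire scores. Each respondent answers 10 multiple-choice items scored 0/1;
# the percentage thresholds below are illustrative, not the study's cutoffs.
from collections import Counter

def categorize(correct_answers, n_items=10):
    pct = 100.0 * correct_answers / n_items
    if pct >= 76:
        return "ready"
    if pct >= 56:
        return "quite ready"
    return "unprepared"

# Invented scores for 30 respondents (number of correct answers out of 10).
scores = [3, 4, 5, 2, 8, 6, 7, 4, 5, 9, 3, 6, 2, 5, 7, 4, 8, 3, 6, 5,
          2, 4, 9, 3, 6, 5, 4, 7, 2, 5]
counts = Counter(categorize(s) for s in scores)
for category in ("unprepared", "quite ready", "ready"):
    n = counts.get(category, 0)
    print(f"{category:>12}: {n:2d} ({100.0 * n / len(scores):.0f}%)")
```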
2024-07-18T15:07:18.888Z
2023-12-21T00:00:00.000
{ "year": 2023, "sha1": "a406903340d35fda89feb7e7461fbf21ab2ae925", "oa_license": "CCBYSA", "oa_url": "https://doi.org/10.54832/phj.v5i2.497", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "1845bb44381cc9613fded77e742bdd2e53d69ed3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
148622198
pes2o/s2orc
v3-fos-license
Behavioral addictions A social science perspective : This conceptual review and analysis discusses the development of the addiction concept, a development that entails a marked expansion of what is considered to constitute an addiction. During the last decade, following the introduction of DSM-5 (the Diagnostic and Statistical Manual of Mental Disorders, APA 2013) and the opening up of new terrains, many bad habits and behavioral problems are in the process of being transferred to and renamed as addictions, endorsing a continued or rather reinforced medicalization of social problems. In this paper a social science viewpoint is suggested as a more appropriate perspective on these matters. 1 Behavioral addiction or behavioral problem? This paper presents a discussion on the debated concept of behavioral addiction. Building on, or rather copied from, indicators of substance use problems in such diagnostic 'bibles' as DSM, the development and use of the concept has mainly taken place within a medical science context. In this paper I suggest a social science model, i.e., a return to such ideas as habituation, aiming at an alternative interpretation of behavioral addiction and the problems related to the concept. For a long time, addiction was conceptually confined to excessive use of alcohol and substances such as opioids, and motivated by the substance itself as well as by consequences of such use. In recent decades, however, we have seen an accelerating expansion of the sphere for which addiction is seen as an appropriate conceptual framework. Some relate this to the consumer society of our times (Keane 2017), and the construction of disordered identities within expanding premises of consumption, such as gamblers, bulimics and Internet addicts (Reith 2004). Many of these 'new' addictions are related to life online. These perceived problems-"behavioral addictions"-refer to online gambling as well as, for example, gaming or networking. Public health authorities in Europe have not yet addressed behavioral addictions as a public health concern, other than for gambling, but private clinics have proliferated to meet the market for advice and treatment (King et al., 2010). In China, for example, boot camps work to "wean off" excessive online habits among teenagers (King et al., 2011). Importantly, these clinics run interventions directed at presumed "internet addicts", despite the fact that there is little in-depth knowledge about the actual nature of the problems, how they manifest among different individuals and groups, and how they might be "cured". The concept of addiction The meanings associated with the addiction concept are varied, and though there is indeed a common interpretation, every understanding has its unique features, reflecting the era and the surrounding society. Up until the late 1900s, addiction was conceptually confined to the excessive use of alcohol and other habit-forming substances. Subsequently, addiction vs. dependence are fairly 'modern' concepts in that before the 1900s scholars discussed the roots of repeated drunkenness not in terms of disease, but more as a consequence of inherent weakness of will. There were some exceptions, such as Thomas Trotter, who in 1804 described the habit of drunkenness as a disease, and Magnus Huss, whose publication, Alcoholismus chronicus (Chronic Alcohol Disease), came out in 1851. 
The intention, for Trotter and Huss as well as for later disease proponents, was to dismantle the moral condemnation of the drunkard and to reduce the trailing stigma in order to facilitate a change of habits. Later on, Harry Levine (1978) claimed that "addiction" as well as the related idea of "loss of control" were conceptually constructed in western society as a consequence of an emerging modernity entailing the increasing significance of self-control. Inebriety and drunkenness continued to be the prime concepts also in the 1930s, when the Alcoholics Anonymous (AA) 12 steps program was formulated. The AA disease model was not presented as scholarly; AA was and is a laymen's mutual help fellowship (Mäkelä et al., 1996) aiming at sobriety via a suggested program and an explanatory model using metaphors and the sharing of examples-their life stories. Inspired by the AA program and work, E.M. Jellinek later on published The Disease Concept of Alcoholism (1960), wherein a disease model claiming scientific support was presented. In Jellinek's model, loss of control, craving, withdrawal, and tolerance were key concepts, and in tandem with the growing success of this disease model, the World health Organization (WHO) advocated a conceptual change from addiction and habituation to dependence in order to cover a broader set of substances, such as cannabis, nicotine and cocaine (WHO, 1964). Until fairly recently, thus, dependence has been the concept used in the two major classifications of psychiatric disorders: the International Classification of Diseases (ICD), in which dependence was first included in the ninth version from 1977, 1 and the Diagnostic and Statistical Manual of Mental Disorders (DSM), where dependence was first introduced in 1987, but where changes in the new DSM-5 (APA, 2013) to include the category of addiction entailed a return from exile of the addiction concept. DSM-5 constituted an important shift in that "substance-related and addictive abuse and dependency" was relabeled "substance-related and addictive disorders", which comprise not only substance-based addictions, but also gambling, i.e., a "behavioral addiction". In addition to gambling, Internet Gaming Disorder was placed in an appendix with a call for more research aiming at future inclusion in the manual. Another step in the same direction was taken in a disputed ICD proposal for the 2018 11 th edition of introducing a new "gaming disorder" category (Aarseth et al., 2016). One reason behind the shift in DSM-5 was the risk of confusion over different definitions of dependence (O'Brien 2011), as physical dependence may well develop in connection with prescribed medications such as beta-blockers, entailing tolerance and withdrawal but no harmful consequences. The shift in DSM-5 can be interpreted as a change from a focus on physical dependence to a focus on harmful consequences as a core part of the addiction concept. The proponents of expanding the addiction concept to comprise behavioral addictions are based in several and partly opposing theoretical traditions. Claims vary from psychological perspectives underlining that any type of activity that is perceived as pleasurable can develop into an addiction (Orford, 2001;Peele, 1985), to neurophysiology, where addiction is related to the brain's reward system (Potenza, 2006). The growing importance and relative dominance of a neurobiological perspective on addiction (Keane, op.cit.) 
is in itself part of a more general medicalization process, incorporating also a change in 'common knowledge' or the layman perspective on these issues. In fact in Wikipedia, 2 under the heading "addiction", we find it described as "…a brain disorder characterized by compulsive engagement in rewarding stimuli …. The term behavioral addiction correctly refers to a compulsion to engage in a natural reward," a rather bold statement giving the impression of a conceptual consensus regarding both physical and behavioural addictions. Chamberlain et al. claim that "Structurally, internet gaming addiction has been linked with reduced grey matter density in inferior frontal gyrus, cingulate, insula, precuneus, and hippocampus; along with lower white matter density in related regions…" (2016, p.847). But what does this mean? The problems related to claiming addiction to be a brain disease and a good object for neuroscience research is elucidated in a recent piece in The Lancet (personal view, Yücel et al., 2017), of which one plausible interpretation is that the fairly large group of authors display evident difficulties in reaching any conclusions that all could agree on. Similarities between behavioral and substance addiction are claimed to be proven with "growing evidence", but from studies comprising 10-17 subjects (and corresponding numbers in control groups). The power of these studies is not impressive and even if it was, there are still no tenable explanations of, e.g., MRI findings. A circumstance that supports restrictive interpretations (Kalant, 2009) of the meaningfulness of conceptualizing addiction as a neurobiological disease is the fact that even though the theory has achieved considerable scientific prominence, its clinical influence has so far been limited, and it is still far from fulfilling the promises originally articulated by its proponents. In fact, after 20 years of very expensive research there is still little evidence of progress and no tenable explanations of mechanisms involved (Midanek, 2012). Behavioral addiction, a Trojan horse? Today, excessive involvement in any type of consumption or activity can be considered an addiction, i.e., a psychiatric disorder (Billieux et al., 2015). Research in this field has grown exceedingly; the number of behavioural addiction papers was tenfold in 2014 compared with 1990 (Ibid.), and similarly, the number of Internet Addiction papers grew even more steeply between 2000 and 2013 . There are good reasons to believe that the notion of behavioral addictions will play an increasingly important role in terms of the number of people that will be considered to be suffering from them, as diagnosed by professionals as well as identified by their own designation. Against the background of such a scenario it is of utmost importance to know whether and how behavioral addictions deviate from other types of addictions, along with their unique manifestations and prognosis over time. Much research on Internet and behavioral addictions has been based on skewed or convenience samples (Ibid.); in the case of Internet addiction, for example, the research has been biased towards high frequency users of the Internet. 
The emergence of the concept of behavioral addiction as well as of Internet addiction coincides with a more general process of medicalization, whereby a variety of social problems are being defined as medical-as illness, disorder, or pathology, along with processes of promoting the establishment and evaluation of interventions into evidence-based practices. In this connection it has been suggested that the evidence-based-practice movement is itself a "medicalization engine" (Bergmark, 2014), establishing a medical conceptualization through its emphasis on standardized assessment and diagnosis. In my view, however, the discussion and research on behavioral addictions is still mainly indiscriminating when it comes to how the concept is defined, measured and judged. Block (2008, p.306) (and for most problems, frequencies "too low to conduct comparisons", p.9) of respondents confirming experiences of behavioral addictions report on having sought help for the problem. In his recent work on addiction and choice, Heather (2016) suggests that the very core of addiction is to be found in "the struggle to change a way of behaving that, implicitly, one knows to be harmful but cannot easily shake off" (p.12). 3 Two U.S. national surveys were used in another study analyzing lifetime and past-year DSM-lV-diagnoses of and treatment-seeking for pathological gambling (Slutske, 2006); also in this study, proofs of persistence are scarce. Only a minority of respondents reported excessive gambling episodes lasting more than one year, and few (7-12%) had sought help from healthcare or 12 step groups. Similarly, in a study of Canadian gamblers (Suurvali et al., 2008) only a small fraction (6%) of those with lifetime gambling problems at any level had ever used treatment services, including self-help materials. The SWELOG study, a longitudinal study of problematic gambling in Sweden, has produced results that are in line with this; in an eleven-year follow-up of problem gamblers, 24% were considered as at-risk gamblers, 7% as problem gamblers and only 6% as "probable pathological implies that extensive online gaming may be characterized as transitory rather than as persistent. In 2002, Salguero and colleagues published an early attempt to define and demarcate addiction to computer games. They concluded that, when given the type of adapted instrument for measurement of substance disorders that nowadays is the standard tool also for behavioral addictions, data pointed at good consistency. Hence, a number of problems related to excessive gaming were identified and the authors conclude that gaming for some is a behavior resembling "dependence". On the other hand, they add that for competitive play, responses might reflect competition rather than pathology and they also raise the need for proofs of persistence. Yet, as proofs of persistence are rare to find, results (Ibid.) should not be interpreted as support for the addiction track. There are some exceptions; Gentile et al. found in a study of "pathological video game use" (2011) that 84% of boys initially labeled as "pathological gamers" remained in this category two years later. This group also displayed more psychiatric symptoms (depression, anxiety, social phobias) than others. Hormes and colleagues (2014) conclude, drawing on a cross-sectional survey with U.S. undergraduate students, that use of social networking sites is potentially addictive. 
Craving for Facebook was common in this study and almost 10% of respondents met the criteria used for disordered online social networking use. As in other studies, modified tests from the alcohol/substance field were used and significant relations were found between networking addiction, Internet addiction and problem drinking. Also, psychological problems were reported on significantly more often in the addicted group, indicating the option of interpreting the excessive networking as coping, i.e., a secondary problem. The fact that measures of behavioral addiction are retrieved or "translated" from the efforts to measure problem drinking/substance use is rarely problematized or questioned. Some seem not to make any distinction at all between Internet use/addiction and substance use/addiction, in using wordings such as "other types of substance use" for behavioral addictions (Baggio et al., 2017). When some researchers state that "the diagnostic criteria for pathological gambling is similar to substance dependence…" (Johansson & Götestam, 2004), the reader needs to be reminded that there are no objective observations underpinning the logic for such similarities, but that the basis is the use of the same questionnaires, only adapted to the specific addiction at hand for study (see e.g. Beard et al., 2001). Starcevic (2016), who focuses on repetitive and problematic behavior and poor impulse control, warns against simplification of the addiction concept and foresees that unless further explored more thoroughly we will see an "…uncontrolled expansion of the catalogue of behavioural addictions, drastic lowering of the diagnostic threshold and spurious epidemics of all sorts…" (p.724). He points to disparities between traditional addictions and behavioral addictions when it comes to the interpretation of excessive use, preoccupation, tolerance and abstinence, while Sellman suggests a solution to this problem by launching the idea of the concept "behavioural health disorders" (2016 p. 806), claiming that this would solve these problems. Griffiths (2005), on the other hand, assumes tolerance and withdrawal symptoms to be mandatory for the labeling of anything as addiction. Keane (op.cit.) argues that it would be relevant to redefine addiction from being the problem of a stigmatized minority to something that affects the majority; not originating from problem areas but instead from activities "not only morally neutral but positively valued and encouraged" (p.373). Adding to the confusion, Wei et al. (2016) describe in a commentary that 91.8% of teenagers in Singapore seeking treatment for excessive computer gaming were gaming daily, hence the 8.2% of the teenagers that obviously played computer games less often than daily yet were treated for gaming problems. A significant proportion of the treatment seekers had previous conduct disorder (32%) and mental health diagnoses (42%), indicating that there were other problems in the teenagers' lives leading up to excessive gaming. One thing is certain: behavioral addictions carry with them infinite possibilities of establishing additional addictions, but they also carry with them a limitation of "marking" special substances as more dangerous than other things, so that the concept of addiction is in movement to another "explanatory field" where the substance no longer is the focal point. This Trojan horse can easily be transformed into a Pandora's box, which opened once, cannot be closed again; new addictions will continue to pop up. 
And this question remains: what will the consequences be of letting behavioral addictions into the realm of medical issues? Concepts matter! Revitalising a social science perspective on addiction The lack of sound theoretical underpinnings for the inclusion of behavioral addictions into the catalog of psychiatric diagnoses has been pointed out also by other researchers (Billieux et al., op.cit.), who emphasize the option of considering the signs of behavioral addiction as coping strategies explained by an underlying disorder, such as depression or problems of impulse control (Kardefeldt-Winther et al. 2017). Too, there is certainly no consensus on the expanded addiction concept. Warnings have come from within the DSM-lV task force (Frances 2010;Frances & Widiger, 2012), but also other agents (Pickersgill, 2014) Instead of seriously and intently investigating the concept of behavioral addiction and its implications, most studies in this field are confirmative rather than explorative . Lack of theoretical specificity for the new disorders in parallel with a reliance on using criteria for already existing disorders to identify new disorders, does imply that the measurements used to explore new behavioral addictions cannot account for the potentially unique aspects of the problem behaviors. For example, as King and Delfabbro (2014) write, it is unclear whether preoccupation with video games is problematic in the same way as preoccupation with gambling, since the outcome and progress of video gaming is determined much more by a player's choices and inputs to the game. Such unique aspects are neglected when traditional addiction criteria are used to define the boundaries of the problem behavior. Therefore, even though behavioral addictions are being paid increased attention in research, there is a persistent lack of knowledge about how these addictions might be expressed if identification of the problem behavior was not constrained by traditional addiction criteria based in a biomedical tradition (Kardefelt-Winther, 2015). Furthermore, although a number of researchers (e.g., Griffiths, 1996;Marks, 1990;Petry, 2006) might be correct in their claims that behavioral addictions share certain features with substance addictions, it is obvious that there is also a distinct difference between a substance addiction and a behavioral addiction, the former identified as an addiction primarily due to specific characteristics of a limited set of substances, presumed to be capable of enslaving those who consume them due to inherent dependence generating qualities of the substance. Still, in a review of Internet addiction studies Stensson, 2015) we saw that on the one hand the number of studies has increased exponentially in the new millennium, while on the other hand, there was a lack of longitudinal studies in this field as well as a domination of confirmatory studies at the expense of critical discussions of discourses and concepts. The question of whether one may have a behavioral addiction can at present not be given an explicit answer that everyone would agree on. Basically, any attempt to answer the question is dependent on how addiction is defined. As a consequence of the lack of consensus, I find it hard to identify any diagnostic system or theoretical framework that can be seen as fully satisfactory in terms of providing a definition that also contains an explanatory mechanism for the causal processes. 
However, this positioning does not imply that we do not recognize the presence of problems in relation to the phenomena that are designated as behavioral addictions, but at the same time there is also a clear tendency to 'over-medicalization' in the professional as well as in the lay discourse. In a previous study (Bergmark & Bergmark, 2009), we analyzed available research on addiction claims of frequent use of MMORP games. Results pointed at substantial problems in connection with attempts to establish consistent conceptual frameworks underpinning an addiction diagnosis for individuals with frequent use and problems associated with that use. It is, similarly, important to understand that such a conclusion does not exclude the presence of problems that can be related to, e.g., frequent gaming; problems do indeed occur for individuals due to their gaming activities, but it remains doubtful whether or not such problems should be the basis for an addiction diagnosis. A useful way forward would be to try to understand the particular concerns that have developed in society around problem behaviors, how these problems typically manifest and whether they are persistent. If certain activities have a tendency to lead to more problems, or exacerbate existing problems, why is that the case? For example, Sussman et al. (2011) argue that availability is the most important factor in developing addictive behavior. This circumstance is well in line with the general perspective on the main determinants for classic addictions such as to alcohol and to drugs (Babor et al., 2010). However, it is as yet unknown whether Internet availability impacts on the risk of developing a behavioral addiction to certain online activities, or whether the risk rather depends on other underlying problems. Considering the fact that, e.g., almost all Swedes have access to the Internet daily, and considering the fact of unlimited and instant access, this could be formulated as a new burden for the individual in contemporary (Swedish) society. Many features of the Internet are both enticing and rewarding, which might contribute to the development of problems, but also exemplifies the challenges of distinguishing a healthy fascination from harmful behavior and brain disease in a society where the Internet is ubiquitous. A fruitful approach to studying potential problem behaviors with such challenging conceptualization might be to abandon the idea of addiction as a specific delimited phenomenon. Rather, we might normalize it by pointing to the fact that habits of the heart, activities central to one's way of life, may be hard to change even if the activity brings on problems (Fingarette 1988). Such a perspective can be seen as congruent with a long list of activities that are currently called addictions. Along the same line of normalization is the approach developed by the Alcohol Research Group in Berkeley during the 1960s for alcohol problems: "Our use of the term problem drinker here instead of the term alcoholic is not accidental. We wish to avoid getting into the question, 'what is a real alcoholic', or 'does the person have the disease called alcoholism?' We take the point of view that any problemconnected fairly closely with drinkingconstitutes a problem." (Knupfer 1967, p.975). 
Such a perspective would be useful also for Internet-related problem behaviors, since it is unlikely that researchers will reach a consensus on whether such behavioral addictions constitute 'real' addictions or not; this is largely a semantic question that depends on the definition of addiction. Thus, we might work within a framework of behavioral addictions by considering the problems that occur as consequences of persistent and excessive Internet use, but without treating it as a specific delimited phenomenon in the manner of a mental disorder. It is unclear how the biomedical discourse of addiction shapes our contemporary understanding of Internet addiction and other proposed behavioral addictions; this matters, as it affects how society perceives and treats people who spend a lot of time online. There is also the wider question regarding the extent to which the identification of behavioral addictions is related to a more pervasive change and widening of the medicalization processes that researchers have identified (e.g., Clarke, 2003; Conrad & Schneider, 1992; Conrad & Waggoner, 2012). These questions are relevant not only in order to understand the potential risks of increased Internet use in society, but also to heed Allen Frances' (op.cit.) warning that the changes to the DSM might result in over-diagnosis of unproblematic behaviors.

Acknowledgements and disclosure statement

A previous version of this paper was presented at the 43rd Annual Alcohol Epidemiology Symposium of the Kettil Bruun Society. There is no conflict of interest.
Scheme of a Derivation of Collapse from Quantum Dynamics II

Is wave function collapse a prediction of the Schrödinger equation? This unusual problem is explored in an enlarged framework of interpretation, where quantum dynamics is considered exact and its interpretation is extended to include local entanglement of two systems, including a macroscopic one. This property of local entanglement, which results directly from the Schrödinger equation but is unrelated with observables, is measured by local probabilities, fundamentally distinct from quantum probabilities and evolving nonlinearly. When applied to a macroscopic system and the fluctuations in its environment, local entanglement can also inject a formerly ignored species of incoherence into the quantum state of this system. When applied to a quantum measurement, the conjunction of these two effects suggests a self-consistent mechanism of collapse, which would directly derive from the Schrödinger equation. (This work develops and improves significantly a previously circulated version with the same title [23])

Two momentous papers by Schrödinger [1] and by Einstein, Podolsky and Rosen [2], both published in 1935, left a lasting acceptance that the uniqueness of measurement data would be inconsistent with the Schrödinger equation of evolution. This problem has remained a matter of worry since then [3] and is still a subject of much research [4]. It has also become a major theme in the philosophy of science [5]. Quantum physics itself nevertheless made outstanding progress in the meantime. Its laws always provided undeniable foundations for these developments, but the problem of their agreement with a unique macroscopic Reality did not receive a universally agreed answer. One proposes here a new approach to this problem, according to which the laws of quantum mechanics would be self-consistent and predict wave function collapse as one of their consequences. No revision of the quantum laws themselves would be needed for reaching this result, but the orthodox interpretation, expressed in classical books [6,7] and used in most textbooks, would be revised significantly. This revision would not be a rejection of the standard interpretation, however, but a broadening making use of a few consequences of the Schrödinger equation, which are still partly conjectural but shed a remarkable new light on the phenomenon of collapse.
A pattern for a wider interpretation Many thorough experiments during the last thirty years or so, led to an essential empirical conclusion according to which "wave function collapse" is strictly restricted to macroscopic systems and is never observed in microscopic ones [8]. As a direct consequence and since these systems are never in a pure state, one will dismiss here the usual name of this phenomenon and simply call it "collapse". Collapse is undoubtedly a physical phenomenon and, moreover, the most frequently observed one since every experiment in quantum physics relies on its systematic and multiple occurrences in experiments. The circumstances under which it happens are well known but one will recall them first here for definiteness: A quantum measurement is concerned with a microscopic system A and is intended to measure the value of an observable in it. One usually describes it in a case where the initial state of A is expressed by a state vector, which is a superposition of eigenvectors of this observable: This system A interacts with a measuring system B, which is always macroscopic. Collapse consists then in the observed fact that a state characterizing a unique value of the measured observable, associated with one of the state vectors k , comes out from the measurement. Various results occur randomly when the same measurement is performed many times with the same initial state, and the observed frequencies are in perfect accordance with Born's probability law A particularly intriguing aspect of collapse is the fact that a measuring device, when it shows off a value of a measured observable, works exactly in the same way as if the initial state of the measured system had involved the unique state vector , associated with that value, rather than in the superposition (1.1). One wonders then how a physical effect could be so efficient and universal, and yet be so evanescent that it leaves no sign of its mode of action. One proposes in this paper that this hidden mode of action relies most probably on local entanglement, which is a property resulting directly from the Schrödinger equation, but also an invisible one because of its lack of relation with observables (i.e. self-adjoint operators in Hilbert space [7]). Section 2 recalls the theory of this effect of local entanglement and extends it somewhat. Section 3 suggests that local entanglement between a macroscopic system and the fluctuations in its environment can generate a specific type of incoherence, which would have remained unsuspected hitherto. Section 4 identifies then an explicit effect of "slip in coherence", which acts at the level of a few atoms and would be the elementary agent through which minute transfers occur between the quantum probabilities of various measurement k k channels. The final Section 5 shows how an accumulation of a huge number of invisible slips of this kind could be responsible for collapse without leaving any trace behind. The resulting theory leads to drastic revisions and enlargement of the interpretation of quantum mechanics, without any change in its basic dynamical laws. This reconstruction of interpretation is complex and sometimes disturbing by its modifications, so that an attempt at getting clarity will be privileged here rather than a search for rigorous proofs, which would need much harder work. One may mention that this desire for a minimum of complexity led to a much simplified introduction of local entanglement in Section 2, in spite of the central part of this notion here. 
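The explicit equations referred to in this introduction, the superposition (1.1) and Born's probability law, did not survive text extraction. A standard reconstruction, consistent with the later references to the coefficients c_k and to the probabilities p_k, would be the following; the notation φ_k for the eigenvectors of the measured observable is an assumption made here for readability, not notation taken from the original.

\[
|\psi_A\rangle \;=\; \sum_k c_k\,|\varphi_k\rangle , \qquad\qquad (1.1)
\]
\[
p_k \;=\; |c_k|^2 .
\]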
A more mathematical approach is sketched in an appendix. Local entanglement Local Entanglement is a direct consequence of the Schrödinger equation. Although one may consider it a genuine consequence of the Schrödinger equation, it shows no relation with quantum observables (i.e. no association with self-adjoint operators in Hilbert space [7]). Eliot Lieb and Derek Robinson discovered it in the seventies and gave it this name of "local entanglement" [9]. It drew little attention in the field of interpretation, probably because it was considered mostly as a peculiar effect, specific to many-body physics and not fundamental. The present author rediscovered it in a serendipitous way [10], by ignorance so to say, and called it then "intricacy". One will keep here however its original name of local entanglement, which one will often abridge by "LE", even when using this abbreviation to mean "locally entangled" in place of an adjective. The adjective "local" in this name came in the Lieb-Robinson approach from its association with a spin-lattice model, where the designation of a spin coincides with its location. Although one will recover this association with a location in space, one will rather mean it as associated with individual atoms (or other elementary constituents of a macroscopic system). The "paradigm" of LE with which one will deal consists in a model of a Geiger counter (or a wire chamber, or essentially a gas of atoms in a solid box). It stands then as a well-defined quantum system, which one will denote by B and will first suppose isolated. Another system A, usually microscopic, can interact with this system B and consists in an energetic charged particle, initially in a state (1.1) where the states represent different tracks of the particle. The same approach holds also when the measuring system involves several separate parts, like in a Stern-Gerlach measurement for instance. These parts can also have eventually a space-like relativistic separation. Although one will discuss mainly local entanglement in the case of a detector made of atoms (for instance a gas of argon atoms acting as a detector and a dielectric in a counter), the discussion will be valid also when excited atoms, ions and free electrons are produced by a charged particle. As a matter of fact, local entanglement depends little on the nature of the particles under consideration and this character contributes to make its discussion easier and general. One introduces this local entanglement in a simple case where the system B is in one piece and the initial state of A consists in a unique track, associated with a unique state vector . One will also disregard the charge of this particle and represent simply its interaction with atoms by a potential U, whereas another potential V describes the interaction between pairs of atoms in B. The introduction of LE looks then much like a game: One may imagine that, in addition to its quantum behavior, every particle in the AB system carries a color, either white k k or red. Before interaction, the particle A is red and every atom in B is white. One also assumes that the red color is conveyed by contagion so that, when a red particle interacts with a white one, both of them come out red from their interaction. Moreover, when a particle has become red, it keeps that color forever. Finally, when two white atoms interact, they remain white when they come out from interaction. 
A mathematical expression of this game consists in replacing the two colors, red and white, by two formal "indices of local entanglement", 1 and 0. The rules of contagion can be expressed then by using three 2 2 matrices, in which these indices 0 and 1 denote rows and columns, namely: P 0 can be interpreted as a projection matrix, which picks up an atom with LE index 0 and keeps this index unchanged. The same behavior holds for P 1 , which picks up Index 1 and conserves it. The matrix S picks up an LE index 0 (which indicates an absence of any previous influence of A) and brings it to local entanglement, shown by index 1 (so that the influence of A becomes therefrom imprinted on this atom). The matrices (2.1) are not meant to act on a state vector in a two-dimensional Hilbert space, but only on a conventional index, which an atom carries. An important feature of this family of matrices is that the matrix S † , which could be formally adjoined to S and would bring back local entanglement to no local entanglement, does not belong to the construction. This absence imposes an irreversible character to local entanglement, in accordance with its representation as a contagion of LE. One can make these rules of contagion mathematically explicit: To do so, one replaces the potential U Aa for the interaction between the particle A and an atom a, by a 2 2 matrix Similarly, the potential V ab for interaction between two atoms a and b is replaced by which describes adequately the rules of contagion. This formal construction can be extended easily to the case of several measurement channels k, as the ones in (1.1). Every index k is then associated with local entanglement with a definite state vector k of A whereas the index 0 represents non-local entanglement (i.e. no local entanglement with any channel). Dynamics of local entanglement One turns then to the dynamical evolution of local entanglement. In the case of a unique channel, for instance 1 , the standard wave function ψ of the composite AB system evolves according to the Schrödinger equation Initially, all atoms are still non-locally entangled in a unique component with all these indices equal to 0. The Bose-Einstein or Fermi-Dirac symmetry of the wave function ψ remains valid in every ψ s , because two atoms both carry the same index, 0 or 1, when they come out from an interaction. If one denotes by ψ ' ' this set {ψ s } and one considers it as a vector with 2 N components, one gets a linear equation of evolution with the same abstract form as a Schrödinger equation, namely The operator H' is a 2 N ×2 N matrix. Its matrix elements involve differential operators representing kinetic energy, and potentials (U, V) representing interactions. Before interaction between the two systems A and B, the vector ψ ' ' has only one component in which all the LE indices are 0. This unique component coincides then with the standard wave function ψ and one finds that, because of (2.4), the standard wave function coincides at all times with the sum ψ = Σ s ψ s . (2.5) Several other properties of ψ ' ' show off on the contrary significant differences in meaning and in form between standard quantum dynamics and local entanglement (although the second one amounts only to rewriting the first one): The evolution operator H' in (2.4) is not self-adjoint and, as a consequence (or as the real cause), local entanglement is irreversible under time reversal. It always ends up with a situation where all the atomic states have become locally entangled. 
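The explicit entries of the three matrices P_0, P_1 and S were lost in extraction. The short sketch below (Python, purely illustrative) writes down one plausible set of matrices consistent with the verbal description, with P_0 and P_1 acting as projections on the two LE indices and S as the one-way map from index 0 to index 1, and checks that a simple combination of them reproduces the stated rules of contagion for a pair of atoms. The particular two-atom combination used here is an assumption of the sketch, not the paper's own decomposition of the potential V_ab.

import numpy as np

# LE index basis: e0 = "white" (no local entanglement), e1 = "red" (locally entangled).
e0, e1 = np.array([1, 0]), np.array([0, 1])

# Plausible explicit forms (assumption: columns = incoming index, rows = outgoing index).
P0 = np.array([[1, 0], [0, 0]])   # picks up index 0 and keeps it
P1 = np.array([[0, 0], [0, 1]])   # picks up index 1 and keeps it
S  = np.array([[0, 0], [1, 0]])   # turns index 0 into index 1; its adjoint is deliberately absent

# One combination implementing the two-atom contagion rule:
# white-white stays white-white; any pair containing a red atom comes out red-red.
M = np.kron(P0, P0) + np.kron(P1, P1 + S) + np.kron(S, P1)

assert np.array_equal(M @ np.kron(e0, e0), np.kron(e0, e0))   # two whites stay white
assert np.array_equal(M @ np.kron(e1, e0), np.kron(e1, e1))   # red a infects white b
assert np.array_equal(M @ np.kron(e0, e1), np.kron(e1, e1))   # red b infects white a
assert np.array_equal(M @ np.kron(e1, e1), np.kron(e1, e1))   # two reds stay red
print("contagion rules verified")

The irreversibility is visible in the sketch: no term ever maps index 1 back to index 0, in line with the final situation described above, in which every atomic state eventually becomes locally entangled.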
In the case of several channels, as in the sum (1.1), this final situation coincides with standard entanglement. Moreover, local entanglement stands completely out of the standard interpretation, since no standard observable can extract the LE component ψ s (t) from the wave function ψ(t) as one of its eigenvectors. Probabilities of local entanglement One can also construct a quantum field version for local entanglement [10]. This is convenient for extending the domain of LE and draw more of its consequences in macroscopic systems, not only in gases but also in every system that can be analyzed by means of many-body theory and the use of quantum fields [11]. One can construct quantum fields showing local entanglement and denoted by φ r (x), where the index r can either be equal to some channel index k in (1.1), or equal to 0 for no local entanglement. The standard quantum field φ(x) coincides then with the sum of these LE fields One can also define number densities n r (x) of locally entangled atoms (or no locally entangled ones) as average values of products φ r † (x')φ r (x') over a small space region with center at a point x. Although the sum of these averages is not exactly equal to the standard local density n(x) of atoms, it does so with a negligible error when the state of the system is strongly disorganized, as in a gas for instance. This kind of emergence of a classical behavior is well known in statistical physics [11] and one will often use, in accordance of the present work with a first exploration. One gets thus local probabilities of local entanglement, f r (x), which are defined as the ratios n r (x)/n(x). They are positive and satisfy the sum property One can interpret this relation as meaning that the atoms near a point x have a probability p k for being entangled with a channel k in (1.1) and also a probability f k (x) for being moreover locally entangled with that channel. This set of probabilities is completed by a probability f 0 (x) for non-local entanglement. This existence of local probabilities of local entanglement (and non-LE) is the most remarkable outcome of these results, because these local probabilities are not expressible by means of standard observables and do not belong therefore to the standard category of quantum probabilities. One is therefore already trespassing neatly the frontiers of the standard interpretation. Propagation of local probabilities of local entanglement: Waves of LE When Lieb and Robinson discovered local entanglement, they described their properties of propagation as remarkable "light-cone effects", although the corresponding velocity was unrelated with the velocity of light. The present author considered also these aspects of LE in the case of local entanglement between the particle A and a gas of atoms [10]. One will look now at that case, with emphasis on the physical aspects of these effects. One will use again for that purpose a descriptive formulation where there is only one channel and the influence of Particle A can be illustrated as a transmission of color, A being red and communicating this color to initially white atoms, which carry this color farther away. The collisions between atoms can be considered random and their collective effect is expressed by a probability f 1 (x,t) for the atoms near a point x to be locally entangled with A (i.e. to be red). Another probability f 0 (x,t) is associated with non-local entanglement (or the white color). 
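The sum property (2.7) itself is missing from the extracted text. Given the definition of the local probabilities as f_r(x) = n_r(x)/n(x), with r running over the channels k and the value 0 for non-local entanglement, it presumably reads as follows (a reconstruction by inference, not the original formula):

\[
f_0(x) \;+\; \sum_k f_k(x) \;=\; 1 . \qquad\qquad (2.7)
\]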
The two probabilities sum up to 1 almost exactly, namely An approximation of the evolution of f 1 (x, t) by means of classical statistical physics can be justified by the fact that everything in it depends only on random atomic collisions. Similar approximate methods are known significant, at least qualitatively and regarding orders of magnitude, similar transport processes such as heat diffusion or electric conduction [12]. This kind of evolution, which depends only on collisions of atoms, can be described by a diffusion equation where D is a diffusion coefficient. It can also be linked simply, as far as order of magnitude are concerned, with the mean free path of atoms λ and their mean free time τ by D = λ 2 /6τ. A collision between a non-locally entangled atom and a locally entangled one contributes to the contagion of LE. When it occurs near a point x during a short time interval δt, the associated probability is equal to the product f 1 (x, t) f 0 (x, t) δt/τ. The corresponding increase in f 1 (x, t) owing to contagion is therefore given by Using (2.8) and (2.9), one gets a nonlinear equation for the evolution and propagation of local entanglement, which is When looking at this equation in a one-dimensional space, one finds that it cannot be satisfied by a function f 1 , which would be everywhere positive and non-vanishing as it does in the case of the diffusion equation (2.8). In dimension 3, there must exist a moving boundary S, which separates a region where f 1 (x, t) is positive from a region where it vanishes (this existence of moving fronts is frequent in nonlinear wave equations [13]). One can get an idea of the motion of the front and of the behavior of f 1 by solving numerically this equation (2.10), when it depends only on a one-dimensional variable x. The average velocity of atoms is then λ/τ and its average value along one direction of threedimensional space is v' = 3 -1/2 v (notice that this is the velocity of sound in a dilute gas). Whereas diffusion expands only at time t to a distance of order (Dt) 1/2 , diffusion acting together with contagion in Equation (2.10) yields an expansion of LE at the much larger distance v't, which defines the position of the moving boundary S at that time. Numerical solutions of the propagation equation (2.10) confirm this motion of a wave front S at the velocity v'. The probability of local entanglement f 1 (x, t) increases rapidly from zero to 1 behind this front, over a distance of order the mean free path λ. The environment and its interpretation The second step in the present construction is concerned with the effects of the environment of a macroscopic system. It consists essentially in the following assertion: Proposition 1 Fluctuations in the action of environment can inject into the state of a macroscopic system a specific form of incoherence, which propagates into the system. This effect will be shown a consequence of local entanglement between the macroscopic system and its environment. As long as one uses only the standard interpretation [7], however, one cannot prove the existence of the kind of incoherence in Proposition 1, or express its nature reliably: Two keywords in this proposition, "environment" and "incoherence", do not belong to this interpretation. A third word, "fluctuations", is also external, since it is linked with the notion of "environment", in a sense that does not does not take this environment as a quantum system and is therefore also foreign to the standard interpretation. 
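The explicit form of the propagation equation (2.10) above was also lost in extraction; its description, diffusion with coefficient D = λ²/6τ plus a contagion term proportional to f_1 f_0/τ with f_0 ≈ 1 - f_1, points to a Fisher-KPP-type equation, ∂f_1/∂t = D ∂²f_1/∂x² + f_1(1 - f_1)/τ in one dimension. The sketch below (Python) integrates this assumed form numerically and measures the speed of the resulting front; it is meant only to illustrate the constant-velocity propagation invoked in Section 2, and all numerical parameters are arbitrary.

import numpy as np

lam, tau = 1.0, 1.0                 # mean free path and mean free time set to 1
D = lam**2 / (6.0 * tau)            # diffusion coefficient quoted in the text

L, N = 400.0, 4000                  # domain length and number of grid points
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / (2.0 * D)        # explicit time step, well inside the stability limit

f = np.zeros(N)
f[x < 5.0] = 1.0                    # local entanglement initially confined near x = 0

def front_position(f, level=0.5):
    return x[np.argmax(f < level)]  # first grid point where f drops below 1/2

t, t_end, times, positions = 0.0, 150.0, [], []
while t < t_end:
    lap = np.zeros_like(f)
    lap[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    f += dt * (D * lap + f * (1.0 - f) / tau)
    f[0], f[-1] = 1.0, 0.0          # fixed boundary values
    t += dt
    if t > 50.0:                    # record the front only after it is well formed
        times.append(t)
        positions.append(front_position(f))

speed = np.polyfit(times, positions, 1)[0]
print(f"measured front speed              : {speed:.3f} (in units of lam/tau)")
print(f"pulled-front value 2*(D/tau)**0.5 : {2.0 * np.sqrt(D / tau):.3f}")

The measured speed is of the same order as the sound-like estimate v' quoted in the text. With this illustration of how local entanglement spreads through the detector, one can return to Proposition 1 and the incoherence injected by the environment.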
A suitable framework for this proposition relies on the "cluster decomposition principle" of quantum theory, which is advocated by Steven Weinberg as necessary for a foundation of quantum field theory on a complete set of principles [14]. This principle can be used also to derive Feynman paths from the principles of quantum field theory ( [14], Volume II). A description by Feynman paths can provide a direct approach to the incoherence in Proposition 1: It would be then associated with random phases, originating in external molecules belonging to fluctuations in the environment. Another aspect of Proposition 1 is concerned with the theoretical status of environment. The question is then whether the environment of a well-defined quantum system can be considered itself as being also a quantum system. The previous Geiger counter can be considered well defined, theoretically, in view of its association with a definite Hilbert space and a definite algebra of observables [6,7]. One might think of "defining" its environment as a wider system surrounding the counter, for instance a definite part of the atmosphere around it, in which case this environment would be described by a grand canonical ensemble. But there would always be a still wider environment around this newly defined environment, with no end except for the whole universe. One will not adopt this assumption by Everett of the universe as being a perfect quantum system, because of its "many-worlds" unavoidable consequence [15,16]. One will rather consider that the main consequence of the universe, regarding quantum measurements, is a permanent presence of an environment around any formally well-defined quantum system. One will consider the environment as an objective datum on which much information is available, but which does not constitute by itself an ideal quantum system. The case of a unique molecule and a first axiom of interpretation One will use again the example of Geiger counter, still denoted by B, in which a solid box encloses a gas of atoms. No measured system A is present at the period of time, which one considers now. The environment acting on B is supposed to consist only of a limited external atmosphere, which is under standard conditions of temperature and pressure. To begin with, one considers a unique atmospheric molecule, denoted by M, which hits the box and rebounds on it. The previous description of local entanglement (with the outgoing state of M in the present case) implies that a wave of local entanglement starts from a point x M where the collision occurs and expands from there into the counter One could show more precisely how this collision generates first some phonons, which are locally entangled with the outgoing state of the molecule and begin to propagate local entanglement. This LE passes then to other phonons, under a series of phonon-phonon interactions. A description of this propagation by means of Feynman paths (or Feynman graphs), shows that locally entangled phonons can be distinguished from non-locally entangled ones and can be labeled by an index of local entanglement. When the LE wave fills the box up, the phase it carries is no more active, because it is present everywhere in all the wave functions of B, with no consequence. The place x M where M hits the box is random, as well as the momentum of the incoming molecule and the momentum transfer ∆p resulting from the collision. 
When one considers the initial state of B before the collision as an eigenfunction of ρ B , the outgoing wave function of the MB system carries a phase α = x M .Δp /  , which is also random and is present in all the new eigenfunctions of ρ B after the collision. This is what one means in Proposition 1 when saying that the environment can inject incoherence into B, with the usual meaning of "incoherence" as a presence of random phases. Another significant datum of this example of a unique molecule, is concerned with the time ∆t during which a wave of local entanglement crosses the system B and keeps its wave functions separated into sums of differently locally entangled ones. This delay is of order L/c s where L is a typical scale length of the system B and c s the velocity of the wave (the velocity of sound in this example). One finds this time delay ∆t of order 10 -5 L cm (in units of one second) if L cm denotes the size of the system B in centimeters.. This is a long time, when compared with the time scales of elementary processes, and this duration will be one the main parameters in the present theory. Fluctuations in the action of environment and their theoretical description One comes then to the central part of this discussion, which is concerned with the detailed action of environment on the state of B. One estimates first some parameters. Using the rather long time ∆t during which a wave of local entanglement crosses the system B, one can compute how many waves are present in B at an arbitrary time t. These waves must have arrived during the time interval [t -∆t, t] and their number, which one denotes by N t , is of order 10 24 L cm 2 in the present example. In view of the stationary behavior of the system, one can also expect that as many LE waves disappear on average during that time interval, after they filled up B completely. The fluctuations in these two numbers are of order N f = N t 1/2 , or presently 10 12 L cm . This is quite large. One knows also that the active part of an LE wave (the region where local entanglement is growing behind its front) has a width of order one mean free path of atom (about 10 -5 cm). Various such active regions overlap therefore at every point x in B and their number N x is of order 10 7 . This is again large and much disorder must be therefore permanently active in a non-perfectly isolated macroscopic system. One can give a formal expression for this disorder in ρ B (t), by separating a stable average of this state from its fluctuations: The average action of environment, in the present case, boils down to a pressure acting on the box. It has little interest and one will leave it aside. As far as the gas in the box is concerned, standard methods in statistical physics yield the definite expressions [11] <ρ B > = Z -1 exp(-H/T ), (3.1) where the temperature T is expressed in energy units. Fluctuations can only belong the difference ∆ρ B (t) = ρ B (t) -<ρ B >. It has a vanishing trace and can be conveniently split into a part ρ B+ (t), involving only its positive eigenvalues, and a part -ρ B-(t) involving the negative ones. One gets then (3.2). The choice of a theoretical description for the environment is a nontrivial problem. As far as its effects on the system B are concerned, one can only get a few data regarding the number of collisions by atmospheric molecules on the external box, during the relevant time interval, as well as the random distribution of their place and time of arrival. 
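As a quick plausibility check of the two quantities just introduced, the crossing time ∆t ≈ L/c_s and the phase α = x_M·Δp/ħ, one can insert textbook values for air at room temperature. The inputs below (speed of sound, molecular mass, thermal speed) are standard numbers assumed for the illustration, not data given in the paper.

import numpy as np

c_s   = 3.4e4        # speed of sound in air, cm/s
hbar  = 1.055e-27    # erg * s
m_N2  = 4.65e-23     # mass of an N2 molecule, g
v_mol = 5.0e4        # typical thermal molecular speed, cm/s

for L_cm in (1.0, 10.0):
    dt = L_cm / c_s                      # time for an LE wave to cross the system
    print(f"L = {L_cm:4.1f} cm : crossing time dt ~ {dt:.1e} s")

x_M   = 1.0                              # cm, typical scale of the impact position
dp    = 2.0 * m_N2 * v_mol               # momentum transfer for a head-on rebound
alpha = x_M * dp / hbar
print(f"alpha ~ {alpha:.1e} rad, i.e. effectively random modulo 2*pi")

The crossing time indeed comes out of order 10^-5 L_cm seconds, and the phase is so large that its value modulo 2π is effectively random. The molecular collisions themselves, with their random places and times of arrival, are what the statistical description below is built on.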
Their average effect is only the previously mentioned pressure, and the collisions obey essentially a Poisson distribution. The part of environment, which can act on B during the interval [t -∆t, t], can be restricted to a region of the surrounding atmosphere, which one denotes by E and which is limited by an ideal boundary, at a sufficient distance from the frontier of B for insuring that all the molecules hitting B during that time interval, were always in that region during that time. From the standpoint of quantum mechanics, one could thus describe the environment as a grand canonical ensemble of molecules, located in this region E. This description holds perfectly well for the action on environment on the system B, but not for the reverse effect of B on the state of the atmosphere, which is associated with the return of molecules after collision. A complete quantum account of this coupling between the system B and its environment would require a consideration of a composite system EB, with a quantum state ρ EB . One excludes this approach because it would lead by extension to a quantum state of the universe. A proper quantum description of the environment would be a phenomenological representation by a grand canonical ensemble, but although it would allow a quantum description of the action of environment on B, it would leave aside the back action of B on its environment. This asymmetric status of the system and of its environment was expressed earlier when one said that an environment is not generally representable by a genuine quantum system. As a consequence, the density matrix ρ B (t) of the system B is a random matrix [17], by which one means that its matrix elements in a fixed reference system (for instance the eigenvectors of the Hamiltonian of B), are random numbers. If so, this behavior is also true, automatically, for the matrices ρ B+ (t) and ρ B-(t). To go farther, one needs a guide and the one we shall use is a guess: Could it be that the randomness of the matrix ρ B (t), from its environment, could be the one at work when this system B acts in a measurement and undergoes a random collapse? This is an assumption and, at least at the point where the present theory stands, one will be unable to prove it. This is because a proof needs axioms, and these axioms would have to define an interpretation, which would extend the standard one. This aim is still too far and one will proceed by means of some remarks and other more guesses, as follow: One can get an idea of the relevant fluctuations in environment by considering the fluctuations in the number N t of colliding molecules, which hit the box around B during the time interval [t -∆t, t ], and also the associated fluctuations in number N f . One considers a sample of these fluctuations, which consist in principle of excesses above the average number of collisions, or deficiencies below, their number being N f . A fundamental property of fluctuations, which are that their samples are intrinsically inaccessible, will be used to pick up at random positive ones, which correspond to excesses, and negative ones corresponding to deficiencies. Because of the arbitrariness in this construction, one will suppose valuable (in a future theoretical interpretation) a property, which one can establish by looking at a sample, and which is valid for every sample (with anticipation, one may say that this behavior will be found valid for collapse). 
Regarding the matrices ρ B+ (t) and ρ B-(t), one recalls that in a positive fluctuation by one molecule, the random phase, which is carried by that molecule, passes to an outgoing wave function of B and is absent in what remains of an ingoing wave function. It means that a fluctuating collision either positive or negative, contributes to both ρ B+ and ρ B-, but these two matrices carry different phases (at least in different places). This behavior will be the main one, which one will need regarding these matrices. Finally, one notices that the intervening phases (like the previous α) have random values, but fixed ones. When one averages on the contrary upon all possible samples, the various phases in various samples behave as a set of absolutely random quantities, in which all of them are independent and every one of them randomly contained in the interval [0, 2π] As a last comment, one will consider the "strength" of incoherence, by which one means the value of the common trace W of the two matrices ρ B+ and ρ B-. One approaches by making assumptions, namely the following ones: (i) The action of environment does not spoil appreciably the energy distribution in ρ B, , as given by (3.1). (ii) The external fluctuations are strong enough for making the eigenvectors of a restriction of ρ B to a small energy interval, randomly oriented with respect to the basis of eigenvectors of H B (and <ρ B >) in that interval. (iii) Some eigenvalues of the perturbed matrix ρ B can come close to zero. One can prove that ∆ρ B is a Wigner random matrix [17] under these conditions, and the trace W of ρ B+ and ρ B-is then equal to its maximal value 4/3π. One will not take this result for granted, but will consider it suggestive enough for assuming that the actual values of W are not extremely small. Note: Some readers could wonder how it could be that such a high amount of incoherence would be present almost everywhere, and was not noticed earlier. The answer is that this incoherence is only present in the matrices ρ B+ (t) and ρ B-(t) and their effects cancel in the average value of every observable, which would express an actual observation. This incoherence is therefore invisible. One may mention however that a significant exception exists. It will appear in the forthcoming discussion of collapse, that the probabilities of various measurement channels fluctuate, under the effect of this incoherence. Quite remarkably however, this exception is a confirmation! The reason is that it is concerned with observables belonging to the measured system, and not to the measuring one, in which incoherence holds. One could return the question and say that there could be a unique case where this incoherence would be seen at work, and one sees it everyday in laboratories, where it is called collapse. A ballet of LE waves When one deals with a definite sample involving a number N f . of fluctuations in external collisions, and one looks at all the associated waves of local entanglement in the matrices ρ B+ for instance, these waves look like if they were dancing a ballet. Some of them arose near the beginning of the time interval [t -∆t, t], and they had enough time for reaching a wide development in B. Other ones occurred near the end of this interval and are still close to the boundary B. Most of them are somewhere in-between, with randomly oriented fronts.. Every one of these LE wave carries a specific phase, which one denotes again by α. This phase is present only behind a moving wave front and absent beyond. 
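The value 4/3π quoted above for the maximal trace W can be checked numerically under the stated assumption that the fluctuation ∆ρ_B behaves as a Wigner random matrix. In the sketch below (Python) it is modeled as a GOE sample normalized so that its eigenvalue density follows the semicircle law on [-2, 2]; the normalized trace of its positive part then converges to 4/(3π) ≈ 0.424. The GOE choice and the normalization are assumptions made only to show where the number comes from.

import numpy as np

rng = np.random.default_rng(0)
N = 2000

A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2.0)                # GOE sample
eigs = np.linalg.eigvalsh(H / np.sqrt(N))   # spectrum close to the semicircle on [-2, 2]

W_numeric = eigs[eigs > 0].sum() / N        # normalized trace of the positive part
print(f"numerical W : {W_numeric:.4f}")
print(f"4/(3*pi)    : {4.0 / (3.0 * np.pi):.4f}")

Returning to the ballet of LE waves described above: each wave carries its specific phase α only behind its moving front.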
It is carried by all atomic states at a distance greater than λ behind the front. From this place behind to the front itself, the local probability for an atomic state to carry this random phase decreases gradually from 1 to 0. In the matrix ρ B+ for instance, all these fronts of LE waves move around at the velocity of sound. New ones appear permanently on the boundary and other ones disappear, after having filled up the whole system B by their phase. As in Section 2, every one of them is associated with a local probability f 1 (x, t; α), which expresses the fraction of atomic states carrying the random phase α near a point x. Many fronts of LE waves overlap at every point x in B and their number N x , which one already evaluated, is significantly large in the present example. The number of different random phases, which are carried by different overlapping wave fronts of LE waves, is still much larger: In a definite sample of fluctuations, the index r, which one used in Section 2 with values 1 or 0 for characterizing local entanglement, is now associated with a definite wave and a definite phase α. An eigenfunction of ρ B+ , in the case of a definite sample of fluctuations, is associated with a number of LE waves equal to N f and an equal number of associated phases. Near a point x, there are about N x different waves, and so many indices of local entanglement, which are either equal to 1 or 0 (this number is systematically 1 far enough behind the front, and systematically 0 afore). When one goes from one sample to the set of all samples, the phases occurring near a point x keep their number N x , but become undetermined in the interval [0, 2π]. One can express this situation by two significant propositions, which are as follow: where every component ψ n is a wave function carrying a specific phase and does not extend in space over a distance greater than a mean free path of atoms. Every component ψ n involves at most a limited number of atoms, of order N c = n a λ 3 . (3.4) Proposition 3 The contributions to the matrix ρ B+ (t) of two space regions inside the macroscopic system B, which are separated by a distance larger than an atomic mean free path, are independent. The same propositions hold of course for the matrix ρ B-(t). Proposition 3 can be expressed by considering explicitly two space regions R and R' in B. They are associated with two local density matrices, ρ R+ and ρ R'+ , which are defined respectively by partial traces of ρ B+ (t) over the atoms outside of R, or outside of R'. The proposition results from the fact that these two matrices involve unrelated components, which carry different random phases. One can express this property mathematically by introducing the union of the two regions R and R'. One has then (3.5) Slips in coherence A derivation of collapse begins then by pointing out an elementary mechanism, which will be considered responsible for generating the phenomenon of collapse. One will call this element a "slip in coherence": It consists in a very small alteration in the conservation of quantum probabilities, when two atoms collide under specific conditions. One must take into account that this phenomenon occurs when the system B is interacting with the microscopic system A during a measurement. The state of B is still under the permanent influence of fluctuations in its environment, and involves a high amount of incoherence. The initial state of the measured system A is supposed given by the superposition (1.1). 
Local entanglement between the two systems A and B begins as soon as they interact. A slip in coherence consists then by definition in a collision between two atoms, say a and b, under the following conditions: (i) The collision is incoherent. (ii) The state of Atom a is locally entangled with a state of the system A. (iii) The state of Atom b is non-locally entangled with the system A. The (a, b) collision is governed by the composite density matrix ρ AB , which can be decomposed as in (3.2) into the sum of an average and of two components, ρ ABi+ and -ρ AB-, with opposite signs. Condition (i) restricts the slip to a collision that is governed by these last two matrices. One restricts first attention to ρ AB+ . In view of entanglement between the systems A and B, Equation (3.2), which expresses one of its eigenvectors, becomes It will be convenient, for avoiding long discussions, to consider that the various components ψ Bnk have the same random phase for various indices k of entanglement and the same index n denoting a specific set of phases in a sample of collisions. The absolute values of the various coefficients c k in (1.1) were absorbed for convenience in Equation (3.6) into the norms of the associated components ψ Bnk . In view of Condition (ii) and the fact that local entanglement with a state of A implies algebraic entanglement with that state, the state of Atom a belongs necessarily to some function ψ Bnj . The ab collision can happen sometimes to be coherent, but only when the state of b belongs to a wave function ψ Bnk showing the same index n characterizing the same random phase. The same quantum state of b is present then in every component ψ Bnk for every index k, because of Condition (iii), which requires its non-local entanglement. Conversely, when the state of Atom b carries a phase index n' ≠ n, the collision is incoherent. In view of the very large number of these indices n', one can assert that the number of coherent ab collisions is negligible with respect to the number of incoherent ones. Condition (i) is therefore valid for most collisions and slips in coherence are very frequent events. One may consider now these slip events: The state of Atom b carries then a phase, which is random with respect to the phase of the state of Atom a. All the matrix elements of an ab collision vanish then under averaging on this relative random phase. Since algebraic entanglement is a linear property, which requires a unique global phase in the wave function where it occurs, it loses its power of selection when there is incoherence. All the matrix elements of a collision matrix vanish then when one sums over all possible samples of collision, because it makes phases absolutely random and not only with different values n and n' in different components ψ ABn like the ones in (3.3)). The conclusion is opposite regarding the squares of matrix elements for a collision, because they do not carry the phases of incoming states: They are insensitive to averaging on random phases. Moreover, these squares are identical for all indices k of algebraic entanglement of b with the states various states of A with indices k, because of the absence of local entanglement of Atom b. The slip becomes then a full-fledged contagion of the complete state of Atom b to local entanglement with the state j of A, and accordingly a switch of the full outgoing state of the collision towards algebraic entanglement with this state j. 
j An essential consequence of this slip is a generation of small variations δp k in the quantum probabilities of the various channels. The calculation yields explicitly The factor W, as well as the signs in these equations, express that the collision is governed by the matrix ρ AB+ (all the signs are opposite in the case of -ρ AB-). The factor f j (x) is the probability for validity of Condition (ii) and f 0 (x) does the same for Condition (iii). One recalls that the notation x denotes the place in B where the collision occurs. One can extend the domain of validity of these results to more realistic phenomena in actual measurements: One considered here only the collisions between atoms, and there are such events in a gas acting as dielectric in a Geiger counter. Free electrons and ions are produced by a charged particle, free electrons are accelerated by an electric field, and so on. But there is no essential difference regarding local entanglement: It works in the same way with neutral atoms, excited ones, ions and electrons, even eventually with photons (from the decay of excited atoms). One will mention later on relevant orders of magnitude but, presently, regarding only matters of principles and of consistency, one may say that slips in coherence could be essential agents in collapse, since they provide simply an answer to one of the main associated questions: "How can there be variations in the quantum probabilities of various measurement channels?". (The author looked of course at a variety of other measurements, if only to check whether some of them would produce obvious counterexamples. This review raised interesting new problems, but no obvious counterevidence. One will leave it aside here). A difficulty could have been linked with the non-separable character of quantum mechanics, particularly when a measurement uses several separate detectors, like in a Stern-Gerlach experiment. The necessary change in the present approach is obvious and purely formal. It amounts simply to extend the domain of definition of the position variable x in Equations (4.2) to the union of all space regions inside the detectors. An adaptation to other different parameters in various detectors, or various places in one of them, is trivial/ Collapse as a quantum phenomenon The theory of collapse becomes almost straightforward when one uses Equations (4.2). Its mechanism relies on an accumulation of transitions in quantum probabilities, which result from all the slip events entering among all the atomic collisions during a short time δt. One must of course consider also the effects of the two matrices ρ AB+ and -ρ AB-. Everything boils down to sum the results of equations such as (4.2), with variants taking account of various states j entering in them and of all the places x where collisions occur between unexcited atoms or other particles. One will only consider two atoms (or call "atoms" particles participating in a slip). One will not enter in detailed calculations, which are straightforward, and only look at a few aspects of their results. When doing these calculations, one compares first Equations (4.2) with the same ones holding under slightly different conditions. One dealt for instance with Atom a in the case where it was locally entangled with Channel j and gave rise to small transfers of quantum probabilities from the channels with index j' ≠ j, towards this channel j. 
There are other slips, where the atom playing the part of a is locally entangled with one of these channels j': The average variations in probabilities, δp i and δp j' , cancel each other in these two cases (in view of the symmetry of the right-hand side of (4.2b) in the indices I and j. The standard deviations <(δp j ) 2 > as well as the correlation coefficients <δp j δp j' > do not vanish however and they even add up. When one considers the matrix -ρ AB-after having dealt with ρ AB+ , the results again add up. One gets thus the final results The local probability f 0 (x) for no local entanglement is again given in these expressions by Equation (2.6)) From fluctuations to collapse The linear behavior in δt of the correlations (5.1-2) implies that the set of random quantum probabilities {p i } undergoes a Brownian random process. Philip Pearle suggested rather long ago the possible essential relevance of these processes in collapse [19]. Because of Schrödinger's no-go conclusion [1], which was undisputed (till now), he considered logically that an occurrence of this kind of process would require violation of the Schrödinger equation. A more recent theory of "continuous spontaneous localization" (CSL) has extended more recently Pearle's results [19,4], by a combination with the Ghitardi-Rimini-Weber assumption of a physical effect, which would adding a random action to the evolution under Schrödinger's equation [20]. One does not need here this GRW effect and one considers Schrödinger's equation as "the" unique Law of quantum dynamics. The essential of Pearle's conception stands on a key theorem, which he proved in various ways: According to this theorem, a Brownian random process leads unavoidably to a collapse effect: The various quantum probabilities of most channels vanish successively, until a unique one (say for instance p j ), reaches the fatidic and final value 1. It turns out (and this is the beauty of this theorem) that the Brownian probability for this outcome is identical with the initial value of this quantity p j , in perfect agreement with Born's fundamental law. Presently, one must look carefully at the conditions of validity for Pearle's theorem. They consider the fluctuations as random, infinitely small and infinite in number. When one introduces accordingly a probability distribution Φ(p 1 , p 2 ,…; t ) for the random quantities {p j }, it must satisfy the Fokker-Planck equation The assumptions of the theorem require that a quantity p j can actually reach the value 0, so that the associated channel can disappear. This condition can be expressed explicitly by introducing a Fokker-Planck probability current J, with components Pearle's theorem requires that the component J j of this current does not vanish on the parts of the boundary where some p j is zero. This is necessary for allowing p j to vanish and getting a finite value for the average time of collapse (otherwise, collapse would take an infinite time…). It seems at firs sight that there is a difficulty there, with the correlation coefficients in (5.1-2): They give where ∂ j' is meant as ∂/∂p j' . The first term vanishes on the boundary because of the boundary condition Φ = 0 for p j = 0. The second term vanishes also in view of the explicit dependence (5.1-2) on the p k 's, including the expression of the quantity f 0 (x)). 
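Pearle's gambler's-ruin mechanism can be illustrated by a minimal Monte-Carlo sketch. For two measurement channels, the probability p of the first channel is evolved as an unbiased random walk with state-dependent steps; the diffusion coefficient γ p(1 - p) used below is only a schematic stand-in for the correlations (5.1-2), not the paper's expression. Each run stops when p is absorbed at 0 or 1, and the fraction of runs absorbed at 1 approximately reproduces the initial value of p, which is the Born-rule property of the theorem.

import numpy as np

rng = np.random.default_rng(1)

def collapse_run(p0, gamma=0.01, dt=1.0):
    # Euler step of dp = sqrt(gamma * p * (1 - p)) dW, absorbed at the boundaries.
    p = p0
    while 0.0 < p < 1.0:
        p += np.sqrt(gamma * p * (1.0 - p) * dt) * rng.standard_normal()
        p = min(max(p, 0.0), 1.0)
    return p

for p0 in (0.2, 0.5, 0.7):
    runs = 2000
    wins = sum(collapse_run(p0) for _ in range(runs))
    print(f"initial p = {p0:.1f} : fraction collapsing to this channel = {wins / runs:.3f}")

The sketch presupposes, however, that p can actually reach the boundary values 0 and 1, and this is precisely what the vanishing of the Fokker-Planck current computed above seems to forbid.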
One is thus led apparently to a sad conclusion, which would be that no randomly varying quantity p j would ever be able to vanish: Schrödinger's analysis would eventually need some revision, its essential conclusion regarding the impossibility of collapse would remain. It may be worth mentioning that this impediment came only to attention at the last step in the present research, like if one had been hunting for the snark and got a boojum [21]. The relieving answer came only after a few days, much like a "deus ex machina" last event in a play. This is how it goes: The Fokker-Planck equation relies on infinitely small random variations of a purely mathematical nature. But individual variations are finite in the present theory: they are due to a rather large number of slips during a short time δt (for instance the duration of a two-atoms collision). Every slip yields the finite effects (4.2). If one covers the space in the system B into a lattice of cells with size λ, the Bose-Einstein's or Fermi-Dirac's indistinguishable character of atoms implies that every individual slip is entirely characterized by the cell β where it occurs and the channel k with which there is initially local entanglement (also whether the collision is positive or negative, occurs in ρ AB+ or ρ AB-). In view of Proposition 3, the finite variations in p j, from the slips in different cells, add up, so that these local effects can have a significant global action. There is also another much less obvious magnifying effect: If one denotes by N k β the number of slips of a given type during a given time interval, this is a random integer and it has a very small average value <N k β >. The magnifying effect comes then from the Poisson distribution of the values of t N k β , together with the famous property, which makes this distribution sometimes call the "law of small numbers", namely: The expression ∆N k β = (<N k β >) 1/2 for fluctuations in this distribution implies that the standard deviation ∆N k β is much larger than the average <N k β >, when this average is much smaller than 1. These conditions are satisfied here. There is accordingly a strong magnification of the fluctuations δp j when p j is small. It means that a quantum probability can vanish under the effect of a finite number of conspiring slips, or even a unique one. If the Fokker-Planck equation had been exact, as in CSL theories, the time scale of collapse would have been predicted infinite by Equations (5.1-2). Rough estimates confirm this standpoint, but one will leave its details for future more quantitative studies. Ultimate collapse The main remaining question, which readers could wait for, is concerned with quantitative estimates. Only a rough one will be proposed: In the model of a Geiger counter with which one dealt: Formula (5.1) yields a time scale τ c of collapse, of order , (5.7) where n a denotes the number density of atoms in the gas. A time scale of collapse of the order of 10 -10 s comes out from this estimate as indicative. More careful considerations regarding an actual detector could imply a significant increase in efficiency: If one denotes by ∆ the size of the cloud of free electrons in an actual detector when ionization is progressing, ,one may expect a decrease of the time scale (5.7) by a factor of order λ/∆. The estimate looks then sensible. A further look at matters of principle draws out a more surprising possibility, which could go as far as making the concept of time scale empty. 
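The "law of small numbers" argument invoked above rests on the elementary fact that a Poisson variable with mean well below one has a standard deviation, sqrt(<N>), that dwarfs the mean itself. A short numerical check (the mean values are arbitrary illustrations, not estimates from the paper):

import numpy as np

rng = np.random.default_rng(2)
for mean in (1e-2, 1e-4):
    sample = rng.poisson(mean, 1_000_000)
    print(f"<N> = {mean:.0e} : empirical std = {sample.std():.2e}, "
          f"sqrt(<N>) = {np.sqrt(mean):.2e}, std/<N> = {np.sqrt(mean) / mean:.0e}")

The time-scale estimate (5.7) rests on such finite, conspiring fluctuations; the more surprising possibility raised just above goes beyond any such time scale.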
It is inspired by Wojciech Zurek's attractive proposal of "quantum Darwinism" [22] and is concerned in the present case with an eventual presence of other organized systems, which would stand outside the measuring device and would in some sense "observe" it (like an electric current or a microprocessor can be said to "observe" a detector). These systems would participate in the collapse process and would strongly enhance it, making it much shorter. The outcome would be surprising, from a philosophical standpoint regarding Reality [5]: When a unique measuring channel would come out at last, all the past histories of random evolution in other channels, which would have occurred in the meantime, would be wiped out forever into definitive oblivion. What happened during collapse, in the measuring device and its helpers would leave absolutely no trace and no memory. Except for science fiction writers, the real result with scientific and philosophical value of this study is that collapse would be not only a special and very important consequence of quantum dynamics, but also that its lack of connection with observables means that, intrinsically, as a consequence of the quantum principles, its working would be fundamentally inaccessible by experimental methods. One will not draw more conclusions, except for saying again that this theory of collapse is proposed only as a conjecture, and remains in wait for really thorough investigations. To which one will add however that the present ideas -whatever their value-seemed able to raise new possibilities and shed unexpected light on old problems. Although that does not imply in any way that the proposed conjecture is true, it makes one believe that something deep and true could exist along that direction. The next step would not be then so much to give proofs, which one can presume hard, but to construct first a really new interpretation of quantum mechanics. where N C denotes the average number of atoms in the cell C. These quantities are positive and can be used as local measures of entanglement, as done in Section 2. The sum property (2.7), which makes them meaningful as probabilities, becomes then with all the integrals extending on the cell C. The condition of validity for (2.7) is then One assumes that this property holds because of several conjugated reasons: The number of eigenfunctions of the restriction of the matrix ρ AB or of σ AB to a cell C is extremely large (of exponential order in N C ). The trace (A.9), which involves this restriction of σ AB , is a sum, over all pairs of these eigenfunctions, of so many real terms. As in many-body theory [11], one considers the contribution to (A.11) of the sums of all pairs of different eigenfunctions as negligible, because they involve an extremely high number of real quantities with positive and negative signs. One may considered this sum as negligible, when compared to the sum on diagonal terms, which enter in (A.11). Finally, one acknowledges the very sketchy character of these indications regarding a mathematical formalism: they have at least the interest of showing how much would have to be done for insuring a valid explanation of collapse, if it made sense along the proposed direction.
Deterministic One-to-One Synthesis of Germanium Nanowires and Individual Gold Nano-Seed Patterning for Aligned Nanowire Arrays The current work demonstrates controlled one-to-one synthesis of Ge nanowires (GeNWs) with ~100% yield from Au seed particles. The optimum GeNW growth conditions are found to be size dependent. For <50 nm Au seeds, under-growth, one-to-one growth and over-growth can occur depending on the growth conditions. Growth of large GeNWs appears to be diffusion limited. These results should have generic implications for the synthesis of other types of nanowires via vapor-liquid-solid (VLS) processes. Patterning and positioning of individual Au nanoparticles are achieved by lithographic patterning and used for successful one-to-one nanowire growth. Finally, post-growth flow-alignment is used to obtain quasi-parallel nanowires originating from well-controlled single-particle sites. These results are important to the fundamental science of nanomaterials synthesis and may find important applications in various fields including high performance nanoelectronics. Growth is carried out at temperatures in the range of 270-400°C on substrates decorated with preformed Au nanoparticles (diameters in the range of 5-50 nm), using GeH4 as the precursor for the Ge feedstock. As shown previously, the growth mechanism is well described by the vapor-liquid-solid (VLS) model [5,13,14]. We find that a main advantage of LPCVD synthesis over APCVD is that the GeH4 concentration in the CVD system is better controlled by varying the pressure in the system than by diluting GeH4 with carrier gases. We identify that the partial pressure of GeH4 for optimum GeNW growth is between 4 and 8 Torr, below which the yield of NWs is low due to insufficient feedstock and above which undesirable pyrolysis of GeH4 is observed. Another important advantage of the LPCVD approach is the rapid removal of O2 and H2O species trapped in the system by vacuum pumping. This efficiently reduces the contaminants in the system and makes the growth results highly reproducible between experiments. The main growth result and understanding obtained by the current work is that the optimum GeNW growth condition is size dependent. That is, the growth temperatures at which optimum 1-1 growth of GeNWs can be achieved vary with the sizes of the Au seeds. Under a fixed partial pressure of GeH4 of P = 5 Torr, the optimum growth temperature for d = 20±2 nm Au seeds is around 295°C, under which every Au seed can produce a GeNW (Fig. 1b and c). It can be seen that the total number of nanowires grown matches the number of starting Au nanoparticle seeds, and the nanowires originate from the positions of the starting particles (Fig. 1b). The 100% yield and 1-1 growth at 295°C for the 20 nm particles are robust and have been reproduced with 10 batches of samples. For larger d = 50±3 nm Au seeds, the optimum GeNW growth temperature is ~310°C, at which 1-1 growth can be achieved (Fig. 2b). At a lower temperature of 295°C, not all of the 50±3 nm Au seeds are capable of producing GeNWs (Fig. 2a). On the other hand, if the growth temperature is high (e.g., 325°C), the 50 nm Au seeds are found to produce more GeNWs than the number of starting seed particles and, interestingly, NWs with diameters much smaller than the starting d~50 nm particles are observed (Fig. 2c). This observation suggests that the ~50 nm Au seeds have split into smaller ones to produce smaller NWs at the relatively high growth temperature (Fig. 2d).
The optimum GeNW CVD growth temperatures for various size Au seed particles in the range of 5nm to 50nm are summarized in Fig. 3. A general trend is that smaller Au seeds can nucleate and grow nanowires at lower temperatures. For large particles (d~50 nm), low growth temperature under-produce GeNWs with low yield and too high a temperature tends to over-produce wires due to splitting of Au seeds. These results can be explained by considering several key factors involved in the VLS growth process. The first factor is that the eutectic melting temperature of Ge-Au is size dependent and higher for larger particles. Such size dependence of melting temperature has been documented for single and binary element particles [15] [16] . It is therefore reasonable that larger particles require higher temperature for efficient supersaturation and growth to occur. Secondly, Ge diffusion in the Au particle is an important kinetic factor of the VLS growth process. The size of the Au seeds determines the diffusion length over which Ge must reach to saturate the Ge-Au solution for nucleation and growth of one nanowire from the seed particle. Higher temperatures will facilitate Ge diffusion and thus NW growth from larger particles. The third factor is Ge feedstock supply. Higher temperature will lead to more efficient decomposition of GeH 4 precursor and provide an efficient Ge supply need for larger Au particles. The VLS growth of NWs from large Au seed particles appears to be diffusion limited. At high temperatures, the feeding of feedstock could be rapid while the diffusion of feedstock atoms in Au might not be sufficiently high to supersaturate a large particle. Rather, smaller regions of the Au cluster are supersaturated rapidly, leading to nucleation and growth of smaller NWs from the parent Au particle (Fig. 2d). As control experiments, we have attempted growth of Ge and Si NWs (GeNWs using GeH 4 and SiNWs using SiH 4 ) from ultra large Au particles with d~250nm. Under all experimental temperatures tested, we are unable to achieve 1-1 growth from these large particles and always observed small wire growth due to particle splitting. We believe that the diffusion limitation for large NW growth, and the size dependent NW growth is general to the synthesis of various NW materials via the VLS mechanism. With the 1-1 growth ability, we next pursue patterning of individual Au nanoparticles to achieve 1-1 growth of GeNWs at controlled locations with monodispersed sizes. This is achieved by first using electron-beam lithography to pattern arrays of small Au islands on a substrate (Fig. 4a). The islands are 40 nm on the side with various thickness in the range of 1-10 nm formed by the evaporation and liftoff technique. Upon annealing at 300 ºC, Au atoms in each island are found to aggregate and form well-defined Au dots with controllable diameters in the range of 5-50 nm (dot size dependent on the metal thickness in the 40 nm wide islands). Fig. 4b shows an array of regularly spaced d~20±3 nm Au particles formed by this method. CVD growth using the optimum condition identified in Fig.3 for 20 nm Au seeds leads to successful 1-1 growth of GeNWs from the nanoparticle arrays (Fig.5). This result demonstrates that NW synthesis can be well controlled at the single particle level by making use of the understanding of nanowire growth and state-ofthe-art lithographic patterning technique. We have also explored controlling the orientations of the GeNWs. 
Our approach is to utilize fluid flow [17] [18] to manipulate and re-orient the GeNWs grown from the patterned Au particle arrays, as shown schematically in Fig. 5a. Due to the VLS tip-growth process, one end of an as-grown GeNW is anchored on the substrate (at the point from which the nanowire grew, as highlighted by arrows in Fig. 5b and d) and can act as a pivot point for the wire. After a stream of DI H2O is flowed across the substrate surface, we find that the nanowires are re-oriented towards the flow direction and become quasi-aligned while maintaining the same spacing between their pivoted ends (Fig. 5c and e). To summarize, we have demonstrated controlled one-to-one synthesis of GeNWs with ~100% yield from Au seed particles, with size-dependent optimum growth conditions. Nanowire growth procedure: the growth chamber is first evacuated to its base pressure of 150 mTorr and then heated up to a growth temperature in the range of 270-325°C. Afterwards, the chamber is filled with the precursor species GeH4 (germane, 10% in He, Voltaix Inc., NJ, USA) to the desired growth pressure (~50 Torr total pressure, GeH4 partial pressure ~5 Torr) and kept at that pressure throughout the growth. During this process, GeH4 is flowed at a rate of 10 sccm (standard cubic centimeters per minute). At the end of the reaction, the feeding of GeH4 is stopped and the chamber is pumped to its base pressure again, followed by cooling down to room temperature. One of the criteria of optimum growth is that, after CVD, visual inspection should find the quartz growth chamber free of pyrolytic deposits of GeH4 formed during growth. Manipulating the orientations of nanowires after one-to-one growth: after CVD growth of GeNWs on a substrate with patterned Au dots, a H2O droplet is placed onto the substrate to cover the as-grown GeNWs. An N2 flow is then passed to blow-dry the surface along a desired direction. After this simple process, we find that the nanowires can be quasi-aligned with the flow direction.
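The growth recipe just described can be condensed into a small configuration sketch. This is not code from the paper; the dataclass, field names, and the interpolation helper are illustrative assumptions, while the numerical values (base pressure, temperature window, pressures, flow rate, and the 295°C/310°C optima for 20 nm and 50 nm seeds) are those reported above.

```python
# Hypothetical encoding of the reported LPCVD recipe for GeNW growth.
from dataclasses import dataclass


@dataclass
class GeNWGrowthRecipe:
    base_pressure_mtorr: float = 150.0            # chamber evacuated to base pressure
    growth_temperature_c: tuple = (270.0, 325.0)  # temperature window explored
    total_pressure_torr: float = 50.0             # total pressure during growth
    geh4_partial_pressure_torr: float = 5.0       # optimum GeH4 partial pressure
    geh4_flow_sccm: float = 10.0                  # GeH4 (10% in He) flow rate

    def optimum_temperature_c(self, seed_diameter_nm: float) -> float:
        """Very rough linear interpolation of the size-dependent optimum
        quoted in the text: ~295 C for 20 nm seeds, ~310 C for 50 nm seeds."""
        return 295.0 + (310.0 - 295.0) * (seed_diameter_nm - 20.0) / (50.0 - 20.0)


print(GeNWGrowthRecipe().optimum_temperature_c(35))  # a seed size between the two reported points
```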
Hearts are NOT Made to Be Broken: Expert Opinion on Amyloid Light-Chain Cardiac Amyloidosis Amyloid light-chain (AL) amyloidosis is a rare systemic disease caused by plasma cell dyscrasia. These plasma cells produce excess Ig light chains, which can misfold, aggregate, and deposit in tissues, resulting in toxicity and organ dysfunction. The heart is among the most commonly affected organs and cardiac involvement is associated with significantly worse outcomes. Despite advances in the treatment of the underlying plasma cell dyscrasia, the survival of patients with advanced heart involvement is extremely poor. The median survival of patients with cardiac AL can be as short as 6 months from diagnosis, depending on severity of cardiac involvement. It is a condition of high unmet medical need. Timely diagnosis is essential, yet detecting the disease is fraught with challenges, not least a lack of recognition among clinicians. In addition, the treatments that are currently available, which include anti-plasma cell dyscrasia chemotherapy and immunotherapy, are far from ideal, offering complete response rates of around 50% and organ response rates of between 40–50%. However, new antibodies with the potential to target the amyloid deposits have demonstrated encouraging results in early phase studies and are now moving into late-stage development. Giovanni Palladini, Amyloidosis Research and Treatment Centre Foundation, San Matteo, Italy, and Department of Molecular Medicine, University of Pavia, Italy, explained how these new agents have the potential to change the AL amyloidosis treatment landscape and calls on cardiologists everywhere to consider AL amyloidosis when assessing patients with heart failure (HF). AMYLOID LIGHT-CHAIN AMYLOIDOSIS OVERVIEW AL amyloidosis is a life-threatening disease caused by plasma cell dyscrasia and can affect multiple organs, including the heart and kidneys. Small plasma cell clones or, more rarely, B cell clones produce monoclonal free light chains (FLC) that misfold and aggregate, forming insoluble amyloid fibrils, which are deposited in body tissue. 1 The accumulation of amyloid deposits leads to tissue damage and organ dysfunction, 2 while amyloid FLC can also cause cytotoxicity. 3 It is a rare condition, with approximately 40.5 cases per million people in 2015 in the USA. 2 Prevalence increases in line with age and the majority of patients are over the age of 65 years. 2 Palladini said: "Almost all organs can be affected by this condition. The most common are the heart, the kidneys, and the liver, but also the peripheral nervous system, the soft tissues, and so on. 4 It can be localised or systemic, and when systemic disease affects the vital organs, it is usually a very severe condition." Patients with advanced stage disease with cardiac involvement are at high risk of mortality, Palladini went on. The median survival of patients with cardiac AL can be as short as 6 months from diagnosis, depending on severity of cardiac involvement. 4 UNMET NEEDS IN AMYLOID LIGHT-CHAIN AMYLOIDOSIS DIAGNOSIS Despite such dire potential consequences, the condition is often missed, said Palladini. "If the patient knows they have a monoclonal component, they are followed by a haematologist and, sometimes, they will be detected early, at a pre-symptomatic stage," Palladini said. "More often, however, the patient will experience symptoms." 
These may include heart failure with peripheral fluid retention and dyspnoea, nephrotic syndrome with fluid retention and hepatomegaly, and peripheral and autonomic neuropathy. 5 "AL amyloidosis is difficult to diagnose because it mimics other conditions and, when there is single organ involvement, there is a lack of clues. If you have a patient with heart failure and nephrotic syndrome, for instance, you may think of amyloid. But if it is just heart failure, it can be more difficult." Diagnosis is often delayed, and 20% of patients with AL amyloidosis are not correctly diagnosed until 2 years or more after becoming symptomatic. 5 A lack of awareness of the condition among cardiologists can also hamper effective diagnosis in patients presenting with HF, Palladini went on. Studies have shown that up to 29% of patients with HF with preserved ejection fraction have cardiac amyloidosis. 6 Yet, while cardiologists often consider wild-type amyloid transthyretin (ATTRwt) amyloidosis during diagnostic workup, the same does not tend to be true for AL amyloidosis. "ATTRwt is probably more prevalent, and clinicians are better trained in detecting it," Palladini said, highlighting an important distinction between the two conditions. "ATTRwt progresses over years, but cardiac AL amyloidosis progresses over weeks. 7 If you see a patient and do not think of AL, you will miss the condition and the patient will die during the diagnostic procedures." Palladini believes every patient with suspected cardiac amyloidosis should be screened for monoclonal gammopathy. "Cardiologists could be facing a medical emergency and not be aware of it if they are only thinking of ATTRwt amyloidosis," Palladini said, explaining that the two will appear the same on echocardiogram. Looking for a monoclonal component "is very easy" with adequate technology, Palladini said. Serum FLC ratio in addition to immunofixation electrophoresis of both serum and urine has a 99% sensitivity for monoclonal gammopathy, 8 and so should be carried out regardless of serum protein electrophoresis results. If the result is positive, the patient will need a biopsy at a referral centre. If the screening does not show a monoclonal protein, "you have time to look for ATTRwt," Palladini went on. UNMET NEEDS IN AMYLOID LIGHT-CHAIN AMYLOIDOSIS TREATMENT AL amyloidosis will be diagnosed and staged, according to the Mayo Clinic system 9,10 or the most recent Mayo Clinic European modification, 11 at a specialist referral centre. "We need cardiologists to send their patients to us as early as possible," Palladini said, explaining that current treatment options were slow acting. Typically, the initial regimen is administered over 6 months, followed by a maintenance regimen for a further 24 months. 12 Supportive care, Palladini went on, is delivered by amyloidologists and cardiologists working in partnership over the course of the disease. Palladini explained that the presentation of patients with AL amyloidosis was "extremely heterogeneous" and that response to therapy was variable. "You have patients without heart involvement who can live longer, even if they do not respond to treatment, and you have patients with very advanced heart involvement for whom the opposite is true. These patients may only survive a few months and can die even if they do respond to treatment." 
Current treatment approaches focus on using anti-plasma cell chemotherapy or immunotherapy to eradicate the precursor clone, with the standard of care in the USA and Europe being daratumumab plus cyclophosphamide/ bortezomib/dexamethasone (DARA+CyBorD). 13 "If you kill the plasma cell, the concentration of light chains drops, the whole process stops. If you stop the production, you eradicate the toxic precursor, but the amyloid deposits are still there," Palladini explained, adding that damage reversal was a slow process. Stem cell transplantation could be considered in eligible patients with an inadequate response to DARA+CyBorD, Palladini said. 14 However, advanced heart involvement contraindicates stem cell transplant. 15 "With the exception of European modification Stage IIIb patients, we see a complete response (CR), meaning you no longer see the monoclonal protein, in approximately 55% of cases with DARA-based therapy. At 6 months, approximately 40-50% of patients achieve organ response or cardiac improvement." 14 Such statistics leave much room for improvement, Palladini went on. "It is much better than what we could do in the past, but it is still not enough. We want 100% CR rates, and we want 100% cardiac response rates." There are clear unmet needs for treating advanced disease, in which DARA+CyBorD is not indicated; treating refractory disease; preventing relapse; and removing the amyloid deposits, Palladini said. FUTURE TREATMENT OPTIONS Researchers are currently attempting to address these unmet needs in a variety of ways. Approaches have included 'borrowing' drugs from multiple myeloma to treat relapsed/refractory AL amyloidosis. Patients with a 11;14 translocation, which accounts for approximately 50% of people with AL amyloidosis, may benefit from inhibition of the BLC2 gene, for example. 16 Studies of chimeric antigen receptor T-cell therapy have also yielded interesting results in patients who were heavily pre-treated and refractory. "Only five patients have so far been reported, but the results were extraordinary," said Palladini. All achieved CR. "It is encouraging, but it is a very complicated treatment to deliver, and our patients are very frail." 17,18 In terms of fostering cardiac response in patients with advanced disease, attention has turned to using passive immunotherapy to neutralise existing amyloid deposits. Two antibodies, birtamimab and CAEL-101, are currently under investigation. While two studies of birtamimab, formerly NEOD001, were halted in 2019 due to futility, a post hoc analysis of the results found a significant survival advantage among patients with Mayo Stage IV or advanced disease. 19 This, Palladini explained, has led to the design of a randomised, placebo-controlled, Phase III trial in this cohort, which is currently enrolling. 20 CAEL-101 has long been recognised for its ability to bind to amyloid deposits while not recognising the normal light chain, Palladini explained. "This antibody has been around for many years and one of its first applications was imaging. While it is not used routinely, this is very important for us because we already have the in vivo evidence that it binds to the amyloid deposits." A Phase Ia/b study, published in 2021, found an improvement in cardiac response, as well as cardiac function measures such as global longitudinal strain following CAEL-101 treatment. 
21 Based on these results, researchers have now designed two Phase III, placebocontrolled trials in patients with advanced heart involvement, defined as Mayo Stage IIIa 22 and IIIb. 23 All three Phase III studies (one with birtamimab and two with CAEL-101), which have a primary endpoint relating to overall survival, will see the antibodies administered alongside bortezomib-based therapy. The results, expected to be published in the near future, will ascertain whether these novel agents can accelerate cardiac response in people with advanced disease. "I think this is the right approach. With a good number of patients and a robust endpoint, we hope to have an answer to this unmet need and have something to help our patients," said Palladini. "But it would also be a very important proof of principle. If we can see results in this very disadvantaged patient population, then we should be able to see them in everybody." This, Palladini went on, could "open the door to many future developments." "In the future, we may be able to forget the chemotherapy: we could target the plasma cells and the amyloid deposits just with immunotherapy," Palladini said. "It is very exciting."
Hierarchy of evidence referring to the central nervous system in a high-impact radiation oncology journal: a 10-year assessment. Descriptive critical appraisal study ABSTRACT CONTEXT AND OBJECTIVE: To the best of our knowledge, there has been no systematic assessment of the classification of scientific production within the scope of radiation oncology relating to central nervous system tumors. The aim of this study was to systematically assess the status of evidence relating to the central nervous system and to evaluate the geographic origins and major content of these published data. DESIGN AND SETTING: Descriptive critical appraisal study conducted at a private hospital in São Paulo, Brazil. METHODS: We evaluated all of the central nervous system studies published in the journal Radiotherapy & Oncology between 2003 and 2012. The studies identified were classified according to their methodological design and level of evidence. Information regarding the geographical location of the study, the institutions and authors involved in the publication, main condition or disease investigated and time of publication was also obtained. RESULTS: We identified 3,004 studies published over the 10-year period. Of these, 125 (4.2%) were considered eligible, and 66% of them were case series. Systematic reviews and randomized clinical trials accounted for approximately 10% of all the published papers. We observed an increase in high-quality evidence and a decrease in low-quality published papers over this period (P = 0.036). The inter-rater reliability demonstrated significant agreement between observers in terms of the level of evidence. CONCLUSIONS: Increases in high-level evidence and in the total number of central nervous system papers were clearly demonstrated, although the overall number of such studies remained relatively small. INTRODUCTION Evidence-based medicine has become essential to clinical and research actions since it was formally proposed in 1990. 1 The importance of evidence-based medicine concepts was highlighted in an article published in the British Medical Journal in 2007, in which the editors described the emergence of evidence-based medicine as one of the 15 most important milestones since the foundation of the British Medical Journal (1870). 2,3Henceforth, critical evaluation of evidence has become an important tool for assessing research quality and progress.Clinical research can be classified into levels of evidence, which are based on evaluating and interpreting evidence.The level of evidence is closely related to the likelihood that a piece of research will produce valid and reliable results. Radiotherapy is no different in this regard.The pursuit of the best evidence is changing and is beginning to follow the trends reported in the 1990s. 4As an example, conducting a quick Medline search associating "randomized trials" and "radiation oncology", 211, 144, 27 and 5 studies for the years 2012, 1996, 1981 and 1970 are identified, respectively.This finding demonstrates the evolution and intensification of research applied to radiotherapy, with a 40-fold increase in publications, over this time period.Moreover, high-quality studies play a fundamental role in medical journals.From a broader perspective, the methodological quality and level of evidence of published articles are important determinants of how many times an article is cited, which therefore affects the impact factor of that journal and can also play a major role in the clinical transfer of knowledge. 
5,6This has become an essential aspect of the evaluation of scientific journals. 6 2003, prominent journals began to use evidence hierarchies to rank the published studies. 7,8As a result, evidence-based medicine concepts were adopted by the conferences and symposia of the main specialties.Following this paradigm, great efforts have been applied within radiation oncology to follow the evidencebased medicine trend.Nevertheless, to date, there has been no systematic assessment of the quality of scientific production in several areas of radiation oncology. 4,5 OBJECTIVE The aim of this study was to identify central nervous system studies published in Radiotherapy & Oncology (Elsevier Ireland) over the last decade (2003-2012), classify the type of study and evidence levels according to evidence-based medicine criteria and observe the inter-rater agreement in the classification of the studies included. METHODS Using electronic databases, two researchers independently evaluated all studies published in all editions of the major European radiation oncology-specific journal (Radiotherapy & Oncology, Elsevier Ireland, accessed at http://www.thegreenjournal.com) between 2003 and 2012.This journal was chosen because it is important in the field of radiation oncology field; it is indexed in at least one major international database; and it is, so far, the radiation oncology journal with the highest impact factor.We conducted a descriptive critical appraisal study. Studies in this journal were initially screened based on their titles and were classified as eligible, potentially eligible or not eligible.The sole inclusion criterion was that they needed to be clinical studies relating to the central nervous system that were published between 2003 and 2012.Thus, presence of the following topics in the title counted for this initial screening: metastatic central nervous system; low-grade glioma; high-grade glioma; pediatrics and central nervous system (medulloblastoma, ependymoma or astrocytoma); central nervous system lymphoma; benign tumors (meningioma, schwannoma or arterial venous malformations); spinal cord, orbital and skull-base tumors; and experimental central nervous system studies.After this initial screening, the selected studies (eligible and potentially eligible) were first reassessed using their abstracts and then by using their full texts.All studies relating only to dosimetry were excluded. A third evaluator resolved any disagreements. The studies thus identified were assessed by two examiners and were subsequently classified according to the methodological design: 1. systematic reviews; 2. randomized or non-randomized clinical trials; 3. cohort studies; 4. case-control studies; 5. case series; and 6. basic science studies.The studies were also classified according to their level of evidence using the guidelines of the Oxford Centre for Evidence-based Medicine: systematic reviews of randomized clinical trials, level I; randomized clinical trials, level II; cohort and case-control studies, level III; case series, level IV; and narrative reviews and other designs, level V.This is a widely used classification method that has been adapted for use within the radiation oncology literature. 9This categorization was done after reading the full texts of the eligible studies. 
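The design-to-level rule described above is a simple lookup. The sketch below is illustrative only (the function and dictionary are not from the study); the mapping itself follows the Oxford classification as quoted in the Methods.

```python
# Sketch of the study-design-to-evidence-level rule used in this study
# (Oxford Centre for Evidence-based Medicine classification, as quoted above).
OXFORD_LEVEL = {
    "systematic review of randomized clinical trials": "I",
    "randomized clinical trial": "II",
    "cohort study": "III",
    "case-control study": "III",
    "case series": "IV",
    "narrative review": "V",
    "other design": "V",
}


def evidence_level(study_design: str) -> str:
    """Return the evidence level for a study design, defaulting to level V."""
    return OXFORD_LEVEL.get(study_design.lower(), "V")


print(evidence_level("Case series"))  # -> "IV"
```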
For all the studies ultimately included, we also obtained information regarding the geographical location at which the study was performed, institutions/departments and authors involved in the publication, main condition studied, main disease investigated and time of publication.We also examined the productivity relating to radiotherapy for the central nervous system in each department over the 10-year period covered by this analysis.The following parameters were stratified using the following parameters: time of publication (period Statistical analysis The assumption of normal distribution in the sample was assessed using the Kolmogorov-Smirnov test.Cohen's kappa test was used to assess reliability and to evaluate the internal consistency of the inter-rater classifications.The magnitude of agreement was determined based on the proposal of Landis and Koch: I. < 0, no agreement; II.0 to 0.20, slight agreement; III.0.21 to 0.40, fair agreement; IV. 0.41 to 0.60, moderate agreement; V. 0.61 to 0.80, significant agreement; and VI.0.81 to 1.00 almost perfect agreement. 10,11The chi-square test was used to evaluate the proportions of papers at evidence levels I, II and III between the two periods.We considered P-values from two-sided tests < 0.05 to be statistically significant. RESULTS We identified 3,004 studies published over the 10-year period evaluated.Of these, 135 were initially selected (central nervous system disease), from which 10 were then excluded.Thus, 125 studies (4.2%) were considered eligible and were included in this analysis (Figure 1).There was an average of 300.4 publications per year during the study period (which included an average of 13.5 publications per year relating to the central nervous system).We noted an absolute increase in the number of published papers of 33% overall and 41% in relation to the central nervous system, from period 1 to period 2 (Table 1). Table 2 shows the distribution of the central nervous system studies according to the geographical location at which they were conducted.European studies accounted for more than 60% of the published data over the entire period and, in comparison with the rest of the world, this difference was statistically significant (P = 0.0306). Stratification according to disease classification showed that the majority (74%) of the studies were related to central nervous system metastasis, followed by high-grade gliomas and benign tumors (Table 2). We also noted that the average numbers of authors and departments involved in the studies were 6.86 and 3.29, respectively; 67% of the first authors were radiation oncologists (Table 2) Among these studies, 66.5% were case series (prospective and retrospective; number published (n) = 81 articles); 8% were prospective controlled studies (not randomized) (n = 10); 1% were cohort studies (n = 1); 1% were case-control studies (n = 1); 6% were cross-sectional studies (n = 8); and 3% were review articles (n = 5).Systematic reviews (n = 5) and randomized clinical trials (n = 7) accounted for approximately 10% of all the published papers.Other studies, which included case reports, were responsible for 5% of the publications (n = 7).In analyzing the level of evidence according to year, we observed that there were greater numbers of published papers with evidence levels I, II and II and lower numbers with evidence levels IV and V in period 2 (2008-2012) than in period 1 (2003-2007) (P = 0.036). 
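The Landis and Koch agreement bands quoted in the statistical analysis above can be written as a small classifier. This is an illustrative sketch, not part of the study; only the numeric boundaries come from the Methods.

```python
# Sketch of the Landis and Koch interpretation of Cohen's kappa, using the
# band boundaries quoted in the Methods above.
def landis_koch_band(kappa: float) -> str:
    if kappa < 0:
        return "no agreement"
    bands = [
        (0.20, "slight agreement"),
        (0.40, "fair agreement"),
        (0.60, "moderate agreement"),
        (0.80, "significant agreement"),
        (1.00, "almost perfect agreement"),
    ]
    for upper, label in bands:
        if kappa <= upper:
            return label
    return "almost perfect agreement"


print(landis_koch_band(0.69))  # the inter-rater kappa reported below -> "significant agreement"
```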
9e Scientific Journal Rankings index showed average values of 1.38 and 1.89 for periods 1 and 2, respectively.This higher index for period 2 presented a tendency towards a statistically significant difference in relation to period 1 (P = 0.0528).The inter-rater reliability for the classification of study type according to the kappa statistic demonstrated significant agreement between the observers (kappa = 0.69). DISCUSSION In this 10-year single-journal analysis, we found that the studies published within the scope of the central nervous system increased in quality and number, although significant representation in the journal Radiotherapy & Oncology is still lacking (< 4.5% of published papers).We also found that case series (retrospective and prospective) represented the majority of central nervous system papers published in this journal.Furthermore, level of evidence was found to be a reproducible tool, and secondary tumor (metastasis) research was well represented in this journal.The major strength of our data is that they represent, to the best of our knowledge, an original study with a representative period of evaluation in a single journal. Moreover, this analysis was based on formal and systematic data-gathering and evaluation, and our results present sequential assessment, including formal statistical analysis and interreliability analysis based on Cohen's kappa test. In the United States, according to the national database, primary central nervous system tumors account for less than 3% of all diagnosed neoplasms. 12Similarly, primary and metastatic central nervous system tumors, together with benign central nervous system diseases, represent vast opportunities for treatment improvements, with implementation of new markers and prognostic factors.This is an important field for radiotherapy research, including newer approaches using stereotactic radiosurgery.In the field of central nervous system tumors, radiotherapy plays a major role in the management of almost all types of malignant brain tumors.Moreover, a high level of evidence can play a major role in treatment decisions. Among patients diagnosed with cancer (all anatomical sites), approximately 54% of all of them will require some radiation treatment during their lifetime, and 12% will require re-irradiation. 13Based on evidence-based guidelines, the central nervous 1. Frequencies of the hierarchy of evidence, grouped according to the period and region of origin system shows a highly recommended overall optimal radiotherapy utilization rate (approximately 92-93%). 13In a comprehensive analysis in which the objective was to estimate the ideal proportion of patients with newly diagnosed central nervous system neoplasms who could benefit from external-beam radiotherapy, most of the recommendations were based on evidence level III. 
146][17][18] Based on our analysis, most of the studies presented low-level evidence, such as prospective and retrospective case series (66%).Higher evidence levels such as systematic reviews and randomized clinical trials represented only approximately 10% of all central nervous system published papers.In comparison with case series, randomized trials involve greater numbers of ethical issues and higher costs.The incidence of primary central nervous system tumors, combined with the tendency to treat them in large specialized centers, may explain the findings of this study.It also needs to be taken into consideration that the journal Radiotherapy & Oncology accepts physical contributions, dosimetry studies, molecular biology assays and other types of non-formal clinical publications.In addition to the important role of such articles with regard to development of radiation oncology, they are classified as presenting evidence levels IV or V according to the Oxford Criteria. 9Nevertheless, there was an increase in the evidence level of published central nervous system articles (in our data) over the years, particularly in the more recent period. In a manner similar to our study, Yarascavitch et al. (at McMaster University, Hamilton, Canada) quantified the level of evidence in 660 eligible articles in the neurosurgical literature in order to determine the changes over time and the predictive factors for higher-level evidence. 19Levels I and II accounted for only 1 in 10 neurosurgical clinical papers in top journals, and papers with larger sample sizes were significantly associated with higher level of evidence.These authors concluded that there is a need for better evidence in papers published within this field and that patient management and the publication of prospective studies may be improved by education and the adoption of level of evidence.[22][23][24] Similarly, it is important to note that level of evidence can be correlated with journal impact factor and that increasing numbers of studies with high-level evidence have been observed in palliative and orthopedic settings. 22,25,26 The limitations of the current analysis lie in the fact that central nervous system articles may not be well represented in the journal chosen for analysis here because other radiation oncology journals that were not included in the electronic search also publish articles relating to the central nervous system.In addition, specific journals and higher-impact journals may account for significant numbers of published papers relating to the central nervous system.These were not assessed in the present analysis but may have had an impact on the data presented.Furthermore, a wider search of the literature might lead to a more optimistic outlook regarding the proportion of high-quality studies. In this study, a training workshop on manuscript classification was conducted initially.The Oxford system of levels of evidence seemed to be a feasible instrument for evaluating studies, with a significant degree of consistency. 9These findings and those in other studies emphasize the importance of specific training for individuals who are responsible for determinations relating to the quality of evidence. 
28Finally, our study represents a possible landmark for future studies and other evidence-based assessments on the central nervous system within the field of radiation oncology research.Moreover, the present study may result in new research opportunities, such as assessment of the internal and external validity of other study features and evaluations on high-impact and specialized journals.In particular, it would be interesting to evaluate how radiation oncologists manage and comprehend evidence-based medicine. 1 : 2003-2007; and period 2: 2008-2012); geographical location; level of evidence; and Scientific Journal Rankings index (www.scimagojr.com).This index measures the impact that a single published paper has, and hence the scientific influence of an average article in a journal.It thus expresses the extent to which an average journal article is central to the global scientific discussion. , and 85% of the first authors only had a single article published in Radiotherapy & Oncology as the first author.Twelve institutions (University Hospital of Heidelberg, Institute of Southern Switzerland, University Hospital Zurich, University Hospital Groningen, Tata Memorial Hospital, University of Wisconsin Medical School, McGill University Health Center, ''S.Maria" Hospital, VU University Medical Center, St. Jude Children's Research Hospital, Ludwig-Maximilians-University Munich and San Raffaele Scientific Institute) (13.3% of the total)were responsible for 40 published papers (32% of the total) and 78 institutions were responsible for the other 68% of publications over this 10-year period. Table 2 . Central nervous system papers published, according to region, diagnosis and first author over the 10-year period This finding emphasizes that there is an urgent need to expand the data relating to evidence-based oncology.Regarding the origin of the articles included in this study, in comparison with the rest of the world, Europe showed the largest number of published papers within the field of the central nervous system (60%).Because during their four years of training.We consider that this sort of analysis is important for future studies and for knowledge of referral centers for future postgraduate training.
Candida albicans-Conditioned Medium Protects Yeast Cells from Oxidative Stress: a Possible Link between Quorum Sensing and Oxidative Stress Resistance ABSTRACT Candida albicans, the most frequent fungal pathogen of humans, encounters high levels of oxidants following ingestion by professional phagocytes and through contact with hydrogen peroxide-producing bacteria. In this study, we provide evidence that C. albicans is able to coordinately regulate the oxidative stress response at the global cell population level by releasing protective molecules into the surrounding medium. We demonstrate that conditioned medium, which is defined as a filter-sterilized supernatant from a C. albicans stationary-phase culture, is able to protect yeast cells from both hydrogen peroxide and superoxide anion-generating agents. Exponential-phase yeast cells preexposed to conditioned medium were able to survive levels of oxidative stress that would normally kill actively growing yeast cells. Heat treatment, digestion with proteinase K, pH adjustment, or the addition of the oxidant scavenger alpha-tocopherol did not alter the ability of conditioned medium to induce a protective response. Farnesol, a heat-stable quorum-sensing molecule (QSM) that is insensitive to proteolytic enzymes and is unaffected by pH extremes, is partly responsible for this protective response. In contrast, the QSM tyrosol did not alter the sensitivity of C. albicans cells to oxidants. Relative reverse transcription-PCR analysis indicates that Candida-conditioned growth medium induces the expression of CAT1, SOD1, SOD2, and SOD4, suggesting that protection may be mediated through the transcriptional regulation of antioxidant-encoding genes. Together, these data suggest a link between the quorum-sensing molecule farnesol and the oxidative stress response in C. albicans. Candida albicans is a normal inhabitant of the oral cavity and the gastrointestinal and genitourinary tracts, where it persists in equilibrium with the host's microflora; however, alterations in the physiological or immunological status of the host can lead to opportunistic infections ranging from mild mucosal lesions to life-threatening systemic disease (12,51). The success of C. albicans as an opportunistic pathogen stems in part from its ability to adapt to the many site-specific environmental, and potentially toxic, challenges within the human body. For example, C. albicans frequently encounters high levels of reactive oxygen species (ROS), including superoxide anions, hydrogen peroxide, and hydroxyl radicals, from both endogenous and exogenous sources (44). ROS can damage almost every essential cellular component, resulting in enzyme inactivation, membrane disruption, mutations, and ultimately cell death (9). Recent studies have implicated ROS as a central regulator of programmed cell death in Saccharomyces cerevisiae (42), C. albicans (54), and Aspergillus fumigatus (48). When exposed to toxic levels of hydrogen peroxide, C. albicans displays several apoptosis-like markers, including externalization of phosphatidylserine, nuclease-mediated double-strand DNA breakage, and condensation of chromatin into the nuclear envelope (54). The major source of exogenous oxidative stress for pathogenic fungi is the phagocytic cells of the host's immune system. Phagocytic cells play a key role in both innate and acquired resistance to mucosal and systemic candidiasis (2,32,64). 
Optimal microbial killing requires the production of metabolites as well as the action of various enzymes and peptides contained within the secretory granules of phagocytes (33,64). More specifically, the generation of reactive oxygen and nitrogen intermediates (for example, hydrogen peroxide, superoxide anions, nitric oxide, nitric acid, peroxynitrite, and hypochlorous acid) appears to play an important role in pathogen killing by neutrophils. Although phagocyte-derived oxidants are a principal source of oxidative stress for invading pathogens, other mechanisms of oxidant production exist. A number of microorganisms, for example, Enterococcus faecalis (25), Lactobacillus species (67), and alpha-hemolytic streptococci (3), produce extracellular ROS. Oral streptococci (Streptococcus oralis, Streptococcus mitis, Streptococcus sanguis, and Streptococcus gordonii) have been shown to release hydrogen peroxide into the surrounding medium, where accumulated levels are reported to reach 0.45 to 9.8 mM (3,15,58). Because hydrogen peroxide-generating bacteria can be inhibitory or toxic to adjacent fungal cells, hydrogen peroxide released by these sources is likely to limit the proliferation of Candida within the host (14). Fungal cells have evolved specific strategies to neutralize ROS (primary defense) and to repair or remove oxidized molecules (secondary defense) (reviewed in references 16, 17, and 45). In response to phagocytic attack, C. albicans initiates highly coordinated changes in its transcriptional program, which include (i) a switch from glycolysis to gluconeogenesis, (ii) activation of fatty acid degradation, (iii) downregulation of translation, and (iv) induction of oxidative stress responses and DNA damage repair (39,40,57). Induction of the oxidative stress response typically leads to the synthesis and activation of both antioxidant enzymes (superoxide dismutase, catalase, and flavohemoglobin) and nonenzymatic metabolites (trehalose, mannitol, and melanin). Not surprisingly, several studies have found a correlation between inactivation of the antioxidant stress response and decreased survival of C. albicans following oxidant attack (28,43,49,68). While the oxidative stress response has been characterized in some detail at the transcriptional level in S. cerevisiae, little is known about the molecular mechanisms responsible for resistance to oxidative stress in C. albicans. In this study, we show that C. albicans is able to coordinately regulate the oxidative stress response at the global cell population level by releasing substances into the medium, which impart on adjacent cells an increased resistance to oxidative stress. We show that farnesol, a heat-stable quorum-sensing molecule, is partly responsible for this protective response. Together, the results presented herein suggest that autoregulatory molecules contribute to oxidative stress resistance in the human pathogen C. albicans. MATERIALS AND METHODS Growth conditions. C. albicans SC5314 and ATCC MYA-2430 (also known as strain A72) were maintained as frozen glycerol stocks at Ϫ80°C and cultured monthly on Sabouraud dextrose (SAB; Difco) agar at 30°C. For routine culturing, a single colony was grown overnight in synthetic dextrose (SD) minimal medium (0.67% yeast nitrogen base without amino acids, 2% dextrose, adjusted to pH 6; Difco) at 30°C and then diluted to an optical density at 600 nm (OD 600 ) of 0.05 in prewarmed SD minimal medium. Where indicated, C. 
albicans was also grown in RPMI supplemented with L-glutamine and 3-(N-morpholino)propanesulfonic acid (BioWittaker) at 37°C to induce hyphal formation. Conditioned medium preparation. C. albicans SC5314 was grown aerobically in SD or RPMI medium for 24 h at 30°C or 37°C, respectively, and C. albicans A72 was grown in glucose-phosphate-proline (34) medium at 30°C for 24 h with and without 2 g/ml miconazole (Sigma). Microscopic analysis confirmed the presence of yeast (SD and glucose-phosphate-proline media) or hyphal (RPMI) cells following overnight growth. After centrifugation at 2,500 ϫ g for 15 min, the supernatant was adjusted to pH 6 and passed through a 0.22-m-pore-size filter. Sterile filtered supernatants (designated as conditioned or spent medium herein) were used immediately. Hydrogen peroxide, menadione, and plumbagin sensitivity assay. Overnight cultures were suspended in prewarmed SD minimal medium at an OD 600 of 0.05, and cells were allowed to grow at 30°C until an OD 600 of 0.15 was reached. The culture was divided equally, centrifuged at 2,500 ϫ g for 10 min, and resuspended in an equal volume of fresh or Candida-conditioned medium. Following 90 min of incubation at 30°C, cells were harvested, washed with phosphate-buffered saline (PBS), and resuspended in SD minimal medium at an OD 600 of 0.3. The culture was then challenged with 1.25 mM hydrogen peroxide, 0.6 mM menadione, or 0.05 mM plumbagin (final concentrations). Samples were taken before and after the addition of each stimulus at various times, diluted, and plated onto SAB plates. Viable counts were determined following incubation at 30°C for up to 48 h, and survival was expressed as a percentage of the viable cells at time zero. RNA extraction and relative RT-PCR. C. albicans was exposed to fresh or conditioned medium as described above, and RNAs were prepared using standard methodology (60). The quantity and quality of RNA were measured spectrophotometrically at 260 nm and 280 nm. Equal amounts of total RNA (2 g) were reversed transcribed into cDNAs using a Retroscript kit (Ambion). PCRs were performed initially using primers designed against the C. albicans elongation factor 1␤ gene (EFB1 forward primer, 5Ј-GAACGAATTCTTGGCTGAC; reverse primer, 5Ј-CATCAGAACCGAACAAGTC) to ensure that equal amounts of cDNA were used for each sample (59). If required, the amount of starting cDNA template was then adjusted accordingly. PCR analysis was performed with the following forward and reverse primers designed against the C. albicans superoxide dismutase (SOD) and catalase (CAT) genes: for SOD1, 5Ј-TTGAACAAGAATCCGAATCC and 5Ј-AGCCAATGACACCACAAG CAG; for SOD2, 5Ј-ACCACCCGTGCTACTTTGAAC and 5Ј-GCCCATCCA GAACCTTGAAT; for SOD4, 5Ј-CCAGTGAATCATTTGAAGTTG and 5Ј-A GAAGCACTAGTTGATGAACC; and for CAT1, 5Ј-ACACAGGAAATACCC AATGAG and 5Ј-GCATCAGCCAAGTCTTGAGAG. After initial denaturation at 95°C for 2 min, the samples were subjected to 30 cycles of denaturation at 95°C for 30 s, annealing at 55°C (EFB1 and CAT1), 58°C (SOD4), or 60°C (SOD1-2) for 30 s, and extension at 72°C for 30 s, with a final extension at 72°C for 2 min. PCRs lacking reverse transcriptase were subjected to PCR amplification to check for the presence of contaminating genomic DNA. In addition, the primers for EFB1 amplification were designed to flank an intron, thereby ensuring that the products were derived from cDNA as opposed to genomic DNA. Reverse transcription-PCR (RT-PCR) samples were resolved by agarose gel electrophoresis. 
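For reference, the primer pairs and annealing temperatures listed in the Methods above can be collected into a single lookup structure. The sketch below is illustrative only; the dictionary layout and names are not from the paper, while the sequences and temperatures are those quoted above.

```python
# Illustrative summary of the relative RT-PCR primer panel described above.
PRIMERS = {
    # gene: (forward 5'-3',            reverse 5'-3',            annealing temp, C)
    "EFB1": ("GAACGAATTCTTGGCTGAC",    "CATCAGAACCGAACAAGTC",    55),
    "CAT1": ("ACACAGGAAATACCCAATGAG",  "GCATCAGCCAAGTCTTGAGAG",  55),
    "SOD1": ("TTGAACAAGAATCCGAATCC",   "AGCCAATGACACCACAAGCAG",  60),
    "SOD2": ("ACCACCCGTGCTACTTTGAAC",  "GCCCATCCAGAACCTTGAAT",   60),
    "SOD4": ("CCAGTGAATCATTTGAAGTTG",  "AGAAGCACTAGTTGATGAACC",  58),
}

for gene, (fwd, rev, tm) in PRIMERS.items():
    print(f"{gene}: anneal at {tm} C, forward {fwd}, reverse {rev}")
```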
The predicted sizes of the RT-PCR products were as follows: EFB1, 242 bp; SOD1, 396 bp; SOD2, 437 bp; SOD4, 254 bp; and CAT1, 579 bp.

Statistical analysis. Student's t test was used to determine statistical significance between the experimental groups. Differences were considered significant if the P value was <0.05.

RESULTS AND DISCUSSION

Hydrogen peroxide sensitivity is growth phase dependent. S. cerevisiae stationary-phase cells are much less sensitive to oxidative stress than exponentially growing cells (29). Older cells not only tolerate higher levels of oxidative stress but also require higher concentrations of hydrogen peroxide to induce apoptosis (19,42). Given that C. albicans encounters oxidative stress following ingestion by professional phagocytes and through contact with hydrogen peroxide-producing bacteria, we were interested in determining whether the C. albicans response to oxidants was similar to that of S. cerevisiae. To this end, we subjected C. albicans exponential- or stationary-phase cells to externally added hydrogen peroxide and tested the viability of the cultures after 60 min of exposure (Fig. 1). Early-exponential-phase yeast cells were found to be significantly more susceptible to hydrogen peroxide than stationary-phase cells (15% versus 112% survival, respectively). As C. albicans proceeded through each phase of growth (the lag, exponential, and stationary phases), there appeared to be an accompanying increase in resistance to hydrogen peroxide. Our results therefore indicate that C. albicans cells exhibit a growth phase-dependent resistance to hydrogen peroxide that is similar to that of the budding yeast S. cerevisiae.

FIG. 1. Susceptibility of C. albicans cells to hydrogen peroxide. Cells were grown (30°C in SD medium) to the specified optical densities (OD600), harvested, and washed in PBS. Standardized cell suspensions (1 × 10^7 cells) were challenged with 1.25 mM hydrogen peroxide for 60 min at 30°C, and viable counts were determined following dilution and plating on SAB plates. Percentages of survival are expressed as the means ± standard deviations of triplicate samples. A survival rate of >100% reflects the inherent variability associated with the plating process between control and test cultures. **, P < 0.001 for sample survival compared to sample survival at an OD600 of 0.1 (Student's t test).

Conditioned medium protects exponential-phase cells from the lethal actions of hydrogen peroxide. Several mechanisms have been proposed to explain the reduced sensitivity of S. cerevisiae stationary-phase cells to oxidants. It has been shown, for example, that accumulation of the antioxidant metabolite trehalose during stationary phase can lead to increased resistance to oxidative stress (65,66). Previous studies have also suggested that oxidant sensitivity diminishes as cells enter stationary phase because cells are exposed to higher levels of endogenous ROS and thus generate an adaptive response (46). An alternative explanation can be derived from bacterial studies describing a link between quorum sensing (cell density-dependent molecules) and the stress response (5,18,23). We therefore investigated whether exposure to high levels of autoinducers (conditioned medium) would impart on cells an increased resistance to oxidative stress. Early-exponential-phase C. albicans yeast cells were exposed to fresh or conditioned medium (90 min), washed, and then treated with hydrogen peroxide (Fig. 2A).
The cell survival rate for C. albicans early-exponential-phase cells exposed to conditioned medium was significantly higher than that for control cells pretreated with fresh medium (101% and 11% survival, respectively). Similar results were found when yeast cells were exposed to conditioned medium generated from hyphal (RPMI at 37°C) cultures (Fig. 2A). Therefore, the ability to produce the protective factor(s) was not morphology dependent. Additional experiments confirmed that conditioned medium protects cells from oxidative stress in a dose-dependent manner (Fig. 2B). Yeast cells pretreated with conditioned medium which had been diluted with a volume of fresh medium and then challenged with a lethal dose of hydrogen peroxide exhibited lower survival rates than cells treated with undiluted samples (Fig. 2B). It is important to note that since the cells were washed prior to oxidant challenge, the presence of free radical scavengers is unlikely to be the factor(s) responsible for resistance. The results do suggest that conditioned medium has the ability to protect yeast cells from the lethal effects of hydrogen peroxide in a dose-dependent manner.

FIG. 2. Survival of early-log-phase C. albicans cells pretreated with fresh medium or 1-day-old culture supernatants and subsequently challenged with hydrogen peroxide or superoxide anion-generating agents. C. albicans cells were grown in SD medium at 30°C until an OD600 of 0.15 was reached. Cells were harvested and resuspended in either fresh medium (SD or RPMI) (A to C), filter-sterilized spent medium (A to C), or spent medium that had been diluted with a volume of fresh medium (1:2, 1:5, 1:10, and 1:20) (B). Following 90 min of incubation at 30°C, cells were harvested and washed in PBS, and standardized cell suspensions (1 × 10^7 cells) were challenged with hydrogen peroxide (1.25 mM for 80 min), menadione (0.6 mM for 60 min), or plumbagin (0.05 mM for 60 min). Yeast cell survival was assessed by dilution and plating on SAB plates. Percentages of survival are expressed as the means ± standard deviations of triplicate samples. A survival rate of >100% reflects the inherent variability associated with the plating process between control and test cultures. **, P < 0.001 for conditioned medium- compared to fresh medium-treated samples (Student's t test). F and S, fresh and spent medium, respectively; N, undiluted sample.

Conditioned medium protects C. albicans from superoxide anion-generating agents. An oxidative stress response can be triggered when cells sense an increase in ROS (9). Several redox-cycling agents (menadione and plumbagin) are known to sharply increase intracellular levels of superoxide anions (7,30). In S. cerevisiae, pretreatment with menadione induces an adaptive response that protects yeast cells from a subsequent challenge with hydrogen peroxide; however, cells treated with hydrogen peroxide are unable to survive menadione exposure (29). Since the response to these redox-cycling agents appears to be distinct, we determined whether C. albicans cells preexposed to conditioned medium were also resistant to the superoxide anion-generating agents menadione and plumbagin. C. albicans cells exposed to fresh medium were significantly more sensitive than conditioned medium-treated cells to both agents (Fig. 2C). This observation indicates that conditioned medium induces resistance to several forms of reduced oxygen.
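As a minimal sketch of how the survival percentages quoted in these experiments are obtained, the calculation below expresses viable counts after challenge as a percentage of the counts at time zero and reports the mean and standard deviation of triplicates. The CFU numbers are invented for illustration and are not data from the study.

```python
# Minimal sketch of the survival calculation described in the Methods:
# viable counts after challenge as a percentage of counts at time zero,
# reported as mean +/- standard deviation of triplicate samples.
from statistics import mean, stdev

cfu_time_zero  = [1.0e7, 0.9e7, 1.1e7]   # hypothetical triplicate counts before H2O2
cfu_after_h2o2 = [1.1e6, 0.9e6, 1.3e6]   # hypothetical triplicate counts after challenge

survival = [100.0 * after / before
            for after, before in zip(cfu_after_h2o2, cfu_time_zero)]

print(f"survival = {mean(survival):.0f} +/- {stdev(survival):.0f} %")
```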
Addition of the antioxidant alpha-tocopherol to conditioned medium does not neutralize the protective factor. C. albicans has been shown to possess an adaptive stress response to both hydrogen peroxide and superoxide generators (31). Low doses of these compounds can induce a response that protects cells from a subsequent challenge with a higher concentration of the same agent (31). Danley et al. (11) and Schröter et al. (61) have shown that early-log-phase C. albicans cells release ROS into the extracellular environment; however, ROS levels dramatically decline at higher cell concentrations. It is therefore conceivable that conditioned medium may contain sufficient levels of ROS to activate the adaptive stress response, resulting in increased oxidative stress resistance. To address this possibility, we added the oxidant scavenger alpha-tocopherol to conditioned medium and assessed whether neutralization of oxygen radicals would negate any protective response. Alpha-tocopherol is a naturally lipophilic molecule which can easily penetrate the plasma membrane and has been shown to protect cells from oxidative damage (6). Cells exposed to conditioned medium in the presence or absence of alpha-tocopherol were equally resistant to a subsequent lethal challenge with hydrogen peroxide (Fig. 3). These results indicate that conditioned medium is unlikely to mediate a protective response by exposing cells to sublethal concentrations of ROS. Protection is not due to the metabolic waste product ethanol. Yeast cells have evolved specific and overlapping strategies to defend themselves from the harmful effects of various stressors, including ethanol exposure, oxidant attack, and heat shock (9). Short-term exposure to ethanol (7% [vol/vol] for 30 min) leads to the induction of genes involved in ionic homeostasis, heat protection, trehalose synthesis, and antioxidant defense (1). Ethanol is produced in amounts proportional to the concentration of glucose in the medium, and high concentrations of ethanol can result in growth retardation (69). After 17 h of growth at 37°C, for example, C. albicans is able to produce 0.8% ethanol from a 2% glucose solution (69). Exposure to glucose-derived ethanol in the conditioned medium may therefore result in cross-protection against diverse stresses, including oxidative stress. To test whether preexposure to ethanol results in an increased resistance to oxidative stress, we exposed early-exponential-phase cells to various concentrations of ethanol (0.25 to 1%) and subsequently challenged the cells with a lethal dose of hydrogen peroxide. The addition of ethanol to fresh medium did not significantly alter the cells' ability to withstand an oxidative insult (Fig. 4). These results indicate that the physiological levels of ethanol found in conditioned medium are not sufficient to induce a protective response to oxidative stress. Protection is not a result of nutrient deprivation. The use of conditioned medium may create a nutrient-limiting environment that imparts on cells a higher resistance to hydrogen peroxide. For example, stationary-phase S. cerevisiae cells grown under glucose (0.5%)-limiting conditions exhibit higher levels of resistance to hydrogen peroxide than cells grown on 2% glucose (55). In order to confirm that cells exposed to conditioned medium were not simply responding to nutritional starvation, we asked the following question: would the addition of nutrients to conditioned medium negate any protection from oxidative stress? Early-exponential-phase C.
albicans yeast cells which were preexposed to conditioned medium with or without the addition of supplemented nutrients (10× concentrated medium [SD] diluted 10-fold) were found to be equally resistant to hydrogen peroxide (82% ± 4% and 89% ± 10% survival, respectively). In contrast, cells exposed to fresh medium only exhibited a 10% survival rate. These results indicate that the observed improvement in cell survival was not due to nutritional effects. Conditioned medium treated with heat and proteolytic enzymes is able to protect cells from oxidative stress. The activity responsible for increased resistance to hydrogen peroxide was insensitive to heat (56°C for 2 h followed by 85°C for 30 min), proteinase K treatment (100 µg/ml for 2 h at 56°C), and changes in pH (pH 2 to 7) (Table 1).

TABLE 1. Conditioned medium was either left untreated (pH 2) or brought to a pH of 6 and subsequently treated with proteinase K (100 µg/ml for 2 h at 56°C) and/or heat (56°C for 2 h followed by 85°C for 30 min). Hydrogen peroxide susceptibility was determined according to the legend to Fig. 2. Percentages of survival are expressed as the means ± standard deviations of triplicate samples.

These results indicate that the protective substance is unlikely to be a protein; however, the data do not exclude the possibility that a peptide is the responsible factor. Although C. albicans has been reported to secrete a mating type pheromone (4, 38, 53), we do not believe that this diffusible peptide is responsible for increased resistance to oxidative stress. Firstly, most clinical isolates of C. albicans (including SC5314) are heterozygous for the MTL locus (a/α) (24, 37) and thus will not secrete mating type-specific pheromones. Secondly, since white-opaque switching is inhibited in a/α strains (38), these cells will be unable to become mating competent and therefore will not be pheromone responsive. Finally, transcription profiling has indicated that the genes induced in response to the C. albicans alpha factor (4) are different from those induced by conditioned medium (see Fig. 6). Collectively, these studies indicate that the conditioned medium used in this study (harvested from C. albicans SC5314) is unlikely to contain mating type pheromones, and as such, it is doubtful that they are responsible for oxidative stress resistance. QSMs partially protect cells from hydrogen peroxide toxicity. In bacteria, cell-to-cell communication, also referred to as quorum sensing, has been shown to be involved in regulating a range of cellular functions, including bioluminescence, virulence factor production, biofilm development, and oxidative stress resistance. Several studies have shown that the Pseudomonas aeruginosa quorum-sensing molecules (QSMs) 3-oxododecanoyl-homoserine lactone (3-oxo-C12-HSL) and butyryl-homoserine lactone (C4-HSL) are necessary for optimal resistance to hydrogen peroxide and the superoxide anion-generating agent phenazine methosulfate (5, 18, 23). C. albicans is known to produce three QSMs, namely tyrosol, farnesol, and farnesoic acid (8, 20, 52). These autoregulatory substances accumulate during cell proliferation, and upon reaching a certain threshold, are known to regulate several cell density-dependent phenomena. In view of the fact that hydrogen peroxide resistance correlates with QSM accumulation, we tested the effect of cell-cell signaling molecules on the ability of cells to withstand oxidative stress. We initially focused our studies on the isoprenoid alcohol farnesol, which is produced enzymatically from the sterol biosynthetic intermediate farnesyl pyrophosphate (21). Farnesol is reported to accumulate to a maximum level of 10 to 50 µM during stationary phase (20), and only the E,E isomer possesses QSM activity (62).
Exponential-phase cells pretreated with physiological levels of (E,E)-farnesol (17.5 and 35 µM) were significantly more resistant than control cells to oxidative stress (Fig. 5A). The addition of farnesol to fresh medium, however, did not restore hydrogen peroxide resistance to the levels seen with Candida-conditioned medium. At the concentrations tested (up to 35 µM), farnesol did not alter the growth rate of the cells (data not shown). Furthermore, although farnesol is known to influence the yeast-to-hypha conversion at the concentrations used in this study (20), control and farnesol-treated cells appeared as budding yeast cells before and after exposure to either fresh or conditioned medium. This was not surprising since the conditions used (SD medium, pH 6, at 30°C and RPMI medium, pH 6, at 30°C) do not normally stimulate yeast-to-hypha morphogenesis (13). The inability of farnesol to completely mimic the properties of conditioned medium raises the possibility that other molecules are partly responsible for conferring oxidative stress resistance or that the conditioned medium used contains higher levels of farnesol than those tested. Drugs that block the sterol biosynthetic pathway beyond farnesyl pyrophosphate cause an increase in intracellular and extracellular farnesol levels (21, 22). Miconazole (0.5 µM), for example, has been shown in C. albicans A72 to increase basal farnesol levels (127 µg per gram [cell dry weight]) 44-fold (22). In order to bolster the supposition that farnesol is linked to conditioned medium's protective effect, we tested whether conditioned medium generated from azole-treated cells would provide greater levels of protection compared to conditioned medium from untreated cells. We therefore exposed C. albicans strain A72 to the fungistatic drug miconazole (2 µg/ml) for 24 h and generated conditioned medium by filter sterilizing the culture supernatants. Viable cell counts were ca. 4% those of the untreated control culture (1.33 × 10^8 ± 5.60 × 10^7 CFU/ml and 5.88 × 10^6 ± 1.86 × 10^6 CFU/ml for control and azole-treated cells, respectively). To test whether higher farnesol levels would increase the level of oxidative stress resistance, we exposed exponential-phase cells to conditioned medium generated from azole-treated cells for 90 min and subsequently exposed the cells to a lethal dose of hydrogen peroxide (1.25 mM for 60 min). Since the conditioned media were generated from cultures with different cell densities, we normalized the cell survival rate (78% ± 10% and 86% ± 2% for azole-treated and control-treated cells, respectively) to the number of cells originally found in the conditioned medium culture. Cells exposed to conditioned medium from azole-treated cells were 20-fold more resistant to hydrogen peroxide than control-treated cells (data not shown). These results strengthen the hypothesis that the presence of farnesol in the conditioned medium is at least partly responsible for oxidative stress resistance. Farnesol (100 µM) has been shown recently to activate the C.
albicans HOG1 (hyperosmotic glycerol) mitogen-activated kinase signal transduction pathway (63). Phosphorylation and translocation of Hog1 to the nucleus results in activation of the general stress response and the phenomenon of stress cross-protection (63). Induction of the core stress response allows cells challenged with a mild stress to acquire resistance to a stronger, seemingly unrelated stress. This is in contrast to the adaptive response, in which pretreatment of cells with a nonlethal stress stimulates adaptation that protects cells from a potentially lethal dose of the same stress (31). Low doses of hydrogen peroxide (0.4 mM), however, do not activate C. albicans Hog1, indicating that adaptation to an oxidative stress is not mediated through the Hog1 stress-activated kinase pathway (63). Since farnesol can increase endogenous levels of ROS (41) and can activate Hog1, it is possible that farnesol may stimulate cell survival through a Hog1-independent adaptive response to oxidative stress and/or through activation of the Hog1-dependent general stress response. Tyrosol [2-(4-hydroxyphenyl)ethanol], another C. albicans QSM, has been reported to interfere with the phagocytic respiratory burst (10) and can act as an antioxidant scavenger (47). Since tyrosol can act as an antioxidative agent, it was of interest to test whether the protective factor or signaling molecule in conditioned medium was tyrosol. The addition of tyrosol to fresh medium (0 to 25 µM) did not change the cells' susceptibility to hydrogen peroxide (Fig. 5B). Chen et al. (8) reported that C. albicans grown in synthetic minimal medium at 30°C accumulates tyrosol to a maximum level of approximately 3 µM. Therefore, the levels tested were greater than those typically found in conditioned medium. In summary, these experiments indicate that in contrast to tyrosol, the C. albicans QSM farnesol may confer a capacity to resist an oxidative insult. Conditioned medium induces the expression of antioxidant-encoding genes. To protect against the damaging effects of ROS, cells have evolved specific defense mechanisms which involve the synthesis and/or activation of protective enzymes or molecules (45). In P. aeruginosa, quorum-sensing circuits are essential for the optimal transcription of two superoxide dismutase genes (sodA and sodB) and the major catalase gene katB (18). C. albicans has enlisted several classes of antioxidant enzymes to defend against a variety of ROS; however, superoxide dismutases (Sod1-6p) and catalase (Cat1p) are the primary enzymes involved. We therefore analyzed, by relative RT-PCR, the expression of genes encoding the enzymatic mechanisms responsible for eliminating hydrogen peroxide and superoxide in cells exposed to conditioned medium. Relative RT-PCR analysis of RNA samples extracted from C. albicans cells exposed to fresh or conditioned medium revealed differences in the expression patterns of key antioxidant enzymes (Fig. 6). The expression of catalase (CAT1; also known as CTT1), and to a lesser extent, superoxide dismutase (SOD1, -2, and -4) was increased during exposure to conditioned medium (Fig. 6). Commercial farnesol (up to 35 µM), however, did not induce any noticeable change in the levels of antioxidant gene expression (data not shown).
These results suggest that conditioned medium contains a factor that is capable of regulating, at the transcriptional level, antioxidant-encoding genes with activities responsible for detoxifying both superoxide and hydrogen peroxide. Antioxidant genes are expressed divergently under different growth conditions, and exposure to certain stresses can induce their expression (35, 43, 50). Alterations in the expression profiles of these genes may therefore be a consequence of nutrient depletion, exposure to metabolic waste products, or unknown stressors present in the conditioned medium. C. albicans catalase, which promotes the conversion of hydrogen peroxide to water and molecular oxygen, has been shown to be essential for peroxide resistance and protection against macrophage killing (68). Interestingly, the induced superoxide dismutases (encoded by SOD1-6) are located in both the cytoplasm (Sod1p and Sod4p) and the mitochondrial intermembrane space (Sod2p) (27, 43, 56). Previous studies have shown that the sod1 and sod2 null mutants both display heightened sensitivity to menadione (26, 28); however, additional phenotypes suggest that Sod2p is primarily responsible for scavenging intracellularly produced superoxides and that Sod1p plays an important role in removing extracellular, macrophage-generated superoxide (26, 28). Although SOD4 (also known as orf19.2062 and orf6.7493) is regulated during phenotypic switching (36), a role for this isozyme remains to be established. Together, these results clearly emphasize the importance of enzymatic defense mechanisms and provide a possible explanation for conditioned medium protection against oxidative stress.

FIG. 6. Relative RT-PCR analysis of antioxidant gene expression. C. albicans was grown to early log phase (OD600 of 0.15) at 30°C in SD medium, harvested, and resuspended in either fresh or conditioned SD medium. Following 90 min of incubation at 30°C, cells were harvested for RNA analysis. Lane 1, negative template control; lane 2, cells exposed to fresh medium minus RT control; lane 3, cells exposed to conditioned medium minus RT control; lane 4, RT-PCR products from analysis of RNAs isolated from cells exposed to fresh medium; and lane 5, RT-PCR products from analysis of RNAs isolated from cells exposed to conditioned medium. EFB1, elongation factor 1β gene; SOD1, copper/zinc-superoxide dismutase gene; SOD2, manganese-superoxide dismutase gene; SOD4, copper/zinc-superoxide dismutase gene; and CAT1, catalase gene. M, molecular marker.
2014-10-01T00:00:00.000Z
2005-10-01T00:00:00.000
{ "year": 2005, "sha1": "9918316dde2bfe7063b493c65856e410f2aaf2a6", "oa_license": null, "oa_url": "https://ec.asm.org/content/4/10/1654.full.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b86a800c0dc54db630588f30929f53e3276ee98e", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
119724105
pes2o/s2orc
v3-fos-license
Finite element procedures for computing normals and mean curvature on triangulated surfaces and their use for mesh refinement In this paper we consider finite element approaches to computing the mean curvature vector and normal at the vertices of piecewise linear triangulated surfaces. In particular, we adopt a stabilization technique which allows for first order $L^2$-convergence of the mean curvature vector and apply this stabilization technique also to the computation of continuous, recovered, normals using $L^2$-projections of the piecewise constant face normals. Finally, we use our projected normals to define an adaptive mesh refinement approach to geometry resolution where we also employ spline techniques to reconstruct the surface before refinement. We compare or results to previously proposed approaches. Introduction Our aim in this paper is to apply finite element techniques for computing geometrical quantities of interest in computer graphics applications, and to show that they can give accurate results, indeed more accurate that classical approaches. We restrict ourselves to closed surfaces approximated by piecewise linear simplices, and on such surfaces we consider three issues: • accurate computation of surface normals; • adaptive refinement techniques for resolving the curvature. We discretize the normal and curvature vectors using a piecewise linear finite element method based on tangential differential calculus, following the approach initiated by Dziuk [9]. This results in piecewise linear, continuous, vector fields on the discrete surface. In order to make comparisons with standard methods of computing curvature and normals, which are typically only represented at the vertices of the triangulated surface, we will focus mainly on the nodal values of the finite element fields. Mean curvature. The mean curvature vector on a discrete surface plays an important role in computer graphics and computational geometry, as well as in certain surface evolution problems, see, e.g. [4,5,6,8,10,11,25]. It can be obtained by letting the Laplace-Beltrami operator act on the embedding of the surface in R 3 , and various formulas based on this fact have been suggested in the literature, see [21] and the references therein. It is known that the standard mean curvature vector based on the finite element discretization of the Laplace-Beltrami operator on a piecewise linear triangulated surface cannot be expected, in general, to give any order of convergence in the L 2 norm. More generally, for triangulated piecewise polynomial surfaces of order k the expected convergence in L 2 norm is k − 1, cf. [15,7]. Convergence will also not occur in other standard discretization methods without restrictive assumptions on the mesh, see [29]. In this paper we employ a stabilized piecewise linear finite element method first suggested in [13] for approximation of the mean curvature vector, giving first order convergence in the L 2 norm for piecewise linear surfaces. The stabilization consists of adding suitably scaled terms involving the jumps in the tangent gradient of the discrete mean curvature vector in the direction of the outer co-normals at each edge in the surface mesh to the L 2 -projection of the discrete Laplace-Beltrami operator used to compute the discrete mean curvature vector. Normal vectors. 
Accurately determining the vertex normals on triangulated surfaces is of great importance in computer graphics for the computation of smooth shading [12,24], and it is important in surface meshing/re-meshing [26,23,28] as well as smoothing (fairing) techniques [16]. We here extend the method suggested for computing the mean curvature vector, which can be seen as a general stabilization approach, to the problem of computing accurate vertex normals by stabilized L 2 -projections. Adaptive mesh improvement. Mesh improvement when the geometry is given by an analytical expression (or is otherwise known) can be obtained by local refinement of the simplices, putting new vertices on the known surface. The goal is then to resolve the curvature of the mesh in some predefined way. We suggest an approach based on the difference between the piecewise constant facet normals and the computed finite element normal field. This gives an estimate of the error in discrete facet normals which is closely related to the curvature of the geometry as will be discussed below. If the geometry is not a priori known but we are simply given a point cloud or a mesh, interpolation using vertex normals is standard, cf., e.g., Boschiroli et al. [3]. We combine one such approach, the PN triangle of Vlachos et al. [28], with our finite element normal fields and adaptive scheme in order to enhance the refined geometry. The outline of the remainder of the paper is as follows: In Section 2 we introduce the discrete surface approximations, in Section 3 we define the stabilized mean curvature vector, in Section 4 we discuss a different schemes for computing vertex normals, including our stabilized projection method, in Section 5 we present an adaptive algorithm for resolving curvature, and in Section 6 we give some representative numerical results. Meshed surfaces Consider an embedded orientable closed surface R 3 ⊃ Σ ∈ C 2 with exterior unit normal n. Let φ be the signed distance function such that ∇φ = n on Σ and let p( for δ > 0 of Σ. Then there is δ 0 > 0 such that the closest point mapping p(x) assigns precisely one point on Σ to each x ∈ U δ 0 (Σ). We triangulate Σ using a elementwise planar mesh K h to obtain a quasiuniform triangulated surface Using the closest point mapping any function v on Σ can be extended to U δ 0 (Σ) using the pull back v e = v • p on U δ 0 (Σ) (1) and the lifting w l of a function w defined on Σ h to Σ is defined as the push forward 3 Approximation of the mean curvature vector The continuous mean curvature vector We define the tangential surface gradient ∇ Σ by ∇ Σ := P Σ ∇, where ∇ is the R 3 gradient and P Σ (x) = I − n(x) ⊗ n(x) is the projection onto the tangent plane T Σ (x) of Σ at point x ∈ Σ. The mean curvature vector H : Σ → R is then defined by where x Σ : Σ x → x ∈ R 3 is the coordinate map of Σ into R 3 and ∆ Σ = ∇ Σ · ∇ Σ is the Laplace-Beltrami operator. The relation between the mean curvature vector and mean curvature is given by the identity where κ 1 and κ 2 are the two principal curvatures and (κ 1 + κ 2 )/2 =: H is the mean curvature, see [4]. The mean curvature vector satisfies the following weak problem: find where ∇ Σ w = w ⊗ ∇ Σ for a vector valued function w and is the L 2 -inner product on the set ω with associated norm Given the discrete coordinate map x Σ h : Σ h x → x ∈ R 3 and a discrete projection operator P Σ h = I − n h ⊗ n n , where n h denotes the piecewise constant facet normals, we define the stabilized discrete mean curvature vector H h as follows. 
Let V h be the space of piecewise linear continuous functions defined on K h and seek where ∇ Σ h = P Σ h ∇ and the stabilization term J h (·, ·) is defined by Here γ ≥ 0 is a stabilization parameter and E h = {E} is the set of edges in the partition K h of Σ h . The jump of the tangential derivative in the direction of the outer co-normals at an edge E ∈ E h shared by elements K 1 and K 2 in K h is defined by where u i = u| K i , i = 1, 2, and t E,K i are the co-normals, i.e., the unit vectors orthogonal to E, tangent and exterior to K i , i = 1, 2, see Figure 1. This stabilization method allows for proving first order convergence of the curvature vector, ‖H − H_h^l‖_Σ ≲ h; see [13]. Implementation issues Using the standard Galerkin approximation, where ϕ i are the finite element basis functions and U i the nodal approximations of u, we have that where we define the tangential gradient of the basis function by The tangential derivative of the basis function is given by For vector-valued unknowns u we have u ≈ Φu where u denotes nodal values and and using the notation t 1 and t 2 for the two co-normals on a given edge E, we define and the discrete stabilization matrix is given by The linear system corresponding to (6) becomes where M is the so-called mass matrix, given by γ H is the mean curvature specific stabilization factor, S is the discrete Laplace-Beltrami operator defined by x is the coordinate vector of the nodal positions in the mesh, and H denotes the vector of vertex values of the approximate mean curvature vector. Alternative approximations of the mean curvature vector There exist several well-known approaches to mean curvature estimation; for an extensive overview, see [18]. In the context of finite elements an alternative to ours is proposed by Heine in [14]. Smooth surface fit. Curvatures can be computed using a locally fitted quadratic function around a point x i , with u and v local coordinates of the tangential plane to x i such that f (0, 0) = 0. The tangential plane is determined using one of the edges connected to x i and the normal at the same point. The idea is to compute the shape operator or Weingarten map of this function and subsequently the curvature. See [1, Chap. 8.5] for further details. The discrete local Laplace-Beltrami operator. Let K denote the Laplace-Beltrami operator so that K(x) = 2H(x)n(x) at a given point x on the surface. On triangulated surfaces, one can use Gauss' theorem to extract a discrete version of this operator in the nodes x i of the mesh, cf. Meyer et al. [21]. The integral of K over the discrete 1-ring surface M on a triangulated surface is then given by where the angles α ij and β ij are opposite to the edge i j and N is the set of neighbour vertices to x i , see Figure 2a. Given some definition A V of the local area surrounding a vertex x i we can then define the discrete approximation K h of K as In [21], it is proposed to use the Voronoi regions as the definition for the local area, and an algorithm to improve the robustness for arbitrary meshes was provided. Similarly, Desbrun et al. [8] used the barycentric area to average the discrete Laplacian. In both cases, in order to compute the vertex normal, K h (x i ) is simply normalized and cases where the curvature is zero are treated by computing the mean face-normal of the 1-ring neighbourhood.
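For concreteness, the one-ring cotangent construction just described can be sketched in a few lines. The sketch below is not the paper's code: it uses the barycentric one-ring area (the choice attributed to Desbrun et al. above) rather than the Voronoi regions, and the array-based mesh representation and function name are our own assumptions.

```python
# Sketch of the discrete local Laplace-Beltrami (cotangent) operator at the vertices.
# Assumes verts: (n, 3) float array and faces: (m, 3) int array; barycentric one-ring areas.
import numpy as np

def dllb_curvature_vectors(verts, faces):
    n = len(verts)
    accum = np.zeros((n, 3))   # sum_j (cot a_ij + cot b_ij) (x_i - x_j), built face by face
    area = np.zeros(n)         # barycentric one-ring area A_V per vertex

    for tri in faces:
        p = verts[tri]
        tri_area = 0.5 * np.linalg.norm(np.cross(p[1] - p[0], p[2] - p[0]))
        for k in range(3):
            i, j, opp = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            # cotangent of the angle at 'opp', the vertex opposite edge (i, j) in this face
            u, v = verts[i] - verts[opp], verts[j] - verts[opp]
            cot = np.dot(u, v) / np.linalg.norm(np.cross(u, v))
            accum[i] += cot * (verts[i] - verts[j])
            accum[j] += cot * (verts[j] - verts[i])
            area[i] += tri_area / 3.0   # barycentric share of the face area

    # K_h(x_i) approximates 2 H(x_i) n(x_i); normalizing each row gives the DLLB vertex normal
    return accum / (2.0 * area[:, None])
```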
The mean (discrete) curvature at the vertices is then given by Stabilized projection of the normal field In analogy with (6) we define the recovered discrete normal vector n h as follows: find where n K is the piecewise constant exterior normal to the facet elements K. The corresponding linear system becomes where γ n is the normal-specific stabilization factor, n h the vector of vertex normals, and Note that (12) can be efficiently solved using a conjugate gradient method since M is symmetric, positive definite and sparse. When translating the computed normal vector field to a set of discrete vertex normals, these will here be normalized (the nodal vectors contained in n h are not in general of unit length). Alternative approaches to computing vertex normals Traditionally, vertex normals are estimated either from a local neighborhood of surrounding face normals using some type of local averaging, see e.g., [17,23] and the references therein. Other estimation methodologies also exists such as local smooth surface fits, see, e. g., [20]. We use the notations for the local vertex normals introduced in [23] and give a brief description; see Figure 2b for an explanation of the notations used. Mean weighted equally. Arguably, the most widespread estimation of the vertex normal was introduced by Gouraud [12] as where n i is the face-normal of triangle i , n is the total number of triangles that share a common vertex for which the vertex normal is to be estimated and |.| denotes the norm. Note that we shall subsequently omit making the normalization step of the vertex normal explicit and assume n =n := n/|n|. Mean weighted by angle. A vertex normal approximation using angles between the inner edges was proposed by Thürrner and Wüthrich [27]. where α i is the angle between two edges e k and e k+1 of a face i sharing the vertex. Mean weighted by sine and edge length reciprocals. Max [19] proposed several methods of weighting the face normals, one of which is to weight by the sine and edge length reciprocals to take into account the difference in lengths of surrounding edges. Mean weighted by areas of adjacent triangles. Another method proposed by Max [19] is to weight the normals by the area of the face. where the symbol × denotes the vector cross product. Mean weighted by edge length reciprocals. Max [19] also proposed to just use the edge length reciprocals as weights. Mean weighted by square root of edge length reciprocals. Finally, Max [19] also suggested to use the square root of the length reciprocals. Normal from the discretized local Laplace-Beltrami operator. Another approach is to define the normal using the discretized local Laplace-Beltrami operator (DLLB) defined in Section 3.3. The normal is defined by normalizing the discrete mean curvature vector. In the numerical example below, Section 6, we compare the accuracy of these different approaches. Error estimate We base our adaptive algorithm on the Zienkiewicz-Zhu approach [30] which employs the difference between recovered derivatives and actual discrete piecewise derivatives of a finite element solution. By analogy we consider the piecewise constant normals to play the role of the piecewise derivatives, and compare these to the L 2 −projected normals. Since we are focusing on vertex normals, and since we will in the following compare methods that only produce such normals, we define a norm which is an approximation of the L 2 -norm, where meas(K) denotes the area of K and x i K the vertex coordinates on K. 
This represents a Newton-Cotes numerical integration scheme for the L 2 (Σ h )-norm using the vertices as integration points. The error in normals is thus approximated and we aim at achieving n h − n K L 2 h ≤ TOL where TOL is a given tolerance. We note that we also have n e − n K L 2 h ≈ h∇ Σ n Σ where h is the local mesh size and ∇ Σ n is the curvature tensor, which indicates that we counter large curvature by reduced mesh size for resolution of the geometry. Triangle refinement In cases where the exact geometry is not accessible, we consider triangle refinement approaches that utilise vertex normals for interpolation. An overview of such methods is given by Boschiroli et al. in [3]. Nagata [22] proposed a simple quadratic interpolation of triangles using vertex normals and positions at the end-nodes. The approach by Nagata depends on a curvature parameter that fixes a curvature coefficient in order to stabilize the method. The curvature coefficient is highly dependent on the vertex normal, and in cases where normals are near parallel, the method cannot capture inflections and without a stabilizing parameter, cusps will be introduced to the surface, see [23] where the authors point out this problem and suggest a possible solution. The solution suggested in [23] eliminates the problem of cusps in the interpolated surface but also eliminates the inflection, since the segment becomes linear. Another approach is to use higher order interpolation which are able to capture inflection points. PN triangles Vlachos et al. [28] proposed a cubic interpolation scheme that similarly to Nagata only depends on the positions and vertex normals of a triangular patch. We here write their algorithm in a vectorized manner. Let then b : R 2 → R 3 denote a cubic triangular patch given by Here U is the matrix representation of the parameters defined by where u = i/N , v = j/N for i, j = {0, 1, . . . , N } such that w := 1 − u − v ≥ 0. Here N gives a subtriangulation of the initial patch, see Figure 3. B denotes the cubic coefficients in matrix form and is given by where b denote the control points of the control grid for the PN triangle, see Figure ( where p i and n i are the input corner points and normals. Finally the total set of interpolated points is given as a matrix product by Note that U can be evaluated for a certain number of refinements N in a pre-processing step. In the local refinement section of this paper we use N = 1 see Figure 5. As for the internal vertex normal computation, we do not interpolate the normals locally, instead we compute n h using (12) for the total mesh in each iteration. The reason behind why we limit the tessellation step to 1 is the subsequent complexity of the local refinement procedure. Local refinement procedure Since the PN refinement with N = 1 splits the face of a flat triangle into four child elements, we need a way of handling the hanging nodes. In this work we adapt the Red-Green refinement method proposed by Banks et al in [2]. This method preserves the aspect ratio of the initial mesh which is crucial in order to secure the accuracy of the associated finite element method. Geometry We choose to analyze the errors on an implicitly defined torus which we can modify in order to generate slightly more complex features. The surface equation for the torus is given by where R is the torus radius, r the tube radius and a is a "squish-factor" used to squish the torus in the z-direction in order to induce a higher curvature on the inside and outside, see Figure 6. 
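The exact form of the torus equation did not survive extraction above, so the sketch below assumes one plausible level-set form, φ(x, y, z) = sqrt((sqrt(x² + y²) − R)² + (a z)²) − r, purely for illustration; for a = 1 this reduces to the exact signed distance of the standard torus. The function names and the finite-difference gradient used to obtain exact normals are our own choices, not the paper's.

```python
# Assumed z-squished torus level set (illustrative; the paper's exact formula is not
# recoverable from the extracted text): phi = sqrt((sqrt(x^2+y^2) - R)^2 + (a*z)^2) - r.
import numpy as np

def torus_phi(p, R=1.0, r=0.5, a=1.0):
    x, y, z = p[..., 0], p[..., 1], p[..., 2]
    return np.sqrt((np.sqrt(x**2 + y**2) - R) ** 2 + (a * z) ** 2) - r

def torus_exact_normal(p, R=1.0, r=0.5, a=1.0, eps=1e-6):
    """Unit normal as the normalized gradient of the level-set function (central differences)."""
    p = np.asarray(p, dtype=float)
    grad = np.empty_like(p)
    for k in range(3):
        dp = np.zeros(3)
        dp[k] = eps
        grad[..., k] = (torus_phi(p + dp, R, r, a) - torus_phi(p - dp, R, r, a)) / (2 * eps)
    return grad / np.linalg.norm(grad, axis=-1, keepdims=True)
```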
In the following, the torus will be analyzed with a = 1 and a = 4, in order to compare errors with respect to strongly and smoothly varying curvature. Vertex normal error What follows is a comparison of different vertex normals with the exact normal. The measure for the mesh-size used in this context is defined as where N v denotes the number of vertices in the mesh. Using an implicitly defined surface Σ = {x : φ(x) = 0}, where φ is a signed distance function with the property |∇φ| = 1, we have that n(x Σ ) = ∇φ(x Σ ). As discussed above, we will use (20) and define the error as where n a is the approximate and n e the exact normal defined by n e = ∇φ, computed at the vertex i using n i e = ∇φ(x i ). The convergence rates are defined as Evaluation of the accuracy of computed vertex normals The vertex normal error analysis was done on an unstructured mesh of a torus with R = 1, r = 1/2, and a = {1, 4}, see Figure 6. The convergence of the L 2 errors defined in (34) is shown in Figure 7, where it can be seen that the stabilized L 2 -projection of the normals converges optimally. The raw data for this graph are available in Table 1. The relative difference between the stabilized L 2 normals and the next best traditional method n MWA can be seen in Table 2, where we can see a relative error decrease from MWA of ∼ 29% to ∼ 88% depending on mesh-size and geometry. The convergence rates can be viewed in Table 3. In the next section we shall analyze the impact of the stabilization on the normal errors. Effect of the stabilization on the accuracy of the computed normal In this section we analyze the influence of the stabilization factor on the vertex normal error numerically by employing a golden section search to find the optimal stabilization factor γ * n that minimizes the normal error defined in (34), where we use γ 0 n = 0 and γ 1 n = 1. This is done for several mesh-sizes and on a torus with a = 1 and a = 4, see Figures 8 and 9. Notice how the curves become more planar, i.e., the choice of a "good" γ n becomes less sensitive as h decreases. The error difference is shown in Figure 10 and Table 4, where L 2 is the L 2 error, defined in (34), of the L 2 vertex normals without stabilization and L 2 stab is the error of the stabilized L 2 vertex normal, stabilized with an optimal stabilization factor γ * . Interpolation In a 2D case we can see in Figure 11 how the choice of vertex normals affects the resulting cubic interpolation. The initial mesh is coarse and the (unstabilized) L 2 -projected normals do not depend only on the nearest neighbors of each vertex but on the mesh globally. The resulting difference is apparent.

Figure 11: 2D cubic Hermite interpolation of a coarse line segment using two different approximations of the vertex normals.

We compare the impact of different vertex normals on the interpolation by measuring the geometrical error, where x Σ (n) denotes the discrete surface interpolated with a particular normal approximation method. We measure the L 2 h -norm of the signed distance. The refinement algorithm employed is the PN triangles using 1 tessellation per face, see Figure 5. The mesh-size in this section is defined as where N e denotes the number of elements and A K is the area of the K-th element. The initial mesh-size is h = 0.1618 and the initial L 2 h -norm of the signed distance error is ε geom = 0.0863. See Figure 12 for the convergence comparison, Table 5 for the regular refinement data and Table 6 for the local refinement data.
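The golden section search over the stabilization factor used above is straightforward to write down. In the sketch below the error functional is abstracted into a user-supplied callable (it would wrap the whole assemble-project-compare pipeline); that abstraction, and the names used, are assumptions of this sketch rather than anything specified in the paper.

```python
# Golden section search on [0, 1] for the stabilization factor minimizing a normal-error
# functional; `normal_error` is a placeholder callable supplied by the caller.
import math

def golden_section_minimize(normal_error, lo=0.0, hi=1.0, tol=1e-4):
    invphi = (math.sqrt(5.0) - 1.0) / 2.0          # 1/phi, about 0.618
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = normal_error(c), normal_error(d)
    while (b - a) > tol:
        if fc < fd:                                # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = normal_error(c)
        else:                                      # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = normal_error(d)
    return 0.5 * (a + b)

# usage sketch: gamma_opt = golden_section_minimize(lambda g: l2h_normal_error(mesh, gamma_n=g))
```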
Examples of interpolation using PN triangles with local refinement are shown in Figure 13 for the torus, Figure 14 for the Utah teapot and Figure 15 for the Stanford bunny.

Table 6: Local refinement of a torus with a = 4, initial mesh-size of 0.0321 and initial geometrical error of 0.0863.

The local refinement method is compared to local refinement with projection to the exact surface, see Figure 16. We compare the approximate normal error with the exact normal error by computing the effectivity index, defined in terms of the exact surface normal n e , the face normal n f , and the recovered stabilized L 2 -projected normal n L 2 ,stab ; see Table 7.

Table 7: Local refinement with projection to the exact surface of a torus with a = 4, initial mesh-size of 0.1618.

Mean curvature The mean curvature is computed on a structured and unstructured torus with R = 1, r = 1/2 and a = 1. We compare the mean curvature approximation to the exact mean curvature, the smooth surface fit approach (SSF) and the discrete local Laplace-Beltrami (DLLB) approach described in Section 3.3, and our stabilized discrete curvature vector solving (10). In the last case we compute the mean curvature H h through H h = (1/2) H h · n h , where n h denotes the normal computed using the stabilized L 2 -projection from (12). In our computational experience, this gives a more accurate result than the immediate H h = (1/2)|H h |. In Figure 17 we give iso-plots of the mean curvature. Figure 18 shows the convergence of mean curvature.

Table 9: Stabilization factor γ H as a function of mesh-size h.
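The two post-processing choices compared in the last paragraph differ only in how a scalar is extracted from the vertex values of the curvature vector; a minimal sketch follows (the array layout and names are our own, not the paper's).

```python
# Scalar mean curvature at the vertices from the discrete curvature vector H_h and the
# recovered stabilized L2-projected unit normals n_h, for the two variants compared above.
import numpy as np

def scalar_mean_curvature(H_h, n_h, use_projection=True):
    """H_h, n_h: (n, 3) arrays of vertex curvature vectors and unit normals."""
    if use_projection:
        # H = 0.5 * (H_h . n_h), the signed value reported above as the more accurate choice
        return 0.5 * np.einsum("ij,ij->i", H_h, n_h)
    # H = 0.5 * |H_h|, the immediate unsigned alternative
    return 0.5 * np.linalg.norm(H_h, axis=1)
```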
2017-03-16T17:36:27.000Z
2017-03-16T00:00:00.000
{ "year": 2020, "sha1": "3c00f6600c0717c3f4346eb63acf0d829f9280d2", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1703.05745", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3c00f6600c0717c3f4346eb63acf0d829f9280d2", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Mathematics" ] }
6351803
pes2o/s2orc
v3-fos-license
The clinical course of acute otitis media in high-risk Australian Aboriginal children: a longitudinal study Background It is unclear why some children with acute otitis media (AOM) have poor outcomes. Our aim was to describe the clinical course of AOM and the associated bacterial nasopharyngeal colonisation in a high-risk population of Australian Aboriginal children. Methods We examined Aboriginal children younger than eight years who had a clinical diagnosis of AOM. Pneumatic otoscopy and video-otoscopy of the tympanic membrane (TM) and tympanometry was done every weekday if possible. We followed children for either two weeks (AOM without perforation), or three weeks (AOM with perforation), or for longer periods if the infection persisted. Nasopharyngeal swabs were taken at study entry and then weekly. Results We enrolled 31 children and conducted a total of 219 assessments. Most children had bulging of the TM or recent middle ear discharge at diagnosis. Persistent signs of suppurative OM (without ear pain) were present in most children 7 days (23/30, 77%), and 14 days (20/26, 77%) later. Episodes of AOM did not usually have a sudden onset or short duration. Six of the 14 children with fresh discharge in their ear canal had an intact or functionally intact TM. Perforation size generally remained very small (<2% of the TM). Healing followed by re-perforation was common. Ninety-three nasophyngeal swabs were taken. Most swabs cultured Streptococcus pneumoniae (82%), Haemophilus influenzae (71%), and Moraxella catarrhalis (95%); 63% of swabs cultured all three pathogens. Conclusion In this high-risk population, AOM was generally painless and persistent. These infections were associated with persistent bacterial colonisation of the nasopharynx and any benefits of antibiotics were modest at best. Systematic follow up with careful examination and review of treatment are required and clinical resolution cannot be assumed. Background Today, the majority of episodes of acute otitis media (AOM) in developed countries will resolve even if they are not treated with antibiotics. This conclusion is based on the findings of randomised placebo-controlled trials, longitudinal studies of initially withholding antibiotic treatment, and meta-analyses of randomised controlled trials. [1][2][3]. However, the outcomes in children from populations where there is an increased risk of complications remain uncertain. We have described the onset of early and severe otitis media in Australian Aboriginal children [4,5]. This is associated with dense colonisation of the nasopharynx with respiratory bacteria. In previous studies in this population, AOM (defined as the presence of an effusion plus bulging of the tympanic membrane (TM) or recent perforation) was frequently asymptomatic. AOM was often present on otoscopic examination 4 weeks after onset, despite antibiotic treatment [6]. Children with frequent episodes of bulging of their TM were those most likely to develop new perforations. However, we could not determine whether AOM in high-risk populations initially responded to treatment and then recurred, or persisted despite treatment. The aim of this study was to describe the clinical course of AOM and the associated bacterial nasopharyngeal colonization in Aboriginal children from a remote community in the period immediately after diagnosis. Setting The local Human Research Ethics Committee and the community-controlled Tiwi Health Board approved the study. 
It took place in the year 2000 in a remote Aboriginal community situated 70 km north of Darwin (population 1300). The community has an average of 30 births per year and an infant mortality rate of 30 per 1000 live births. The standard of housing is poor and overcrowding is common [7]. Participants We enrolled Aboriginal children younger than eight years who lived in the community if: i) they had AOM; ii) they were resident in the community; and iii) parents provided written consent for their participation. All children in this community develop otitis media by 12 weeks of age and around 50% experience a perforated TM in the first year of life [4]. Children in this study were similarly prone to severe infections and all had been treated for otitis media in the past. Staff at the community health centre and otitis media researchers working on other projects made the initial diagnosis. Health centre staff examined children's ears when they presented to the children's clinic either unwell or for follow-up of a medical condition. Researchers examined children's ears as part of a regular 4 weekly surveillance program. Children with otorrhoea that had persisted for longer than six weeks were not eligible unless they had a diagnosis of AOM in the other ear. We attempted to assess children every weekday they were in the study. The planned duration of follow-up was either two weeks (AOM without perforation), or three weeks (AOM with perforation). We followed children with persistent AOM until the AOM resolved, or the study period ended. Clinical assessments We used a questionnaire and review of the clinic notes to collect information about each child's current and past ear health. Wax and pus were removed from the ear canal under direct vision using a voroscope (WelchAllyn LumiView). A Siegel's speculum was used for pneumatic otoscopy. Images of the TM were video recorded and classified on a standardised data collection form. The position of the TM was described as retracted (R), neutral (N), mild bulging (B1), moderate bulging (B2), and marked bulging (B3). TMs that had a perforation that was associated with mild, moderate, or marked bulging (P1-P3) were considered to be functionally intact. Examples of these different positions and "pinhole perforations" are available in a training video [8]. Tympanometry (Grason Stadler GSI-38, Madison, Wisconsin, USA) was used in ears without any discharge. Tympanograms were classified according to a modified Jerger's classification [9]. Tympanograms with a canal volume of <0.3 cm 3 were excluded. We classified children according to the ear with the more severe acute infection. When an assessment was only successful in one ear, we used the diagnosis of that ear and assumed the disease was unilateral. Generally, the person who made the initial diagnosis started antibiotic treatment before referring the child to this study. Medical or nursing research staff prescribed additional antibiotics during the study according to population-specific, evidence-based guidelines [10]. We calculated compliance with prescribed antibiotics on a weekly basis (the number of doses reported as taken divided by the number of doses prescribed for that week). All families were reminded about their medication at each assessment and some had help in dispensing medicine from Aboriginal research assistants. Definitions Our classification of otitis media was based on current population-specific, evidence-based guidelines [10]. 
In this study (where close follow-up was ensured), we used the following criteria: i) Aerated middle ear-normal TM mobility and Type A or C tympanogram; ii) Otitis media with effusion (OME)-middle ear effusion behind an intact TM identified by an air-fluid level or bubble seen through a translucent TM or decreased mobility of the TM or type B tympanogram (admittance <0.2 mmho); iii) AOM without perforation-clinical diagnosis by health staff or moderate to marked bulging of an intact TM plus decreased TM mobility or Type B tympanogram; iv) AOM with perforation-evidence of recent TM perforation provided by clinical history and visualisation of fresh pus in the ear canal. The TM was intact if the perforation had already healed by the time of the assessment, and 'functionally intact' if the TM was bulging and the perforation could only be identified by pneumatic otoscopy; v) Chronic suppurative otitis media (CSOM)-TM perforation with otorrhoea present for more than six weeks; vi) Dry perforation-TM perforation with no discharge seen in the ear canal or within the middle ear space. AOM was cured when there was no bulging and no discharge present on examination. Similarly, we described an episode of AOM as improved if: i) the ear discharge had resolved but bulging of the TM persisted in AOM with perforation, or ii) the bulging of the TM was reduced to mild in AOM without perforation. Examples of mild, moderate, and marked bulging of the TM are available in our training video [8]. The outcome at "1 week" was determined by the examination that was closest to day 7 (at least 5 days after the diagnosis). The outcome at "2 weeks" was determined by the examination that was closest to day 14 (at least 10 days after diagnosis). Microbiology: specimen collection and processing We took swabs of the nasopharynx (and ear discharge if present) on up to five occasions: day 0 (the day of enrolment) and days 4-7, 10-14, 17-21 and 24-28. All swabs were then smeared for gram staining and frozen in 1.0 ml skim-milk-glucose-glycerol broth (SMGGB). We processed swabs (after completion of clinical observations) using standard methods that have been previously published [4]. We tested isolates of pneumococcus for sensitivity to oxacillin, penicillin, erythromycin, sulphamethoxazole, tetracycline, and chloramphenicol using a disc diffusion method (calibrated dichotomous susceptibility, CDS) [11]. Colonies resistant to oxacillin or penicillin were classified as penicillin resistant. Colonies resistant to three or more classes of antibiotics were classified as multi-resistant. Penicillin minimum inhibitory concentration (MIC) was determined by E-test and categorised according to the following breakpoints: susceptible <0.064 µg; intermediate resistance 0.064 -1.0 µg/ml; and high resistance >1.0 µg/ml. Features of participants and examinations We enrolled 31 children in this study and completed 219 assessments. Clinic staff referred 13 children and other researchers referred 18 children. Children referred from the clinic were more likely to have otorrhea (8/13 versus 6/18). The mean duration of follow up was 21 days (range 3-57 days). Most children were less than 2 years of age and had a past history of otorrhea (see Table 1). Overall, 230 tympanograms were recorded. Nearly all of these were Type B (223/230, 97%). Tympanograms were not done if ear discharge was present. Initial diagnosis of AOM Seventeen children had an initial diagnosis of AOM without perforation and 14 children had AOM with perforation. 
Nearly all children (27/31, 87%) had at least moderate bulging of the TM or recent discharge when they were first seen in this study; 13 had bilateral disease (42%). Symptoms of AOM could be assessed in 24/31 (77%) of study participants. Mothers reported otorrhea in 11/24 (46%) and ear pain in 7/24 (29%). Of the 14 children with AOM with perforation, two had a TM that appeared to have healed by the time of our examination and four had a TM that was functionally intact. All perforations seen were initially tiny in size (less than 2% of the area of the TM) and just antero-inferior to the centre of the TM. Antibiotic treatment Six children were known to be receiving antibiotics effective against respiratory pathogens at the time of the initial AOM diagnosis (amoxicillin 50 mg/kg/day). All the other children had received antibiotics for otitis media in the past but the time since their last treatment was not recorded. Following diagnosis, antibiotic treatment was started in an additional 22 children. Two of the three children not treated initially developed moderate bulging of the TM and started treatment on day 6-8 (A31 and A32). One child (A19) did not receive any antibiotics. The antibiotics prescribed were twice daily amoxicillin (25), amoxicillin-clavulanate (1), trimethoprim-sulphamethoxazole (3), or daily intramuscular procaine penicillin. Clinical outcomes AOM was persistent in most children from this high-risk population (see Table 2 and Table 3). Overall, 77% of children still had signs of ongoing inflammation at the 7 day and 14 day assessments. Very few ears returned to normal. Of the 438 ear examinations attempted during this study, only 5 (1%) were consistent with an aerated, intact TM (A03 on 1 occasion, A19 on 1 occasion, and A29 on 3 occasions). Even when the acute infection resolved, recurrence was common. Children who received more than 50% of the prescribed antibiotics had similar rates of treatment failure to the children who did not (75% vs 71%, risk difference 4%, 95% CI -33 to 41). Similarly, children with a past history of otorrhea had similar rates of treatment failure to the children who did not (68% vs 75%, risk difference -7%, 95% CI -43 to 30). There were also no obvious differences in the clinical course of the six children who were already on antibiotics when they were enrolled in the study. Seventeen children (55%) had a TM perforation documented at some point during the study. Of the 4 children with AOM with perforation who were followed for 6 weeks, 3 had healing and recurrence of their TM perforation documented. None of these four children had the typical otoscopic features of CSOM. In all cases the perforation was too small (usually a "pinhole") to allow adequate delivery of topical antibiotics to the middle ear space. We were able to take an initial swab in 27/31 children. All but one of these swabs were positive for at least one pathogen, and 18/27 (67%) were positive for all three otitis media pathogens; see Table 4. The prescription of antibiotics did not reduce carriage of these pathogens at follow-up. Onset and progression of AOM with perforation The changes in TM position and diagnosis over the 219 examinations are shown in Figure 1. Of the 14 children who entered the study with a diagnosis of AOM with perforation, two TMs had already healed and six were functionally intact (i.e., pinhole perforation only seen on pneumatic otoscopy). Both children who initially had fresh discharge obscuring an intact TM subsequently experienced recurrent perforations (A02 and A03).
In the other 12 children with identifiable perforations, discharge continued seeping through the perforation almost immediately after completing the cleaning or swabbing of the middle ear canal. Eight TMs were initially perforated and in the neutral position. Over time, five of these also became functionally intact (i.e., the perforation size reduced and the TM became bulging). AOM without perforation progressed to perforation in three children (A01, A10 and A26). In each case, the TM was observed to be bulging prior to perforation. On one occasion when a new perforation was observed, the TM had perforated and healed again within a 24 hour period (A26). There were two patterns of resolution of AOM with perforation. In four children, the suppurative process resolved and the perforation became dry (A07, A18, A22, A29). Two of these TMs subsequently healed during the study period. More commonly, the perforation appeared to be healing while the suppurative process was ongoing (A01, A02, A03, A10, A12, A14, A20, A26, and A28). Perforations were observed to heal and re-perforate in 4 children (A02, A03, A12, and A14). In one child, the right and left TM healed and re-perforated 8 times over a 6 week period (A02), confirming that healing and re-perforation can occur frequently. Discussion This was the first study describing the clinical course of AOM in Australian Aboriginal children. It is also the first study providing a detailed description of otoscopy findings in a population at high risk of TM perforation. AOM (or suppurative OM) was common, usually not associated with ear pain, frequently bilateral, and often associated with perforation of the TM. In this population, AOM was generally persistent. Infections with a sudden onset and short duration were uncommon. Strengths and limitations of the study We used a standardised clinical assessment that included tympanometry and video-otoscopy. Since definitions of AOM vary [12,13], the detailed description of the position and integrity of the TM over time is especially useful (see Figure 1). This information has not been reported previously. Limitations of the study include the small number of children enrolled and the number of scheduled examinations that were missed. This is a consequence of performing research in remote Aboriginal communities. It reflects the challenges associated with research in remote locations with small populations. Participating families are culturally different, highly mobile, and have competing priorities that are not easily predicted. These factors make daily follow-up difficult. The small sample size and relative homogeneity of the study population mean that potentially important factors that predict outcome may not be identified. However, a larger study (or observation over a longer duration) is unlikely to change our conclusion that "persistent AOM" is common in this population (overall risk of persistent AOM 77%, 95% CI 58 to 90). Since there are currently no data available in the world on the clinical course of AOM in a population at high risk of AOM, the information contained within Figure 1 is both unique and clinically important. The high rates of nasopharyngeal carriage of all three respiratory pathogens are striking.
The microbiological methods used in this small study cannot determine i) which pathogens extend from the nasopharynx to the middle ear space; ii) the relative importance of concurrent infection with multiple organisms; iii) the rate of acquisition of new pathogens while on treatment; or iv) the role of antibiotic resistant bacteria. While we did not find pneumococcal penicillin resistance of organisms in the nasopharynx to be a good predictor of outcome, the small sample size means that important effects cannot be excluded. Comparison with outcomes in clinical trials Should we be surprised that "persistent AOM" is so common in this population? In published prospective studies where children have not been treated with antibiotics, the rate of clinical failure in the first week ranged from 2% to 83% [14-23]. The median failure rate was 24%. The most striking influence on reported rates of clinical failure was the definition of failure that was used: the five studies with the lowest failure rates defined clinical failure as the persistence of symptoms [14,16-19]. The median clinical failure rate for these studies was 15%. In contrast, the five studies with the highest clinical failure rates (range 38-83%, median 73%) defined clinical failure as persistence of otoscopic signs [15,20-23]. Only one study described both the rate of persistent symptoms (13%) and persistent otoscopic signs (73%) after seven days [22]. Overall, these studies suggest that symptoms resolve quickly in most children with AOM not receiving antibiotics while otoscopic signs do not. Consequently, we should probably not be surprised by our findings of persistent otoscopic signs in a high-risk population where compliance with recommended antibiotic treatment is poor and antibiotic resistance is common. Unfortunately, in nearly all published clinical trials, there is insufficient information to determine whether persistent bulging of the TM was a common finding [24]. Persistent middle ear discharge was probably unusual (as it is a readily identified complication of AOM). None of the studies described the outcomes specifically for the subgroup of children who had AOM with perforation at the time of diagnosis. Similarly, none of these studies described the associated nasopharyngeal colonisation during episodes of AOM. However, other studies in developed countries have found that dense colonisation with multiple bacterial pathogens is unusual [25]. This may be an important risk factor for the children included in this study. Implications of the study For populations where perforation of the TM and CSOM are uncommon, this study provides important information about: i) appropriate definitions for different types of OM; and ii) identification of individual children most at risk of CSOM. For high-risk populations, we believe our description of "persistent AOM" is likely to be generalisable. This persistence may be related to the severity of clinical presentation, the bacterial load (multiplicity of species and strains and their density of infection), the frequency of exposure to multiple pathogenic strains, or poor compliance with antibiotics. The early age of onset of suppurative ear infections and the low rates of reported symptoms make early recognition difficult. Consequently, active surveillance in all infants in this population is recommended.
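The risk differences and confidence intervals quoted in the Results (for example, 75% versus 71%, risk difference 4%, 95% CI -33 to 41) follow from standard normal-approximation formulas for two proportions. The sketch below illustrates the form of that calculation; the counts shown are hypothetical, and this is not the authors' analysis code.

```python
# Normal-approximation risk difference with a 95% confidence interval for two proportions.
# Illustrative only; the counts below are hypothetical, not the study data.
import math

def risk_difference_ci(x1, n1, x2, n2, z=1.96):
    """x1/n1 and x2/n2 are treatment failures over totals in the two groups."""
    p1, p2 = x1 / n1, x2 / n2
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, (rd - z * se, rd + z * se)

rd, (lo, hi) = risk_difference_ci(9, 12, 10, 14)   # hypothetical counts giving 75% vs 71%
print(f"risk difference {100 * rd:.0f}%, 95% CI {100 * lo:.0f} to {100 * hi:.0f}")
```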
2014-10-01T00:00:00.000Z
2005-06-14T00:00:00.000
{ "year": 2005, "sha1": "2e82e19454deb86e5391f8a7d75eaad18ea8a34d", "oa_license": "CCBY", "oa_url": "https://bmcpediatr.biomedcentral.com/track/pdf/10.1186/1471-2431-5-16", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2e82e19454deb86e5391f8a7d75eaad18ea8a34d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245719227
pes2o/s2orc
v3-fos-license
PRIMACY EFFECTS AND VOTING METHODS (LITERATURE REVIEW) The primacy effect has long been considered a decisive factor in determining election outcomes, and a robust literature has consequently developed around studying it in different scenarios and contexts. However, existing research has offered limited insight into how the choice of voting method, particularly by-mail voting, may influence its impact on elections by altering how and when voters participate. This article first reviews why the primacy effect exists, how researchers identify its influence, and its overall impact on election outcomes. It then discusses why the use of by-mail voting should be expected to alter the primacy effect, and analyzes two works where this relationship has been explored. Their results highlight several of the issues faced when trying to examine this relationship, but also indicate that further study is warranted and likely to be fruitful. INTRODUCTION The electoral systems literature contains a sizable collection of works studying how the fundamental design features of ballots can influence election outcomes. Design choices such as the placement of punch-holes (Wand et al., 2001) and ballot instructions (Kimball & Kropf, 2005) have been found to greatly impact a voter's ability to choose their preferred candidates, or have their vote counted at all. Other studies have identified that features such as including candidate party affiliations or descriptive characteristics (Klein & Baum, 2001; Matson & Fine, 2006) subtly influence a voter's perception of alternatives by altering available heuristic information. However, the feature to receive the most attention from scholars has been how the placement of alternatives on the ballot alters a candidate's electoral performance. Variously labelled as the primacy, name-order, or ballot-position effect, researchers have consistently shown that being listed first on a ballot, or otherwise strategically ranked, causes a direct increase in the number of votes that a candidate receives. Yet despite the number and variety of studies on the subject, researchers have generally avoided examining how the use of convenience-enhancing voting alternatives, particularly by-mail voting, may alter the scale and significance of the primacy effect. THE PRIMACY EFFECT Studies of primacy effects commonly address three fundamental questions: why it exists, whether it exists, and how much it impacts election outcomes. In theorizing why, scholars have drawn from the rational-choice literature to explain how voters who rationally economize their information gathering can sometimes end up making uninformed choices. Why would voters choose the first alternative on the ballot? In an ideal scenario, voters participating in an election would have full knowledge of the candidates and issues they are voting on, and full comprehension of the implied consequences of their choices. In reality voters rarely, if ever, have the time, resources, or engagement necessary to fully educate themselves on the numerous and variable subjects requiring their input (Bowler et al., 1992; Verba et al., 1995; Shugart et al., 2005). They instead attempt to maximize the returns on their information gathering by strategically concentrating their attention and resources towards subjects that are most pertinent to their interests and wellbeing (Miller & Krosnick, 1998; Boehmke et al., 2012).
The choice of subjects is similarly biased towards high-visibility issues with plentiful media coverage, such as national-level elections or major social issues, since they are the most readily accessible (Selb, 2008; Van Erkel & Thijssen, 2016). While this strategy ultimately helps voters conserve time and resources, it is also problematic as it frequently results in voters being confronted with choices in less salient political contests that they are not prepared to make informed decisions on. Voters typically enter into elections under pressure to participate within a set time-frame (Bowler et al., 1992; Koppel & Steen, 2004). When confronted with low-information choices on the ballot, voters rely on heuristic cues such as party affiliation, platform positions, or even basic descriptive characteristics such as names and gender, to quickly choose among alternatives (Niemi & Herrnson, 2003; Binder et al., 2015). These cues provide voters an immediate reference point for how an alternative may behave or impact them. For example, candidates' party affiliations can help voters infer their positions on policy issues based on their party's platform (Bonneau & Cann, 2015). However, these associations are imperfect and provide an incomplete or inaccurate depiction of candidates that can lead voters towards making choices that contradict their interests (Binder et al., 2015; Augenblick & Nicholson, 2016). When the heuristic aids intentionally available on a ballot are not enough for voters to overcome their lack of information, they are then pressured towards either choosing no alternative or relying on especially arbitrary evaluations to guide their decisions (Matson & Fine, 2006; Devroe & Wauters, 2020). The primacy effect thus comes into play when voters choose the latter option and decide, out of convenience or pre-conceived assumptions of prominence, to choose alternatives listed first on a ballot. The top of a list is typically the first viewed by readers, giving information placed there a considerable advantage in being processed and remembered (Brockington, 2003; Edwards, 2015). Ranked listings frequently capitalize on this heightened visibility by placing the highest-ranked, or most valued, alternatives at the top to advertise their achievements or qualities (Geys & Heyndels, 2003; Lutz, 2010). This practice occasionally sees use in some political contests as well, where incumbents or prominent candidates are placed first on a ballot or a party list to signal their status to readers (Niemi & Herrnson, 2003). Other contextual factors, such as the amount of time a voter is willing to spend participating in an election or their physical comfort, can also incentivize voters to make snap decisions (Darcy & Schneider, 1989; Augenblick & Nicholson, 2016). Reinforced by experience, common intuition, and their desire to finish participating, voters may assume that alternatives on a ballot are ranked according to some inherent quality when they are actually listed with no regard for merit or skill. This combination of intuitive factors has made the primacy effect an extremely appealing explanation for otherwise contradictory patterns of voter behavior, and has long made it a target for study by researchers trying to determine its veracity. Does the primacy effect really exist, and how can it be estimated?
While scholars have come to a consensus on why the primacy effect exists in theory, they have had considerably more difficulty trying to empirically prove that its size warrants our attention. Being listed first on the ballot has long been assumed by political actors and theorists to improve electoral performance, and several early attempts were made to observe it in action (Darcy, 1986, 1998; Miller & Krosnick, 1998). However, it was not until the late 20th century that researchers began to find success in empirically testing the validity of this common assumption and, where it held, identifying its causes. Scholars quickly found out that unavoidable institutional obstacles prevented them from directly observing primacy effects in a controlled setting (Edwards, 2015). As a critical component of elections, and a traditional target for manipulation by political interests, democratic states universally maintain strict control over the design, creation, and distribution of ballots (Darcy & Schneider, 1989; Kimball, 2005). Consequently, the ordering of alternatives on ballots is often uniform across electoral districts and non-randomized. This limits researchers to a small pool of locations where variation between districts exists and they can potentially distinguish the primacy effect from other ballot-design characteristics. Additional confounding factors, such as other competing selection biases among voters, inconsistent participation among voting populations, and changing electoral rules, can also hinder the ability of researchers to positively associate changes in a candidate's electoral performance with their position on the ballot (Miller & Krosnick, 1998; Däubler & Rudolph, 2020). In response to these limitations, scholars have developed an effective research strategy by capitalizing on the few existing locations with favorable electoral rules and observing the primacy effect through natural experiments (Blom-Hansen et al., 2016; Flis & Kaminski, 2021). A similar collection of works utilizing controlled laboratory experiments has also developed, though these are less common (Miles, 2011; Devroe & Wauters, 2020). With these approaches, researchers have been able to identify the existence of the primacy effect in a variety of electoral systems, and in both national-level and local elections. Initial entries into the contemporary literature provided conflicting conclusions on the presence and influence of the primacy effect, with works by Darcy (1986, 1998) detecting no appreciable presence, while others, such as Lijphart & López Pintor (1988) and Taebel (1975), found the effect to be present but with a relatively modest impact. Despite the initial uncertainty, a common finding among those works that did detect the presence of the primacy effect was that even if the overall percentage of voters influenced was mild, it was enough to have affected the outcomes of many of the observed elections (Hamilton & Ladd, 1996). Subsequent studies have since been able to consistently find the primacy effect at work in a variety of electoral contexts, ranging from local appointments to representatives for national legislatures (King & Leigh, 2009; Meredith & Salant, 2013; Marcinkiewicz, 2014).
Additionally, it has been found within a diverse range of electoral systems, with studies in single-member plurality, proportional, and mixed-member systems all showing that being placed first on a ballot boosts electoral success (Faas & Schoen, 2006; Ho & Imai, 2008; Marcinkiewicz, 2014; Flis & Kaminski, 2021). Determining the impact on election outcomes and working it down With the primacy effect found to operate in a wide variety of electoral settings, the pertinent question that now dominates the literature is determining the extent of its influence on election outcomes. Results have generally shown the effect to have a highly variable influence, with alternatives receiving between one and fifteen percent more votes when placed first on a ballot (Blom-Hansen et al., 2016; Devroe & Wauters, 2020; Flis & Kaminski, 2021), while some others have found hardly any effect at all (Alvarez et al., 2006). Much of this variability has been traced to the behavioral characteristics that were theorized to make voters susceptible to the effect in the first place, with voters demonstrating a lack of information, greater apathy, and lower cognitive skills being more likely to select the first-placed candidate (Johnson & Miles, 2011). Other related factors, such as the intensity of media coverage and the saliency of an election to a voter, have also been found to have a major impact on the primacy effect's influence (Miller & Krosnick, 1998; Marcinkiewicz, 2014). In down-ballot local elections where popular interest is likely to be lower, studies have found candidates receive substantially more votes when listed first on the ballot (Webber et al., 2016). In contrast, prominent national elections where information is readily available and interest is high show the primacy effect has an extremely limited impact on the performance of alternatives (Kim et al., 2015). One factor that is often theorized as being a particularly significant influence on the primacy effect is the complexity of elections and the ballot. As ballots become longer and making strategic choices in an election becomes more intricate, voters are more likely to become fatigued and simply vote for the top-ranked alternative out of confusion or exhaustion. An illustrative example of this can be found in Flis and Kaminski (2021), which uniquely provides a comparative study of the primacy effect under multiple local electoral systems within Poland. Their results indicate that while relatively short single-member elections exhibited effectively no primacy effect, longer and more complex open-list elections have the potential to provide top-ranked parties an eight percent boost to their vote shares. The strength and number of findings showing the primacy effect having a powerful influence on electoral outcomes have invariably led scholars to include discussions on how to mitigate the systemic advantages enjoyed by beneficiary alternatives. The most commonly proposed solution has been to introduce greater randomization into the ballot design and distribution process (Klein & Baum, 2001; Alvarez et al., 2006). The rationale is that since it is considered morally suspect and near-Sisyphean to force voters to be more informed about elections, it is more feasible to instead reduce the benefit a given alternative receives from the primacy effect by ensuring that no alternative systematically shows up as the first option.
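The rotation idea described above can be made concrete with a short, purely illustrative sketch. The candidate names and the number of districts below are hypothetical, and real election administration follows statutory procedures (such as drawing a randomized alphabet); the snippet simply rotates one randomly drawn starting order across districts so that each candidate appears first in roughly equal proportion.

```python
import random

def rotated_ballot_orders(candidates, n_districts, seed=None):
    """Assign each district a ballot order by rotating one randomly drawn
    base order, so every candidate is listed first in roughly
    n_districts / len(candidates) districts."""
    rng = random.Random(seed)
    base = candidates[:]          # copy before shuffling
    rng.shuffle(base)             # chance-based starting order (e.g. a lottery)
    orders = {}
    for district in range(n_districts):
        shift = district % len(base)
        orders[district] = base[shift:] + base[:shift]  # rotate by one position per district
    return orders

# Hypothetical example: 4 candidates across 80 districts.
orders = rotated_ballot_orders(["Ade", "Bauer", "Chen", "Diaz"], 80, seed=1)
first_counts = {}
for order in orders.values():
    first_counts[order[0]] = first_counts.get(order[0], 0) + 1
print(first_counts)  # each candidate listed first in 20 districts
```

Because the starting order itself is drawn by lot, no candidate systematically enjoys the top position, which is the property that the semi-randomized proposals discussed next aim for.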
As opposed to a fully randomized process though, proposals usually encourage semi-randomized systems where alternatives appear first in roughly equal proportion (Edwards, 2015; Flis & Kaminski, 2021). This is typically accomplished through the use of some form of chance-based selection process, such as generating a randomized alphabet or holding lotteries to determine alternative listings (Pasek et al., 2014; Marcinkiewicz, 2014). Such strategies cannot guarantee a complete elimination of the primacy effect, but their use helps roughly ensure that all alternatives benefit. THE PRIMACY EFFECT AND MAIL VOTING Alternative voting methods are typically introduced to entice voters to participate in elections by making it more convenient to vote. By-mail voting is particularly appealing to many voters because it eliminates many of the transaction costs to voting and allows users to complete their ballot at a time and place of their choice before election day (Southwell & Burchett, 2000). Despite their ostensibly broad appeal, these conveniences are disproportionately used by voters who are more educated, politically active and have access to more resources (Alvarez et al., 2012, 2013). Notably, the same characteristics that make voters likely to use mail ballots closely correspond to those found to reduce a voter's susceptibility to the primacy effect. In addition to convenience, by-mail voting also has the potential to influence a voter's access to information when making their decisions. When voting in person, access to outside information is restricted due to traveling to a designated polling site, limitations on what can be brought into the voting area, and time pressures preventing voters from pausing mid-act to research their options (Karp & Banducci, 2000). In contrast, by-mail voters can freely choose to conduct additional research on a candidate or position for an extended time before making a decision, providing them additional opportunities to make more informed, or motivated, choices in elections where their political knowledge would otherwise be insufficient. In allowing voters to complete their ballot at their discretion, by-mail voting has the potential to greatly reduce the pressure on voters to decide between candidates or alternatives when information is low or nonexistent, removing one of the key theoretical incentives for voters to be susceptible to the primacy effect. While such observations suggest that the use of mail ballots should correspond to a reduced impact of the primacy effect, this relationship has remained underexamined. Among the existing literature, two articles in particular analyze the interaction between by-mail voting and the primacy effect, but arrive at notably different outcomes. In their 2014 article, Pasek et al. find only mixed results in the by-mail portion of their study. In contrast, Jankowski and Frank find in their 2021 article that postal voters are significantly less susceptible to the primacy effect. Prevalence and Moderators of the Candidate Name-Order Effect In their article, Pasek et al. (2014) examine all California statewide elections held from 1976 to 2006 to determine the extent to which an array of electoral characteristics influence the power of the primacy effect. Since 1976, California has required that the order of candidates on ballots be rotated among state assembly districts to help prevent any one candidate systematically benefiting from the primacy effect.
This policy has also inadvertently allowed researchers to observe how much candidates benefit from the primacy effect as their ballot position changes across districts. Consisting of circumstantial electoral characteristics such as public turnout, the prominence of the contested political office, or whether the election was partisan, the titular moderators were expected to influence voter behavior through their presence or absence, and in turn alter the strength of the primacy effect. While by-mail voting was only one moderator within their broader study, the article's analysis indicated that a higher number of mail voters reduced the impact of candidate primacy effects, but that this was only present in low-visibility down-ballot elections such as for the state treasurer or insurer. In more prominent elections for offices such as the president or state governor, the authors found no difference in behavior between absentee and in-person voters, indicating that the type of ballot used had no impact. Despite the theoretically greater opportunities to access information, by-mail voters made more informed decisions only in already low-information elections, leading the authors to conclude that the greater number of by-mail voters did not influence the primacy effect. While the results from Pasek et al. (2014) suggest that the influence of by-mail voting on the primacy effect is fairly weak, they do notably succeed in identifying a limited relationship between the two that fits with the existing literature's expectations of by-mail voter behavior. As previously noted, studies have identified that the influence of the primacy effect generally declines as the salience of an election increases (Klein & Baum, 2001). However, the main theorized advantage that mail ballots provide over conventional methods is a greater freedom to access information before voting (Barreto et al., 2006); as the information gap between by-mail and conventional voters closes in high-visibility elections, this advantage should be expected to diminish as well. Additionally, research has also shown that more informed and engaged voters tend to utilize available convenience voting alternatives like voting by mail (Alvarez et al., 2012, 2013), but as the proportion of the population using mail ballots increases, this tendency should become less apparent as well. Ballot Position Effects in Open-List PR Systems Jankowski and Frank (2021) explicitly analyzed the 2015 and 2020 elections in the German state of Hamburg to determine if the primacy effect was weaker among by-mail voters. The study was able to separately observe candidate performance among in-person and postal voters due to the ballot boxes in Hamburg each being assigned a unique identification number, with boxes for mail ballots given numbers two digits longer than election-day boxes. Contrasting the strategy employed in most other studies, ballot positions in the observed elections were not randomized, and so the authors instead included individual candidate characteristics and voter selection biases to isolate the effect of mail voting on the primacy effect. Their results first showed that among in-person voters, top placement on the ballot created a sizable electoral advantage, with the top-ranked candidate receiving at minimum 27% more votes than lower-ranked candidates.
The significance of this finding is limited though, since most of this effect should be attributed to the fact that parties place their most popular candidates on top of their lists (Marcinkiewicz, 2014). When comparing in-person and mail voters, they found that the top-ranked candidates experienced a 3% reduction in their vote share among mail voters. This difference can be attributed to a lower primacy effect and indicates that use of mail ballots correlates with a sizable reduction of the primacy effect. Additionally, they found that this reduction was consistent across ballot positions and that all candidates performed better against the top-ranked candidate among postal voters. Unlike with Pasek et al. (2014), the results from Jankowski and Frank (2021) indicate that not only is the theorized relationship between mail voting and the primacy effect present, but that for some voting methods its impact may be stronger than initially assumed. CONCLUSION Both studies described above suffer from certain methodological difficulties. The most immediate issue in Pasek et al. (2014) was that the mail voters were not separated from the total voting population. Thus, the estimates were based on the comparative differences between elections with a high percentage of by-mail voters and those with low percentages. This restricts the utility of their findings since they are ultimately incapable of directly determining if by-mail voters are systematically experiencing the primacy effect in a manner similar to in-person voters. Similarly, the use of aggregate data and non-randomized ballots by Jankowski and Frank (2021) limited the scope of their conclusions to offering only a broad correlation between mail voting and a reduced primacy effect. Both studies were also only able to explore the interaction between mail voting and the primacy effect in the context of one electoral system. The existing literature has shown that the primacy effect has an appreciable presence in a variety of electoral systems and contexts, with its impact ranging from barely noticeable to providing a decisive advantage to benefiting candidates. Similarly, alternative voting methods like by-mail voting have been found to greatly alter voter behavior and may also influence how voters process political information. The studies by Pasek et al. (2014) and Jankowski and Frank (2021) have had limited success in identifying strong causal relationships. Their findings do, however, show promising synergy with existing theories of how mail voting interacts with voters' access to information and provide a useful departure point for future studies.
2022-01-06T16:26:07.366Z
2021-06-15T00:00:00.000
{ "year": 2021, "sha1": "a8083d3d4034b7b84b2d0211224ecfe2f27057fb", "oa_license": "CCBY", "oa_url": "https://journals.kozminski.edu.pl/system/files/Decyzje%2035_2021%20art.%202.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3bf4b401e2b577c890670c92bffefe69d5b02b47", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
247312248
pes2o/s2orc
v3-fos-license
A bibliometric analysis of CD38-targeting antibody therapy in multiple myeloma from 1985 to 2021 Background CD38 is a transmembrane glycoprotein that is relatively highly expressed on multiple myeloma (MM) cells, and CD38-targeting antibodies use pleiotropic mechanisms to kill MM cells. Immunotherapy, with an increased quality of response and acceptable toxicity, shows tremendous potential for treating MM. This study aimed to analyze all the relevant literature in this research field via bibliometrics to identify its course of development and structural relationships. Methods A total of 1,030 relevant articles were retrieved from the Web of Science Core Collection (WoSCC) from 1985 to June 21, 2021. CiteSpace was employed to map authors/references/countries with nodes and links, extract highly cited keywords, analyze the time trends of keywords, recognize cocited authors/references, set timezone or timeline views, analyze burstness and generate a dual map. VOSviewer was used to recognize connections among journals and construct collaboration networks. bibliometric.com was utilized to trace advanced countries/regions in the research field. Results All of the articles were cited 24,332 times in total, with an average of 23.62 times. Most articles were published in the United States of America (USA), far exceeding all other countries/regions. The current hotspots in this field are related to the following keywords: "monoclonal antibody", "refractory MM", "idecabtagene vicleucel", and "B cell maturation antigen (BCMA)". Ten significant clusters, namely, "flow cytometry", "daratumumab", "BCMA", "cell line", "antitumor activity", "gene", "non-Hodgkin's lymphoma", "peripheral blood", "survival" and "anti-CD38", were extracted. The mechanism and effectiveness of CD38-targeting antibodies in treating MM have been studied. Future research hotspots will focus on new therapies for relapsed and refractory multiple myeloma (RRMM) patients. Conclusions In the past, efforts were applied to elucidate the mechanism and effectiveness of CD38-targeting antibodies in treating MM. Future research hotspots will focus on anti-BCMA chimeric antigen receptor T cell (CAR-T) immunotherapy for patients with RRMM. According to this article, new researchers can discover its course of development and structural relationships in this field. Introduction Multiple myeloma (MM) is a monoclonal plasma cell malignant disease in which secreted monoclonal immunoglobulins can be found in the serum or urine (1). Proteasome inhibitors (PIs), immunomodulatory drugs (IMiDs) and traditional drugs such as glucocorticoids are widely accepted treatment options. However, inherent or acquired drug tolerance results in poor prognosis in the long term and has encouraged researchers to search for new drugs (2). Because CD38 is a transmembrane glycoprotein that is relatively highly expressed on MM cells (3-5), it has motivated the invention and development of CD38-targeting antibodies such as daratumumab (DARA) to treat MM (6-8). CD38-targeting antibodies have pleiotropic mechanisms, including killing tumor cells via Fc-dependent immune effectors, immunomodulatory activity and apoptotic effects (9-12). According to clinical studies, CD38-targeting antibodies have a significant curative effect in newly diagnosed MM and relapsed and refractory MM (RRMM) patients, alone or in combination (8,13,14). There is an urgent need to develop antibody therapeutics for hematologic malignancies such as MM.
The therapeutic potential of CD38-targeting antibodies is now clearly apparent. These antibodies also show tremendous potential for treating other hematologic malignancies via antibody-dependent cellular cytotoxicity (ADCC) or phagocytosis (ADCP) (9,11). As there are an increasing number of studies concerning CD38-targeting antibody therapy in MM and other hematological malignancies, it is necessary to provide a brief introduction and summary. It is of great importance to review the academic progress of CD38-targeting antibody therapy in MM to discover not only new applications of CD38-targeting antibody therapy but also progress in the treatment of MM. Bibliometrics is an established methodology for analyzing and visualizing a research field, identifying important issues and the connections between authors, institutions and countries, using software such as VOSviewer (15), HistCite and CiteSpace (16). These tools were used to analyze all the relevant articles from 1985 to June 21, 2021, and the results, including the key contributors to this research, the milestones of the field, the progress of the research hotspots and the prediction of future forefronts, are presented in the form of vivid graphs. This article, aiming to analyze the literature concerning CD38-targeting antibody therapy in MM to identify its course of development and structural relationships in this research field, is the first comprehensive article on immunological therapy in MM. Data collection and extraction Relevant literature was retrieved from the Web of Science Core Collection (WoSCC) database on June 21, 2021. The keywords "multiple myeloma" and "CD38" were used to extract publications published between 1985 and June 21, 2021. We then exported full records and cited references as plain text and tab-delimited (Win, UTF-8) files for use in the scientometric tools. Statistical analysis In total, 1,030 articles were extracted from the WoSCC. Three bibliometric tools, CiteSpace (Chaomei Chen, Drexel University, USA), VOSviewer (Nees Jan van Eck and Ludo Waltman, Leiden University Centre for Science and Technology Studies, Netherlands) and bibliometric.com, were used to analyze the bibliometric indicators relating to authors, institutes and countries/regions. CiteSpace (Version 5.7.R5 64-bit), free Java-based software, was used to visualize and identify hot spots of the research field by mapping authors/references/countries with nodes and links, extracting highly cited keywords and analyzing the time trend of keywords. According to the definition of CiteSpace, every node indicates an author/reference/country, while the links between nodes indicate co-work or cocitation (17). Cocitation demonstrates the frequency with which documents are jointly cited: every node represents a cited article, and every connecting line represents a cocitation relationship. This tool can be used not only to identify the relationships among studies but also to visualize highly recognized relationships. To reveal the knowledge evolution in the topic of "CD38-targeting antibody therapy in MM", we utilized cluster analysis, set visualizations in the form of timeline or timezone views, analyzed burstness and generated a dual map. Cluster analysis is a statistical technique employed to identify the structure of the literature, in which studies are separated into different clusters. In addition, different clusters appear to be dissimilar. Similarity and dissimilarity are represented by the distance between clusters.
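The co-occurrence counting that underlies this kind of keyword clustering can be illustrated with a few lines of code. The sketch below is not the CiteSpace or VOSviewer implementation (their normalisation, clustering and layout algorithms are far more involved); it only shows, under the assumption that author keywords are available for each retrieved record, how co-occurrence counts between keyword pairs could be tallied from a WoSCC-style export.

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword lists, one per retrieved article (placeholders only).
records = [
    ["multiple myeloma", "CD38", "daratumumab"],
    ["multiple myeloma", "daratumumab", "BCMA"],
    ["multiple myeloma", "CD38", "flow cytometry"],
]

cooccurrence = Counter()
for keywords in records:
    # Count each unordered keyword pair once per article.
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

for (kw1, kw2), count in cooccurrence.most_common(5):
    print(f"{kw1} -- {kw2}: {count}")
```

Tools such as VOSviewer then normalise counts of this kind and apply clustering and layout algorithms to produce the maps referred to in the figures.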
A timeline view is used to visualize the variation tendency of every keyword in clusters, and a timezone view is used to identify the emerging keywords over time. Every circle implies a keyword, and its size indicates the frequency of appearance; the lines between circles imply co-occurrence. Burst detection serves to display the strength tendency of a phrase over time. The dual-map overlay is a portfolio analysis, spanning authors/institutes/countries, term clustering and citation systems, that informs researchers about the position of the object of interest. VOSviewer (Version 1.6.16.0) is also free-access Java-based software that helps in determining not only the coauthorship and cocitation relationships among authors, journals and countries/regions but also the co-occurrence of keywords. Networks are also constructed to collect information on the top authors, journals, institutions and countries. Network visualization focuses on the distribution of hotspots and the relationships between research individuals/institutes/countries and their strength. Overlay visualization places emphasis on the annual change, while density visualization emphasizes the density of occurrence. bibliometric.com is an online platform for publication analysis collected by the China Science Digital Library. It is used for tracking advanced countries/regions in the research field over the years. Annual publications and trend The data processing procedure is shown in the flow diagram (Figure 1). A total of 1,030 articles from the WoSCC were retrieved. The number of articles meeting the search criteria remained at a steady and relatively low level from 1999 to 2014, increased rapidly from 2015 and reached a three-digit number after 2018 (Figure 2A). All these articles were cited 24,332 times in total, with an average of 23.62 times per article. Contribution of countries/regions and institutions A total of 57 countries/regions published articles in this field. The USA had the highest production density, followed by Italy, Germany and France (Figure 2C). The burst of article publication differed from country to country. While Japan and Spain principally published articles before 2015, the USA, Italy, France and the Netherlands had a burst of publication from 2015 to 2021 (Figure 2D). To determine the variation trend of publication proportions in the top 10 productive countries/regions, we used bibliometric.com and obtained results affirming the former conclusions (Figure 2E). The number of citations per paper is crucial for analyzing the research quality and contribution of countries/regions to CD38-targeting antibody therapy in MM, while the citations between countries/regions can demonstrate the collaborations between them. We analyzed the collected data from the WoSCC and found that the USA (n=13,823), the Netherlands (n=6,310) and Spain (n=5,656) were the top three countries with the highest total citation numbers, with the USA far exceeding other countries/regions. However, Denmark (n=99.96) had the highest citation frequency per paper (Figure 2F), suggesting that significant breakthroughs and high-quality articles originated there. All 1,635 contributing institutions were interrelated in this field. The top 15 productive institutions were graphed and visualized with their total publication number and citations per paper (Figure 2G). Emory University (n=45) ranked first, followed by Harvard Medical School (n=43) and Vrije University Amsterdam (n=39). Most studies were published after 2017 (Figure 2H).
Among these 15 institutions, 11 were assigned to the USA, 2 to Italy, and the rest to Denmark and the Netherlands (Table 1). Among them, Genmab, an international biotech company, had the highest number of citations per publication (170 times per paper). Publication distribution among journals All 1,030 publications were included in 286 Science Citation Index-Expanded (SCI-E) recorded journals. In terms of the number of publications, Blood (n=109), Clinical Lymphoma Myeloma and Leukemia (n=40) and Haematologica (n=33) ranked as the top three journals (Table 2). VOSviewer was used to analyze the citation network among these 286 journals. The minimum number of publications was set to 5 for each journal, and then a graphical view was generated. The clusters in diverse colors represent publications concerning diverse research fields (Figure 3A). Each dot on the graphical view represents a journal, and the lines between them represent cocitations (Figure 3B). Landmark authors and publications These 1,030 articles were completed and published by 5,462 authors, and on average, 5.3 researchers worked as a group to publish a manuscript. Van De Donk N, Mutis T and Richardson PG were the three highest-ranked researchers, each publishing more than 30 manuscripts. The software packages VOSviewer and CiteSpace were used to discover the cooperation and citation relationships among researchers. Researchers whose papers were cited more than 450 times (global citation score >450) were regarded as key researchers. In the graphical view, the larger the dot, the higher the number of citations, and the lines between two dots indicate the degree of cooperation (Figure 3C,3D). The number of citations of an author's publications indicates the contribution of the author and his or her research status in the field. With more than 2,000 citations in total, Lokhorst HM, Mutis T and Ahmadi T were the landmark authors, and their contributions in this field were considerable (Figure 3D). The 1,030 publications on CD38-targeting antibody therapy in MM also cited one another over time. The CiteSpace system was applied to analyze 11,572 references, generating a cocitation network of articles (Figure 3E) and the top 20 strongest citation bursts (Figure 3F). The number of citations of these 20 manuscripts increased rapidly during a certain period as they were read, accepted and widely disseminated. These papers occupied a key position in this research field. Analysis of keywords Keywords can reflect the hot field of research and trace the variation in research focus. VOSviewer was used to analyze and visualize the keywords in the publications (Figure 4A,4B). Using CiteSpace, we obtained the cocitation network, which was separated into 10 clusters (Figure 4C): cluster 0 "flow cytometry", cluster 1 "daratumumab", cluster 2 "B cell maturation antigen (BCMA)", cluster 3 "cell line", cluster 4 "antitumor activity", cluster 5 "gene", cluster 6 "non-Hodgkin lymphoma", cluster 7 "peripheral blood", cluster 8 "survival" and cluster 9 "anti-CD38". Timeline view and time zone view of CD38 in MM co-citation network The timeline and timezone views of CD38-targeting antibody therapy in the MM cocitation network were determined with CiteSpace. At the top of the figure, we can see the publication time, and on the right of the figure, we can view the terms or keywords in the publications, while nodes on the left transverse lines indicate the behavior of hotspots, and links between them indicate citations. The emergence, popularity and decline of hotspots can be seen.
Furthermore, the clusters in the timeline view disclosed the course of development of MM research. The evolution of cluster 0 (bone marrow) arose first, showing that the use of CD38-targeting antibodies to treat MM started with the discovery that CD38-positive myeloma cells adhere to bone marrow. Cluster 1 (monoclonal antibodies), cluster 2 (refractory multiple myeloma), cluster 9 (monoclonal antibody therapy) and cluster 10 (lymphocytic leukemia) were immediate fields of research focus (Figure 5A). The keyword timeline view extracted keywords such as DARA, flow cytometry, cell line and BCMA (Figure 5B). The timezone view also revealed the cocitation network over time, demonstrating the evolution of the research field (Figure 5C). Currently, researchers focus on how to choose the proper therapeutic regimen to treat RRMM, while CD38-targeting antibodies are becoming increasingly popular and accessible in real-world therapeutic regimens to obtain a better treatment response. The more articles there were in the timezone view, the more important the period was. The dual-map overlay is a citation overlay that sets the left side as the citing outline, the right side as the cited outline and the links between them as the citation relationships (Figure 5D). This overlay elegantly connects different research fields together to enhance our understanding of different specialties. Discussion As vividly shown above, bibliometric analysis can provide visual results that help scientific research personnel who are new to the field improve their command of it. These user-friendly and freely accessible bibliometric software programs can not only uncover milestones and present hotspots but can also trace the course of development of a research field. In this study, we presented a bibliometric analysis concerning CD38-targeting antibody therapy in MM. According to the results obtained above, the numbers of publications on CD38 and MM have increased year by year and increased dramatically after 2015 (Figure 2A); 2015 is a specific point in time that divided the history of research into two parts. Before 2015, the expression of CD38 acted as an indicator of the diagnosis and prognosis of MM and was used to detect the minimal residual disease (MRD) of MM. With the emergence of CD38-targeting antibodies, in vitro and in vivo experiments were conducted to prove the efficacy and safety of this immunotherapy. At the end of 2015, the Food and Drug Administration (FDA) approved DARA (Darzalex, Johnson & Johnson), the world's first CD38 monoclonal antibody for the treatment of MM (18). Clinical trials were performed to identify a better therapeutic regimen to improve the overall response rate (ORR) of newly diagnosed MM patients and extend the progression-free survival (PFS) of RRMM patients. Among all 57 countries/regions, the USA published 450 articles, accounting for the highest proportion (43.7%). It was followed by some European countries, such as Italy, Germany, France and the Netherlands, each of which produced more than 100 publications (Figure 2B). Of the top 15 productive institutions, 73.3% are located in the USA (Figure 2G). The USA ranks far ahead of other countries regarding the total number of cited publications, while Denmark has the highest average number of citations (Figure 2F). This result means that the USA is a leading country in research on CD38-targeting antibody therapy in MM, while some European countries, such as Denmark, have published profound and widely accepted articles (8,12).
In regard to the related journals, VOSviewer was used to analyze 47 of the most-cited journals and divided them into 6 clusters (Figure 3A,3B). Each color represents a congeneric cluster referring to the same research field. CiteSpace software was also used to group research fields into 10 clusters (Figure 4C): cluster 1 "flow cytometry", cluster 2 "daratumumab", cluster 3 "BCMA", cluster 4 "cell line", cluster 5 "antitumor activity", cluster 6 "gene", cluster 7 "non-Hodgkin lymphoma", cluster 8 "peripheral blood", cluster 9 "survival", cluster 10 "anti-CD38". These are 10 major research orientations. MM is the second most common hematological malignancy in Europe, and the number of patients is growing year by year (19). Monoclonal antibodies plus IMiDs and PIs, followed by long-term chemotherapeutic drugs and/or hematopoietic stem cell transplantation (HSCT) and chimeric antigen receptor T cell (CAR-T) therapy, are currently the widely accepted induction treatment (20,21). The anti-CD38 monoclonal antibody showed a good safety profile and favorable efficacy (8,22-24). As the influence of the immune-related microenvironment needs to be confirmed in further clinical practice, anti-CD38 monoclonal antibodies still have research value. Moreover, the cocitation analysis of CiteSpace sifted out some landmark articles (Figure 3E,3F). The works of de Weers M, Lokhorst HM, Dimopoulos MA, Palumbo A and Lonial S were contained in the top 5 in centrality and among the top 20 with citation bursts, indicating that the improvement of the response rate in patients with newly diagnosed MM or RRMM through the use of anti-CD38 monoclonal antibodies has attracted the attention of researchers (8,11,22-24). For patients in different stages of MM, researchers have been devoted to determining proper therapeutic regimens for first-line or palliative treatment to achieve the best effect with the fewest side effects. The work of de Weers M and van de Donk NWCJ revealed the mechanism of antitumor activity, suggesting a role for anti-CD38 monoclonal antibodies in the treatment of other hematological malignancies (11,25). Multiple mechanisms contribute to antitumor activity against CD38-positive lymphoma cells, such as ADCC and ADCP. Further clinical studies are required to explore the use of CD38-targeting antibodies to treat non-Hodgkin lymphoma (26). These antibodies were included in cluster 5 "antitumor activity" and cluster 7 "non-Hodgkin lymphoma". Included in cluster 4 "cell line" and cluster 8 "peripheral blood", Krejcik et al. found a previously undiscovered, multidimensional immunomodulatory role of DARA (12). Nijhof IS revealed the mechanism of DARA resistance and possible solutions (27). CD38 expression and inherent or drug-induced increases in CD55 and CD59 expression influence the outcome of treatment. Suggestions such as choosing a suitable treatment interval or adding all-trans retinoic acid (ATRA) were given. Further clinical practice is needed to prove their feasibility. Overdijk MB expounded that Fcγ receptor-mediated cross-linking induces programmed cell death, which is conducive to the antitumor activity of CD38-targeting antibodies (28). Casneuf T illuminated the influence of NK cells on the safety and efficacy of DARA by performing in vitro and in vivo experiments (29). Although NK cells play an important role in ADCC, their reduction during treatment with DARA does not interfere with clinical outcomes.
This series of articles helps new researchers in the field to have an overall grasp of the mechanism and importance of CD38-targeting antibodies in treating MM. Bibliometric analysis by CiteSpace revealed that the current hotspots of this field are monoclonal antibodies, refractory MM, idecabtagene vicleucel and BCMA (Figure 5A-5C). BCMA, a transmembrane glycoprotein selectively expressed on mature B cells, accelerates the proliferation, differentiation, maturation and survival of B cells (30). The fact that MM cells express markedly higher levels of BCMA than normal cells provides a new prospect for antibody-based immunotherapy. Idecabtagene vicleucel, an anti-BCMA CAR-T therapy approved by the FDA for RRMM patients who have received four or more prior lines of therapy, suggests that a promising kind of immunotherapy has matured (31). Researchers are devoting themselves to developing therapeutic regimens to obtain deep and durable responses in refractory MM patients. Efforts have been made to achieve long-term remission and improve the response rate and quality of life of these patients. The dual-map overlay analysis showed that the prime domains of CD38-targeting antibody therapy in MM are medicine and biology (Figure 5D). In brief, there are some potential problems to be addressed in relation to CD38-targeting antibody therapy in MM. The first is to improve the prognosis of CD38-targeting antibody-refractory patients. Novel immunotherapies targeting BCMA, such as CAR-T therapy, antibody-drug conjugates and bispecific T cell engagers, are emerging and will be worth the wait (32-34). The second is to explore other combination regimens containing CD38-targeting antibodies for follow-up treatment (35,36). Rechallenging patients with IMiDs and/or PIs after DARA therapy may be efficacious for those who were previously refractory to IMiDs and/or PIs. This hypothesis remains to be verified in further clinical practice. Nevertheless, there are some limitations to this research. On the one hand, as we used data only from the WoSCC, the results of this study might differ slightly if more data were included. On the other hand, the analysis of the development tendency was qualitative and accordingly subjective. Conclusions In the past, efforts were applied to elucidate the mechanism and effectiveness of CD38-targeting antibodies in treating MM. Future research hotspots will focus on anti-BCMA CAR-T immunotherapy for patients with RRMM. According to this article, new researchers can discover its course of development and structural relationships in this field. Acknowledgments Funding: This work was supported by the Wenzhou Science & Technology Bureau (No. ZY2021013). Footnote Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://tcr.amegroups.com/article/view/10.21037/tcr-21-1962/coif). The authors have no conflicts of interest to declare. Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the noncommercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
2022-03-09T16:27:05.132Z
2021-01-01T00:00:00.000
{ "year": 2022, "sha1": "1d0b7fd51037f8015c937f76ee2b4735547890f1", "oa_license": "CCBYNCND", "oa_url": "https://tcr.amegroups.com/article/viewFile/61803/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "67e4d5a3ada4d688a0c4ff377849e63ee8ac8d17", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
271609741
pes2o/s2orc
v3-fos-license
Evaluating the potential for denitrification in permeable interlocking concrete pavements This research explored the denitrification potential of a submerged zone incorporated into the present permeable interlocking concrete pavement (PICP) design. The following three main factors controlling denitrification were investigated through a laboratory study: (i) detention time, (ii) inclusion of a carbon source (newspaper), and (iii) submerged zone depth. The study used 10 columns, each fully packed with 50–63 mm washed aggregate and including a 1.5 m deep submerged zone. Columns were paired according to detention time (1, 2, 5, 10 and 'varied' days), with one set including newspaper and the other set not. All columns were loaded with synthetic stormwater over a 4-month period. Samples were taken from different submerged depths (0, 300, 600, 900, 1 200 and 1 500 mm) and analysed for concentrations of ammonia (NH₃), nitrate (NO₃⁻) and phosphate (PO₄³⁻) every 10 days. This study found that 10 or more days' detention and the provision of a carbon source had the most significant impact on denitrification, providing overall mean NO₃⁻ and nitrogen removal above 41% and 59%, respectively. Moreover, a submerged depth of 300 mm was sufficient to achieve a minimum NO₃⁻ removal of 41% in columns which included a carbon source and had 10 days' detention. Generally, an increase in detention time resulted in an increase in NH₃ and PO₄³⁻ removal, with overall mean values of 86% and 30%, respectively, achieved with 10-day detention periods. INTRODUCTION The United Nations (2018) has estimated that the global population living in urban areas will have increased by 4.7% from the 2018 figure by the year 2030 – an additional one billion people. This will lead to increasing demands for infrastructure and safe water sources in many urban areas, particularly in Asia and Africa. However, the provision of these is unsustainable with current development models. The 2030 Sustainable Development Goals (particularly 'Water and Sanitation Goal 6' and 'Sustainable Consumption and Production Goal 12') were thus developed to promote solutions. Authorities and regulators are now being pressed to protect existing water sources within cities and promote water reuse. There is a greater focus on minimising the negative effects associated with urbanisation. Urbanisation has led to an increase in the number of impermeable surfaces, such as roadways, which act as large waterways during a rainfall event. Fertilizers, animal excrement, atmospheric deposition, sewage, motor oil, and vehicle emissions, inter alia, settle and accumulate on these surfaces (Collins et al., 2010a). They contaminate stormwater with nitrogen compounds, sediments, heavy metals, pathogens, hydrocarbons, and organics, which are then washed into nearby rivers, streams and dams or infiltrate into groundwater (Bean et al., 2007). Elevated concentrations of nitrate (NO₃⁻) can result in eutrophication and toxic algal growth (Kim et al., 2003; Kuruppu et al., 2019). Moreover, elevated NO₃⁻ concentrations (above the regulation limit) in drinking water are a concern to many authorities, as regularly ingesting nitrates can increase one's risk of developing cancer and cause methemoglobinemia in infants (Ward et al., 2018).
Historically, stormwater control measures (SCMs) have been primarily implemented as hydrological management tools used to optimise the removal of runoff and reduce flooding (Collins et al., 2010a, b). Recent research on SCMs has focused on their pollutant removal capabilities, but SCMs have not always been effective at removing nitrogen compounds (e.g., NO₃⁻ and NH₃) present within stormwater. This has led many researchers to investigate the treatment efficacies of permeable pavements (PPs), which have been successful at removing NH₃, total suspended solids, hydrocarbons, ammonium, total Kjeldahl nitrogen, total phosphorus, biochemical oxygen demand, chemical oxygen demand (COD), heavy metals and pathogenic bacteria (Collins et al., 2010a, b; Tota-Maharaj and Scholz, 2010; Kuruppu et al., 2019). However, most researchers consider PPs to be ineffective at reducing NO₃⁻ concentrations, which were often found to be elevated in PP effluent (Collins et al., 2010a; Kuruppu et al., 2019). Few studies have investigated the potential for removing NO₃⁻ with the inclusion of a submerged zone within a PP structure to facilitate denitrification. Within this zone, denitrifying bacteria can convert NO₃⁻ to nitric oxide (NO), nitrous oxide (N₂O) and nitrogen gas (N₂), which escape to the surrounding atmosphere. The efficiency of denitrification is dependent on numerous factors including: (i) detention time, (ii) inclusion of a carbon source and (iii) submerged (anoxic) depth (Kim et al., 2003). This study aimed to provide insight into how detention time, the inclusion of a carbon source (newspaper in this case), and submerged depth impact NO₃⁻, NH₃ and PO₄³⁻ removal, with a view to improving PICP design (the most widely used PP structure) and thus improving the quality of stormwater entering water sources (e.g., dams, rivers, etc.). METHODS Ten column reactors were filled with 50–63 mm washed aggregate (Fig. 1) and loaded with synthetic stormwater to a depth of 1.5 m over 4 months in a laboratory column study. Columns 1P, 2P, 5P, 10P and 10MP were provided with newspaper as a carbon source to support denitrification, while Columns 1N, 2N, 5N, 10N and 10MN were not. Five different detention times were evaluated (1, 2, 5, 10 and 'varied' days) in pairs of columns, with one column with newspaper and the other not. The 'varied' days columns (Columns 10MP and 10MN), with an overall mean detention time of 10 days, were subjected to drying and wetting cycles that mimicked Cape Town's rainfall patterns. The pollutant concentrations (NH₃, NO₃⁻ and PO₄³⁻ – the latter added as an additional nutrient of concern) were measured at different submerged depths (300 mm depth intervals) of each column every 10 days. The pH, temperature, dissolved oxygen, and soluble COD concentrations present in each column were also measured to monitor submerged zone conditions. Column reactor design and experimental setup Ten columns were designed and constructed specifically for this study (Fig. 1). Each column weighed an estimated 250 kg with aggregate and synthetic stormwater included. A steel frame was thus constructed to support all 10 columns, with removable sections that allowed columns to be safely placed or removed. A notional 50–63 mm aggregate was chosen as this is what is typically used in the sub-base layer of PICP to provide the reservoir volume as well as support against vehicle loads (Biggs, 2016). The aggregate also provides the surface area for biofilm formation.
Apart from a 100 mm depth that was kept free of aggregate to reduce the risk of overflow, a 400 mm deep unsubmerged zone was left at the top of each column to mimic the upper PICP layers that are generally never submerged and thus provide aerobic conditions for nitrification. The total submerged depth of 1 500 mm was maintained using a raised underdrain (Fig. 1b). Compared with previous studies (Kim et al., 2003; Lynn et al., 2016; Kuruppu et al., 2019), this study incorporated a greater submerged depth of 1 500 mm, with sample ports located at depths of 0, 300, 600, 900, 1 200 and 1 500 mm (Fig. 1b). A depth of 1 500 mm was chosen as the maximum submerged depth likely to be implementable in the field. The sampling ports were made by threading taps into the wall of the 315 mm diameter uPVC column and sealing them with marine glue to prevent leaks and air from entering the submerged zone. Newspaper was the chosen carbon source in this experiment as it was concluded in a study by Kim et al. (2003) to be the best overall electron-donor substrate for providing efficient NO₃⁻ removal. The newspaper was first torn into strips before approximately 950 g of it was packed into the 75 mm diameter uPVC pipes located in the centre of each of Columns 1P, 2P, 5P, 10P and 10MP, so that the entire 1 500 mm submerged depth of each column was filled with newspaper. Four holes of 8 mm diameter were drilled around the central pipe at 100 mm intervals along its length to allow the synthetic stormwater and the relevant organisms to encounter the newspaper in the inner pipe. Since newspaper remained at the end of the experiment and the soluble COD concentrations remained reasonably consistent through the study, it was assumed that there was sufficient newspaper. No microbiological analysis was undertaken. Synthetic stormwater preparation and loading Table 1 presents the concentrations of the nutrients used in the synthetic stormwater feed for this study. These were determined by Liu (2020), who considered them 'worst-case' concentrations of pollutants in stormwater according to the literature. The synthetic stormwater feed was made in 500 L batches in a 500 L low-density polyethylene tank covered in black plastic to reduce algal growth (Fig. 2a). The procedure for the preparation of the synthetic stormwater is provided in Appendix A. A new batch of synthetic stormwater was required every 2-3 days. The first 5 batches of synthetic stormwater were tested for consistency and, once it had been established that the nutrient concentrations were as per Table 1, no further testing was carried out on subsequent batches. The synthetic stormwater in the 500 L tank was stirred directly before loading the columns to ensure homogeneity of the solution. The full submerged volume in Columns 1P, 2P, 5P, 10P, 1N, 2N, 5N and 10N was replaced with fresh synthetic stormwater using a watering can to mimic rainfall (Fig. 2b), in accordance with the loading schedule presented in Table 2. While most of the detention times were kept constant throughout the experiment, Columns 10MP and 10MN were subjected to an irregular loading designed to represent the variable detention times in the field more realistically.
The determination of the loading pattern for Columns 10MP and 10MN was informed by Cape Town's rainfall patterns (Table B1, Appendix B) using daily rainfall data for the months May to August (124 days), the rainy season, from Cape Town International Airport (CTIA) for 2002, a year in which the total rainfall depth closely matched the long-term mean for this station. The rainfall depths were then increased by a factor of 5, thereby mimicking a run-on factor (the ratio of the area of the catchment contributing to the flow onto the permeable pavement to the area of the permeable pavement) of 4, which is typical of many PICP designs (Winston et al., 2016). Daily rainfalls of 5 mm or less were ignored, as negligible discharge from PICPs has been recorded for small rainfalls such as these (Pratt et al., 1989; Drake and Bradford, 2013). The daily rainfall data in millimetres was then multiplied by a factor of 0.45 L/mm to generate a loading schedule in litres for both columns that, incidentally, had a mean 10-day detention time over the 124-day experimental period (Table B2, Appendix B).

Chemical analysis

Samples of the stormwater in the 10 columns were analysed every 10 days over 4 months. Samples from the bottom of each column were tested for soluble COD concentrations. A sample was also taken from each of 6 depths (0, 300, 600, 900, 1 200 and 1 500 mm) in the submerged zone of all 10 columns, a grand total of 60 samples every 10th day. These were then tested at the University of Cape Town's Water Quality Laboratory for NH 3 , NO 3 − and PO 4 3− concentrations using a Thermo Scientific Gallery Discrete Analyser. The DO, pH and temperature readings were taken for each sample using hand-held OHAUS probes.

A total of 13 batches of samples were analysed over the 4-month period, giving a total of 780 samples tested, excluding soluble COD. A total of 130 soluble COD samples were tested over this same period.

RESULTS AND DISCUSSION

NH 3

Figure 3a presents the overall mean NH 3 concentration (mean across submerged depth and time) for each column over the duration of the study. All columns reduced NH 3 by a mean concentration of at least 1.210 mg/L (60% NH 3 removal) over the duration of the study. Moreover, columns with longer detention times displayed greater reduction in NH 3 concentration than those with shorter detention times, except for Columns 10P and 10MP. This is displayed in Fig. 3a, where mean NH 3 concentration decreased from Columns 1P to 5P (1-, 2- and 5-day detention) and Columns 1N to 10N (1-, 2-, 5- and 10-day detention) by 0.220 mg/L and 0.350 mg/L, respectively. Of these columns, Column 1P, with the shortest detention time of 1 day, provided the lowest NH 3 reduction of 1.374 mg/L (69% NH 3 removal), and Column 10N, with the longest detention time of 10 days, provided the highest NH 3 reduction of 1.725 mg/L (86% NH 3 removal). The longer the detention time of a column, the more time the nitrifying microorganisms had to reduce NH 3 . In addition, columns both with and without carbon performed similarly in terms of their ability to reduce NH 3 . This is supported by a one-way analysis of variance (ANOVA) test, which indicated that the inclusion of a carbon source had no significant effect on NH 3 concentration at the 5% significance level (p = 0.13).
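The loading pattern described above reduces to a short transformation of the daily rainfall record: events of 5 mm or less are discarded, the remaining depths are scaled by 5 to mimic the run-on factor, and the result is converted to litres at 0.45 L/mm. A minimal sketch of that arithmetic is given below; the three constants come from the text, but the order in which the steps are applied reflects our reading of the description, and the example rainfall series is invented rather than the 2002 CTIA record.

```python
# Converts a daily rainfall record (mm) into a loading schedule (litres) following
# the procedure described above: ignore days of <= 5 mm, scale the remaining depths
# by 5 (mimicking a run-on factor of 4) and convert to litres at 0.45 L/mm.
# The step ordering is our interpretation of the text.
RUNON_SCALING = 5
LITRES_PER_MM = 0.45
MIN_RAINFALL_MM = 5

def loading_schedule(daily_rainfall_mm):
    """Return a list of (day, litres) loading events for Columns 10MP/10MN."""
    events = []
    for day, depth in enumerate(daily_rainfall_mm, start=1):
        if depth <= MIN_RAINFALL_MM:
            continue                                # negligible PICP discharge
        litres = depth * RUNON_SCALING * LITRES_PER_MM
        events.append((day, round(litres, 1)))
    return events

if __name__ == "__main__":
    # Hypothetical winter rainfall record in mm/day (not the actual CTIA data).
    example_rain = [0, 3, 12, 0, 7, 26, 0, 0, 4, 15]
    print(loading_schedule(example_rain))   # [(3, 27.0), (5, 15.8), (6, 58.5), (10, 33.8)]
```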
In general, Fig. 3b shows that the NH 3 concentration increased with an increase in depth from 600 to 1 500 mm, and most columns displayed their greatest increase in NH 3 concentration between 1 200 and 1 500 mm. This could possibly be because an increase in depth resulted in a decrease in DO concentration, which inhibited the nitrifying microorganisms' ability to reduce NH 3 due to limited O 2 availability (Collins et al., 2010a; Kuruppu et al., 2019).

NO 3 −

NO 3 − easily percolates through filter media and soil (Collins et al., 2010a; Kuruppu et al., 2019). After nitrification had occurred in the columns, NO 3 − was further reduced to nitrogen oxides in a process called denitrification. Biological denitrification occurs in the anoxic zone through four main steps: the reduction of NO 3 − to NO 2 − (Eq. 3), the conversion of NO 2 − to NO (Eq. 4), the production of N 2 O from NO reduction (Eq. 5), and lastly the formation of N 2 via further reduction of N 2 O (Eq. 6) (Kuruppu et al., 2019).

Figure 4a shows that an increase in detention time, for columns which included newspaper, allowed for a decrease in NO 3 − concentration, as denitrifying microorganisms had more time to reduce NO 3 − . With reference to columns which included newspaper, the shorter detention times of 1 and 2 days resulted in an increase in overall mean NO 3 − concentration of 1.690 and 1.350 mg/L (282 and 225%), respectively. The longest detention time of 10 days (Column 10P) provided the greatest decrease in overall mean NO 3 − concentration, of 0.421 mg/L (70% NO 3 − removal). Detention times of 1, 2 and 5 days resulted in an increase in NO 3 − concentration, possibly due to the denitrifying microorganisms not having enough time to reduce NO 3 − , causing NO 3 − accumulation. Column 10MP suggests that an overall mean NO 3 − removal of 41% is achievable in the field if PICP includes newspaper in its submerged zone and allows for a mean 10-day detention. The overall mean NO 3 − removal of 70% (Column 10P) and 41% (Column 10MP) in this study was far greater than that reported in other PP studies. Collins et al. (2010a) found that, in most cases, studies reported negative NO 3 − removal (production of NO 3 − ) in PPs.
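The numbered equations (Eqs. 3–6) referred to above were not carried through in the text; for reference, the conventional half-reactions for the four reduction steps are sketched below. These are the standard textbook forms and may differ in notation or detail from the equations given in the source.

```latex
% Conventional stepwise denitrification half-reactions (standard forms; the
% source's Eqs. 3-6 may be written differently).
\begin{align}
\mathrm{NO_3^-} + 2\mathrm{H^+} + 2e^- &\rightarrow \mathrm{NO_2^-} + \mathrm{H_2O} \\
\mathrm{NO_2^-} + 2\mathrm{H^+} + e^-  &\rightarrow \mathrm{NO} + \mathrm{H_2O} \\
2\,\mathrm{NO} + 2\mathrm{H^+} + 2e^-  &\rightarrow \mathrm{N_2O} + \mathrm{H_2O} \\
\mathrm{N_2O} + 2\mathrm{H^+} + 2e^-   &\rightarrow \mathrm{N_2} + \mathrm{H_2O}
\end{align}
```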
The inclusion of newspaper had a significant effect on NO 3 − concentration. Columns 1N to 10MN show that the absence of newspaper resulted in an increase in overall mean NO 3 − concentration of over 200%. This was possibly due to denitrifying microorganisms not having the carbon source (electron donor) they needed to reduce NO 3 − to N 2 (Kuruppu et al., 2019). Instead, nitrifying microorganisms continued to convert NH 3 to NO 3 − without denitrifying microorganisms being able to reduce NO 3 − , thus resulting in NO 3 − accumulating in these columns. This indicates that short detention times (1 and 2 days) and the exclusion of a carbon source both hindered the ability of denitrifying microorganisms to reduce NO 3 − to a similar extent. Figure 4b shows that, generally, NO 3 − concentration decreased with an increase in depth from 0 to 1 500 mm, and most columns displayed their greatest decrease in NO 3 − concentration at depths between 1 200 and 1 500 mm. This could possibly be because an increase in depth resulted in a decrease in DO concentration, which improved anoxic conditions and allowed denitrifying microorganisms to reduce NO 3 − at a higher rate (Knowles, 1982; Sperling, 2007; Kuruppu et al., 2019). Columns 10P and 10MP, with the highest overall mean NO 3 − removal of 70% and 41%, respectively, remained relatively unaffected by an increase in depth greater than 300 mm. This was possibly due to there being limited availability of NO 3 − remaining for denitrifying microorganisms to further reduce.

The mean DO, pH and temperature readings for all columns ranged from 1.5 to 2.0 mg/L, 6.29 to 6.83, and 23.2 to 23.3°C, respectively. These values are within the necessary range for denitrification (Knowles, 1982; Volokita et al., 1996; Xu et al., 2009). Moreover, mean soluble COD concentrations ranged from 2 to 100 mg/L for all columns over the duration of the study, indicating the availability of a soluble carbon substrate for denitrification.

PO 4 3−

Figure 5a shows that Columns 1P, 2P, 1N and 2N, with detention times of 1 and 2 days, provided the highest overall mean PO 4 3− concentrations, ranging from 0.696 to 0.671 mg/L (13% to 16% PO 4 3− removal). Columns 5P, 10P, 10MP, 5N, 10N and 10MN, with longer detention times of 5, 10 and 'varied' days, provided the lowest mean PO 4 3− concentrations, ranging from 0.626 to 0.535 mg/L (22% to 33% PO 4 3− removal). This indicates that an increase in detention time allowed for a decrease in PO 4 3− concentration, as the microorganisms had more time to reduce PO 4 3− . A one-way ANOVA test confirmed that the inclusion of a carbon source had no significant effect on PO 4 3− concentration at the 5% significance level (p = 0.45). Columns both with (Columns 1P, 2P, 5P, 10P and 10MP) and without (Columns 1N, 2N, 5N, 10N and 10MN) newspaper were able to reduce overall mean PO 4 3− concentrations by 0.104 to 0.265 mg/L (13% to 33% PO 4 3− removal).

In general, PO 4 3− concentration decreased with an increase in depth from 0 to 900 mm. Thereafter, an increase in PO 4 3− concentration occurred in most columns when the depth increased from 900 to 1 500 mm. It is not known why PO 4 3− concentration increased in columns between depths of 900 and 1 500 mm. There is very little published research on PO 4 3− concentrations at such depths in PICP.

Nitrogen mass balance

Figure 6 presents the distribution of the overall mean mass of nitrogen (%) in each column. Nitrogen compound 'X' represents nitrogen oxides such as NO, N 2 O and N 2 , which can escape to the surrounding atmosphere and are therefore assumed to be removed from the system. No assessment could be made of nitrogen incorporated within the biomass of organisms growing within the columns, which was assumed to be negligible, but may not be. With these assumptions, all columns appeared to remove between 41% and 72% of nitrogen from the influent synthetic stormwater. Although Column 10P, with a 10-day detention, provided the second highest overall mean nitrogen removal of 69%, it performed the best in terms of NO 3 − removal. Column 10MP indicates that an overall mean nitrogen removal of 59% is achievable in the field if PICP includes newspaper and has a mean 10-day detention.
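The nitrogen balance behind Figure 6 is a simple piece of bookkeeping: influent nitrogen is the NH 3 -N plus NO 3 -N fed to a column, effluent nitrogen is what remains measurable as NH 3 -N and NO 3 -N, and the unaccounted-for remainder is assigned to compound 'X' and treated as a gaseous loss. The sketch below reproduces that bookkeeping on invented concentrations expressed as mg N/L; the numbers are placeholders rather than study data, and nitrogen bound in biomass is ignored, as in the text.

```python
# Minimal nitrogen bookkeeping in the spirit of Figure 6: influent N not recovered
# as NH3-N or NO3-N in the effluent is attributed to gaseous species ("compound X")
# and treated as removed. All values are mg N/L and purely illustrative.
def nitrogen_balance(influent_mgN, effluent_mgN):
    """Return the percentage split of influent nitrogen between effluent species and 'X'."""
    total_in = sum(influent_mgN.values())
    split = {species: 100.0 * conc / total_in for species, conc in effluent_mgN.items()}
    split["X (gaseous, assumed removed)"] = (
        100.0 * (total_in - sum(effluent_mgN.values())) / total_in
    )
    return split

if __name__ == "__main__":
    influent = {"NH3-N": 1.6, "NO3-N": 0.5}   # hypothetical feed concentrations
    effluent = {"NH3-N": 0.3, "NO3-N": 0.2}   # hypothetical column outflow
    for species, pct in nitrogen_balance(influent, effluent).items():
        print(f"{species}: {pct:.0f}% of influent N")
```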
CONCLUSIONS

This study concludes that the inclusion of a submerged zone in PICP in the field has the potential to promote denitrification. A submerged depth of 300 mm was sufficient to achieve a minimum NO 3 − removal of 41% in columns which included a carbon source (newspaper in this instance) and had a 10-day detention time, and an overall mean nitrogen removal of 59% was achievable under these conditions. An increase in detention time is associated with a decrease in both NH 3 and PO 4 3− concentrations, with the longest laboratory detention time of 10 days resulting in overall mean removals of 86% and 30%, respectively. However, the inclusion of a carbon source had no significant impact on NH 3 and PO 4 3− removal. In most cases an increase in submerged depth resulted in an increase in NH 3 concentration from 600 to 1 500 mm, and a decrease in PO 4 3− concentration from 0 to 900 mm. PICP thus has the potential to significantly reduce the NH 3 , NO 3 − and PO 4 3− compounds present in stormwater through the incorporation of a submerged zone, ultimately improving the quality of runoff entering the natural environment.

Figure 1. (a) Sectional view through a column; (b) photo of a column supported by the steel frame
Figure 2. (a) Creating the synthetic stormwater solution; (b) columns being loaded using a watering can
Figure 3. (a) Overall mean (± standard deviation) NH 3 concentration for each column; (b) mean NH 3 concentration as a function of submerged depth
Figure 4. (a) Overall mean (± standard deviation) NO 3 − concentration for each column; (b) mean NO 3 − concentration as a function of submerged depth
Table B2. Loading schedule for Columns 10MP and 10MN (in litres)
Table 1. Synthetic stormwater nutrient concentrations
Table 2. Detention time and carbon source for each column
2024-08-02T15:07:45.764Z
2024-07-31T00:00:00.000
{ "year": 2024, "sha1": "7d6067b3efc9c2cefa23ac735a077ea6b3cd3489", "oa_license": "CCBY", "oa_url": "https://doi.org/10.17159/wsa/2024.v50.i3.4086", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "02ac1043478c9302c0d1cb7675b799436399919a", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [] }
136361018
pes2o/s2orc
v3-fos-license
Evaluation of the properties and storage stability of EVA polymer modified asphalt

Polymers are increasingly used for the modification of asphalt to enhance highway pavement performance. This paper describes the polymer modification of two penetration-grade asphalts with ethylene vinyl acetate (EVA). Two base asphalts from two crude oil sources (Baiji paraffinic asphalt and Qaiyarah aromatic asphalt) were mixed with ethylene vinyl acetate (EVA) at different polymer contents. The physical properties, including softening point, penetration at 25°C and ductility, of the base asphalts and the ethylene vinyl acetate modified asphalts were studied. The test results show that the softening point increased while the penetration and ductility values decreased. The effect of storage time on uniformity at higher temperature was studied by means of a storage stability test.

Keywords: polymer modified asphalt; phase separation; EVA copolymer; storage stability.

Introduction

Asphalt is a natural derivative of the distillation of crude oil, which is particularly suitable as a binder for road construction. At room temperature asphalt is a flexible material with a density of 1 g/cm 3 , but at low temperatures it becomes brittle and at high temperatures it flows like a viscous liquid. The physical, mechanical and rheological properties of asphalt depend basically on its colloidal structure, linked to its chemical composition, in particular to the proportion of asphaltenes and maltenes. Asphaltenes are polar materials of high molecular weight (10 000 to 100 000) which are insoluble in n-heptane and constitute between 5% and 25% of the asphalt. On the other hand, maltenes are constituted by resins and by aromatic and saturated oils that are soluble in n-heptane and possess lower molecular weight [1,2]. According to some authors, there is a relationship between the asphaltene content and physical properties such as viscosity, penetration, softening point, etc. [3] The use of synthetic polymers for the modification of asphalt binder dates back to the early 1970s [4], with these binders subsequently having decreased temperature susceptibility, increased cohesion and modified rheological characteristics [5,6]. The most commonly used additives are copolymers such as SBS, EVA, etc.

The wide use of this type of polymer for modification is due to their thermoplastic nature at higher temperatures and their ability to form networks upon cooling. It has been shown [7,8] that rheological properties may change dramatically upon modification of the base asphalt with these types of polymers.

Materials

Two base asphalts (A and B) were used to produce a number of laboratory asphalt-EVA blends. Asphalt A was obtained from the Baiji refinery (Iraq) and is derived from Kirkuk paraffinic crude oil, while asphalt B was obtained from the Qaiyarah refinery (Iraq) and is derived from Qaiyarah aromatic (non-paraffinic) crude oil. Table 1 lists the physical properties of these asphalts (A and B).

The EVA copolymer was available as pellets 2 to 3 mm in diameter (vinyl acetate content = 19%), supplied by Special Material Trading Limited Co. EVA copolymers are thermoplastic materials formed by the copolymerization of ethylene and vinyl acetate. Their characteristics lie between those of low-density polyethylene, a semi-rigid and translucent product, and those of a transparent and rubbery material similar to plasticised PVC and certain rubbers.
Preparation of samples and measurement of physical properties

Three hundred grams of the asphalt were heated until fluid and poured into a 1000 ml spherical flask, which was then placed in a heating mantle. Upon reaching 165 °C, the required amount of EVA copolymer (2%, 4%, 6%, 8%, 10% or 12% by asphalt weight) was added to the asphalt slowly, to prevent the polymer particles from agglomerating. Mixing was performed using a mechanical mixer at 250 rpm for 5 minutes, after which the speed was lowered to 150 rpm, and mixing was continued for 3 h. When blending was completed, the individual modified blends were divided and tested for penetration at 25 °C, softening point and ductility at 25 °C according to ASTM D5, ASTM D36 and ASTM D113, respectively.

Storage stability test

The storage stability of the modified and unmodified asphalts was measured according to ASTM D 5892-96a: each sample was poured into a glass tube 32 mm in diameter and 150 mm in height. After closing the tube with a silicone rubber stopper, it was stored vertically at 163 °C in an oven for 48 h; the tube was then cooled to room temperature and cut transversely with a glass cutter into three equal parts. The upper and lower parts were melted and tested for softening point. If the difference between the top and bottom softening temperatures is less than 2.5 °C, the blend is considered a modified asphalt binder with good storage stability, and stirring is not required for storage of up to 48 h [9].

Effect of EVA content

Polymer modification significantly enhanced the rheological (physical) properties of the asphalt. The viscous and elastic properties of the modified asphalts increased with increasing polymer content.

The effect of EVA polymer modification on the conventional binder properties of the two polymer modified asphalt groups can be seen in Table 2 and Figs. 1, 2 and 3: penetration and ductility decrease and the softening point increases with increasing polymer content, because the polymer additive increases the average molecular weight of the asphalt. The rate of change of penetration and softening point gradually decreases as the polymer concentration is increased.

Storage stability of EVA modified asphalts

Owing to the difference in solubility parameter and density between EVA and asphalt, phase separation can take place in EVA-modified asphalts during storage at elevated temperatures.

Comparison of the results in Fig. 4 suggests that the storage stability of the EVA-modified Qaiyarah asphalt is better than that of the EVA-modified Baiji asphalt, especially at 2, 4 and 6% EVA, because Qaiyarah asphalt was found to contain 9.25% (w/w) sulfur with respect to total asphalt in the crude oil [10]. It is commonly believed that sulfur chemically crosslinks the polymer molecules and chemically couples polymer and asphalt through sulfide and/or polysulfide bonds [11].

It can also be seen that the softening point of the bottom section is higher than that of the upper section for some samples (8, 10 and 12% EVA). This indicates that the crosslinks and chemical bonds formed by sulfur are disrupted at high EVA polymer contents [12].
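The storage stability criterion described above comes down to a single comparison: a blend passes if the softening points of the top and bottom thirds of the stored tube differ by less than 2.5 °C. A small helper of the following kind makes that screening explicit; the 2.5 °C threshold is the one quoted in the text, while the example softening points are invented values, not measurements from this study.

```python
# Screens modified asphalt blends for hot-storage stability using the criterion
# described above: |softening point (top) - softening point (bottom)| < 2.5 degC
# after 48 h of storage at 163 degC. Example softening points are hypothetical.
STABILITY_LIMIT_C = 2.5

def is_storage_stable(top_softening_c, bottom_softening_c, limit_c=STABILITY_LIMIT_C):
    return abs(top_softening_c - bottom_softening_c) < limit_c

if __name__ == "__main__":
    blends = {  # % EVA: (top, bottom) softening points in degC (invented)
        "2% EVA": (56.0, 56.8),
        "8% EVA": (63.0, 71.5),
    }
    for name, (top, bottom) in blends.items():
        verdict = "storage stable" if is_storage_stable(top, bottom) else "phase separation likely"
        print(f"{name}: delta = {abs(top - bottom):.1f} degC -> {verdict}")
```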
Conclusions

The environmental variations, especially between summer and winter and between day and night, greatly affect the durability of pavement asphalt.

EVA was one of the first polymers to be used successfully in asphalt applications. It essentially stiffens the binder and thereby makes the asphalt more resistant to traffic loading and rutting, particularly at higher road temperatures during hot summers, when asphalt surfaces are at greater risk of softening and rutting under traffic.

The data presented on the modified asphalt binders show that very good stability results are obtained for blends of both asphalts containing 2% EVA: no phase separation is detected after 2 days.

Fig. 1. Change in softening point of asphalts A and B at different EVA concentrations.
2019-04-29T13:17:52.392Z
1999-11-30T00:00:00.000
{ "year": 1999, "sha1": "0b3bcdd1b364b5133d4300360e3a823de33405e2", "oa_license": "CCBY", "oa_url": "https://edusj.mosuljournals.com/article_58799_7c820e806f5ae681af2f6d93ccf177ae.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "0b3bcdd1b364b5133d4300360e3a823de33405e2", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
37775943
pes2o/s2orc
v3-fos-license
Environmental Information—Explanatory Factors for Information Behavior As sustainable waste management has become an important environmental concern, growing emphasis is being given to policy tools aimed at increasing recycling behavior by households. Information is a common policy tool, but may not always reach the individuals whose behavior is being targeted, i.e., those reluctant to recycle. This study examined individual differences in attention to recycling information and demand for such information. A nationwide survey in Sweden showed that having personal norms for recycling is important when it comes to obeying and seeking environmentally relevant information. In contrast to earlier research, this study found that lack of information alone is not a significant antecedent to the intention to seek information. Personal norms were found to moderate the effect of perceived lack of information on the intention to seek information. Introduction Disproportionate solid waste production is a serious problem in modern societies, and sustainable management has therefore become an important concern for governments [1].Recycling can reduce the amount of waste that goes to incineration or landfills, as well as reducing energy use and emissions generated, bringing environmental and economic benefits [2].Along with the development of technical solutions, policy measures promoting individuals to recycle are critically important.Information is a OPEN ACCESS widespread policy measure aimed at increasing environmental concern, awareness about environmental problems and (for example) participation in recycling programs.Depending on the source and message, information as a policy tool can be an effective way to spread knowledge about new systems of waste handling, facilities and the recycling procedure.Information also has the potential to persuade, creating positive attitudes towards the recycling system among the public [3,4].Problem awareness has been shown to be an important antecedent for pro-environmental behavioral intentions and behavior [5][6][7][8][9].This problem awareness could be activated by information, and an improved knowledge about the problem.However, it has been questioned whether improved knowledge and positive attitudes are successful ways of changing actual behavior [10,11].According to Martinez and Scicchitano [12], the use of mass media information can be effective within limits, but the effectiveness is partly due to the -design‖ of the message.An effective message is suggested to take motivational factors as well as possible barriers for changing the behavior into account [10,[13][14][15]. 
Another question addressed here is whether mass media information could actually pass by without attention from recipients and consequently without being processed or taken into account.For example, implementation of a recycling program must often be accompanied by sufficient information and promotion in order to make householders aware about how and when to use it.Informational promotions may make use of various media such as leaflets delivered to households or announcements in the local press or radio.Though leaflet drops or adverts in the local press are among the cheapest ways to administer policy tools as information, it has been questioned whether their message is even received or understood [16].Despite extensive publicity over many years for the kerbside recycling program in England, many residents claimed to have seen none, which supports the argument that information in terms of leaflet drops are often regarded as junk mail [17].Thus, the persuasive impact of information depends not only on factors associated with the information (e.g., the type and structure of arguments), but also on the recipient's attention and cognitive processing of the information, as well as individual factors.These individual factors include socio-demographic variables such as age, gender and education, as well as psychological variables such as attitudes, values and beliefs, and they can all impact how a message is attended to and how persuasive it is [18].Apart from difficulties arising in reaching the target group of the information, more general education and information to give a broader picture of waste management are required in order to apprise the public of the necessity of sustainable waste management.This study addresses individual differences in relation to two aspects of information behavior, namely attention and information seeking. 
Information Behavior This study examines information behavior, which is an -umbrella term‖ that includes, among others, perceiving and processing information, as well as actively seeking information [18].Information processing has perhaps been the most commonly researched information behavior in psychology, e.g., [3,[19][20][21][22].Previous research on information processing typically attempts to explain the different steps and ways in which information is processed and how it affects attitudes [20].Another field of research has examined information seeking [23], focusing on motivational factors and antecedents to seek information, e.g., [23][24][25].Overall, there is a need among policy makers to acknowledge that the ability to comprehend, accept and process information may vary between individuals.It can therefore be assumed that there are individual differences with regard to how information is comprehended and sought, in respect to a policy tool.Previous research has been dominated by studying information behavior regarding product information (e.g., consumer information; [21]) and information regarding health (e.g., health information-seeking; [26]).Less attention has been given to behaviors associated with information regarding the environment.It can be assumed that reactions toward environmental information, such as recycling information, differ from reactions toward information regarding consumer products [10].Environmental information often appeals to self-sacrificing actions and, in contrast to health information, it contains information about impersonal risk rather than personal risk [27].There is thus a need to study information behavior specifically in relation to environmental information. By recognizing individual differences in regard to responses to information, recycling information campaigns may be designed to reach even those people who are less likely to attend to, or seek, recycling information. 
Information Attention In accordance with early information processing research, McGuire [21] proposed a behavioral -chain of responses‖ comprising six steps in an attempt to understand information processing [21].The argument was that omission of any of these six information processing steps would cause the sequence to be broken, so that subsequent steps would not occur.Attention is one of the early steps in McGuire's paradigm, as it is essential for further processing of the information and, in turn, whether the message has any impact on the recipient's attitudes or behavior [20,21].According to the information-processing paradigm proposed by McGuire, the recipient must be presented messages in a suitable way and, given that exposure occurs, the recipient must pay attention to the message in order for it to produce attitude change.Today, most of us are exposed to a vast amount of information, in many different forms, but only a fraction of the information is processed.People tend to engage in information that comforts and agrees with their own ideas and avoid information that contradicts their opinion or that does not seem relevant to them.This phenomenon has been described in terms of selective exposure [28].Both attention and exposure have been more or less operationalized in similar ways in previous research.However, while exposure has been measured as individuals' preferences for exposing themselves to different information, attention should be regarded as a more passive process of perception [20].There are individual factors associated with how information is selectively attended to, e.g., socio-demographical factors such as age, gender, education and lifestyle correlate with attention to information [18].Moreover, factors associated with attention to information can also be psychological in the sense that they relate to a person's beliefs and attitudes.It has been recognized that people are more likely to notice information that is relevant for their current goals.In addition, it is widely accepted that people attend to information that agrees with the attitudes they already hold [20,28,29].At the same time, there is a tendency for people to prefer information that confirms their preconceptions or hypotheses, regardless of whether it is true or not [30].Therefore, it is reasonable to assume that people with strong positive environmental attitudes will pay attention to information about environmental protection to a greater extent than those who have neutral or ambivalent environmental attitudes. Information Seeking Information seeking is a conscious, energetic way of acquiring information [22].Compared with attention, information seeking can be viewed as a more active and directed behavior [31].Analysis of information-seeking behavior is worthwhile, since recipients must primarily pay attention to a persuasive message in order to generate an attitude change [20].In research on information-seeking behavior, people are assumed to search for information when they experience a lack of knowledge [22].However, beliefs about one's own capacity to gather information have been found to have a moderating effect on information seeking [27]. 
Furthermore, people sometimes avoid information despite feeling a lack of knowledge [32], again illustrating that more intervening factors need to be identified.Beliefs about the topic concerned and personal goals in relation to that topic could be additional factors to explore in relation to information seeking.If people are presented with information that they think is relevant for them, they may also seek information with the intention of making appropriate decisions.For example, changes in the surroundings world can induce a sense of personal relevance and by that have an effect on information seeking.A possible influence of changes in circumstances such as critical incidents (e.g., the ultimate up-to-date oil leak outside the Mexican Gulf) could therefore serve as a stimulating factor for information attention and seeking.Other factors found to be significant in predicting information seeking include a notion of normative social pressure to be informed [33]. Norms Previous studies have concluded that norms constitute a strong motive for environmental behavior [34,35].A norm is generally defined as an expectation held by an individual about how he or she ought to act in a particular social situation [36].The norm provides an impetus for proper behavior and the individual need not deliberate about consequences.Norms can further be divided into two groups, one at the societal level, that is social norms, and one at the personal level, personal norms.In general, personal norms are social norms that have been internalized and have become a part of a person's conscience.The essential distinguishing factor between social norms and personal norms is where the threat of sanctions or the promise of rewards comes from.Such sanctions or rewards can be administered by other people in a social group (social norms) or by the actor her-or himself (personal norms).In short, sanctions and rewards can come from outside or from within [37]. Social norms are further divided into descriptive and prescriptive norms, each referring to a separate source of human motivation [38].The descriptive norm describes what is typical or normal behavior in a specific situation.A descriptive norm can offer an information-processing advantage and a decisional shortcut when people choose how to behave in a particular situation [38].A prescriptive or injunctive norm specifies what people ought to do; how people in the same culture or society ought to act to preserve everybody's best interest.It refers to rules or beliefs as to what constitutes morally approved or disapproved conduct. Research in environmental psychology has stressed the role of personal norms as personal moral obligations (e.g., -what I ought to do‖) in environmental behavior, perhaps even more predominantly when it comes to recycling [39][40][41][42].Personal norms are specific personal guidelines for appropriate behavior and may either be internalized social norms or norms derived from higher order values [36]. 
However, results regarding the unique influence of social norms on recycling are mixed and indirect [42], whereas personal norms are directly related to recycling behavior [40,43,44].Consequently, the effect of personal norms appears to be stronger than the effect of social norms, especially on recycling [42,45].In addition, some studies show that social norms influence behavior only via personal norms [40], see also [41,44,46].If the behavior involves self-sacrifice, personal norms serve as a reminder of values important to the individual [35,47], which may help overcome the barrier to high-cost recycling.In addition, personal norms to take pro-environmental action are generally activated by beliefs that environmental conditions are threatened.One aim of the present study was to investigate whether personal norms have a moderating effect on information-seeking behavior, e.g., whether in order to be willing to attend to and seek environmentally relevant information it is necessary to have more specific problem awareness, namely that recycling can be an important contribution to reducing environmental problems.Therefore, personal norms can be assumed to also influence information behavior in relation to recycling information. The Present Study In this study, we examined the antecedents of attention to information on recycling.In particular, we investigated how demographic factors (gender, education and type of dwelling) and psychological factors (environmental concern and personal norm) affect attention to information.The primary aim was to identify factors that can improve the likelihood of information being attended to. A second aim was to examine factors that can predict seeking of recycling information.Here, we used the intention to seek information as the dependent variable.We hypothesized that lack of information as well as personal norms and an intention to change behavior can predict an individual's intention to seek information.In addition, we expected personal norms to moderate the relationship between lack of information and the intention to seek information (see Figure 1).This relationship (as an interaction term) was tested by a hierarchical regression model. Sample and Procedure A postal questionnaire was administered to 1,000 randomly selected respondents aged 20-65 years living in Sweden.A lower age limit was set in order to avoid adolescences that still live at their parents' home and an upper limit was set in order to comprise respondents that are not yet retired from work.The questionnaire was sent out in two batches during the months of May and June 2007.Each batch included a cover letter, a copy of the questionnaire and a pre-paid return envelope.In addition, all respondents were sent a combined reminder and thank you card.After these two reminders, the total response rate was 48% (N = 430).Usable questionnaires from 418 respondents were included in subsequent analyses.The mean age of the respondents was 45 years and gender distribution included 44.5% men.Approximately one-third of the respondents had a university degree. 
Questionnaire In the introduction to the questionnaire, respondents were informed about the overall purpose of the study, which was to investigate people's thoughts and behavior in relation to source separation of waste.The focus was mainly on the respondents' current knowledge about waste separation, and on their motives for recycling or their reasons for not doing so.Attitudes, norms and self-reported behavior were also measured and there were questions tapping beliefs and behavior regarding recycling information.The questionnaire consisted of seven parts, three of which are analyzed here, namely attitudes, norms and knowledge about recycling behavior.Questions about background characteristics (age, education, type of dwelling and gender) concluded the questionnaire. Measures Information attention.The dependent variable was measured by asking respondents if -they had paid attention to any information regarding waste separation or recycling during the past month‖.Initially, answers were given on a 3-point scale consisting of 1 (several times); 2 (sometimes) and 3 (not at all).The variable was then recoded into a dichotomous variable of noticed (1-2); or not noticed (3). Information-seeking intention.The second dependent measure was measured by a single statement: -I am prepared to search for more information in order to sort more of my household waste‖.Answers were given on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). Lack of information.This was measured by asking the respondents whether they believed they had sufficient knowledge about what to recycle; whether they perceive a lack of information.Answers were given as one of three options: no, uncertain and yes. Behavior change intention.This was measured by the statement: -I plan to recycle more of my waste in the coming year‖.Answers were given on a 7-point scale ranging from 1 (strongly disagree) to 7 (strongly agree). Each subsequent construct, i.e., personal norm and environmental concern, was derived from a principal component analysis (PCA) with Varimax rotation. Personal norm.PCA on the statements capturing the normative component resulted in a one-factor solution.Answers to the statements included were all given on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). The items -I believe I have a personal responsibility to recycle my waste‖, -I feel a personal obligation to contribute to environmentally friendly waste separation‖, -I would react negatively if I discovered that waste that could have been recycled had been disposed of in the wrong place‖, and -I feel bad if I don't recycle my waste‖ formed a factor labeled personal norms (α = 0.84). Environmental concern.This factor appeared from the PCA and included the following items: -Environmental issues are important and should receive more attention‖, -It is important that we do what we can to minimize the load on the environment‖, -I am prepared to lower my standard of living if necessary to protect the environment‖, -It takes a lot of effort to act in an environmental friendly manner‖ (reversed) and -Too much tax money is devoted to environmental protection‖ (reversed).Ratings of the items included were all given on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree).The factor was labeled environmental concern (α = 0.70). 
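Composite indices such as the personal norm and environmental concern scales are conventionally scored by averaging the retained items and checking internal consistency with Cronbach's alpha (reported above as 0.84 and 0.70). The sketch below shows one standard way to compute both on a respondents-by-items matrix; the simulated responses and variable names are ours, and the PCA used in the study to select the items is not reproduced here.

```python
# Computes Cronbach's alpha for a set of Likert items and forms a mean scale
# score, mirroring how composite indices such as the personal norm scale are
# usually built. The simulated item responses below are illustrative only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of numeric responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent_norm = rng.normal(4.5, 1.2, size=200)                  # hypothetical construct strength
    items = np.clip(latent_norm[:, None] + rng.normal(0, 0.8, (200, 4)), 1, 7)  # four 7-point items
    print("Cronbach's alpha:", round(cronbach_alpha(items), 2))
    print("Scale score of first respondent:", round(items[0].mean(), 2))
```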
For the demographic variables, gender was coded as -0 = man‖ and -1 = woman‖; education was re-coded into three groups: 1 = statutory schooling, 2 = higher schooling and 3 = college or above.Finally, type of dwelling was coded as living in a flat (0) or in a house (1). Data Analysis Analyses were carried out in two parts.In the first part, a binary logistic regression was conducted to test three demographic variables (gender, education and type of dwelling) and two psychological variables (environmental concern and personal norm for recycling) for the dependent variable attention of information. In the second part, information seeking (a continuous variable) was used as the dependent variable and a multiple regression was performed to test the model proposed for information-seeking intention.Lack of information, personal norm and behavior change intention were used as independent variables. Attention to Recycling Information A logistic regression was performed with information attention as the binary dependent variable.Type of dwelling, gender, education, environmental concern and personal norm were used as predictor variables.A total of 397 cases were analyzed and the full model was found to be statistically reliable (chi-square 13.37, df = 5, p < 0.05).The model accounted for between 3.3% and 4.5% of the variance in information attention.Personal norm, type of dwelling and gender were all significantly associated with paying attention to information.Those who felt a normative obligation to recycle were more likely to have noticed the information, as were women, and those living in a house rather than in a flat.Table 1 gives the Wald statistic and associated degrees of freedom and probability values for each predictor variable.The model correctly predicted the attention to information for 63.6% of respondents, 27.3% of those who had not paid attention to any information and 84.4% of those who had paid attention to information. Intention to Seek Information In the second part, a hierarchical regression analysis was performed in order to test the predictability of lack of information, behavior change intention and personal norm on information-seeking intention.When questionnaires with missing data (on the variables included in the analysis) were removed from the analyses, 408 questionnaires remained and were included in the analysis. Table 2 shows the means, standard deviations and product-moment correlations between the items and indices.As expected, the means were near the middle of the scale, except for personal norm, which had a slightly higher mean.The correlations were all non-significant.A hierarchical regression was performed in order to examine the model (see Figure 1) for prediction of intention to seek information.In a first step, three predictors were added: intention to recycle more waste, personal norm and lack of information.This model was statistically significant, F(3,402) = 22.5, p < 0.001, R² = 0.144.As shown in Table 3, personal norm and intention to recycle more both had a significant effect, while lack of information was not a significant predictor.Since the hierarchical regression included an interaction term, all variables were standardized. 
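The hierarchical regression just described, whose second step (adding the interaction term) is reported in the next paragraph, follows a standard moderation template: main effects of the standardized predictors are entered first, then the product of lack of information and personal norm, and the test is whether the interaction adds explained variance. The sketch below shows that specification on simulated data; the variable names mirror the constructs in the text, but the data, effect sizes and the use of statsmodels are our own illustration, not the authors' analysis.

```python
# Two-step (hierarchical) regression with a lack-of-information x personal-norm
# interaction, mirroring the moderation analysis described in the surrounding text.
# Data are simulated; predictors are generated as standardized (z-scored) variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 408
df = pd.DataFrame({
    "lack_info": rng.normal(size=n),
    "norm": rng.normal(size=n),
    "change_intent": rng.normal(size=n),
})
# Simulated outcome: main effects of norm and behaviour change intention plus a
# lack_info x norm interaction (the hypothesized moderation), plus noise.
df["seek_intent"] = (0.30 * df["norm"] + 0.25 * df["change_intent"]
                     + 0.15 * df["lack_info"] * df["norm"]
                     + rng.normal(size=n))

step1 = smf.ols("seek_intent ~ lack_info + norm + change_intent", data=df).fit()
step2 = smf.ols("seek_intent ~ lack_info * norm + change_intent", data=df).fit()

print("Step 1 R^2:", round(step1.rsquared, 3))
print("Step 2 R^2:", round(step2.rsquared, 3))   # increase = variance added by the interaction
print(step2.params.round(3))                     # 'lack_info:norm' is the moderation term
```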
The interaction variable was entered in a second step.The interaction variable coded for the interaction between lack of information and personal norm.The addition of this variable significantly increased the model; F(4,401) = 4.16, p = 0.042, R 2 adj = 0.009.The resulting model was significantly greater than zero, F(4,401) = 18.1, p < 0.001, R² = 0.153. The relationship between lack of information and intention to seek information was not a straightforward linear main effect.Only when lack of information interacted with personal norm did it influence the intention to seek information.As the data show, lack of information had an effect on information-seeking intention only for those who felt a personal obligation (personal norm) to recycle. Discussion The main purpose of this paper was to examine individual differences in relation to information behavior.This was done by exploring attention to information and seeking of information.The study first proposed and tested different factors influencing individual information attention.The data obtained supported the predicted relationship between personal norms and readiness to pay attention to environment-related information, suggesting that a personal norm, which signifies what the individual ought to do in relation to recycling, framed information attention.This result is partly in accordance with the view presented by [47][48][49] positing that values direct attention toward value-congruent information, resulting in an increased general awareness of environmental problems.The results in the present study suggest that personal norms could have the same influence.Although the factors included in the logistic regression could only account for a small proportion of the variance in information attention, those factors that were not statistically significant are of theoretical interest.Level of education was not a significant predictor, signifying that lack of formal education is not a barrier to information attention.Environmental concern was also unrelated to attention to information.This result contradicts earlier findings that people are biased towards information consistent with their attitudes [50].Confirmation bias is a cognitive bias, whereby people tend to notice and look for information that confirms their existing beliefs, whilst ignoring anything that contradicts those beliefs.It is thus a type of selective thinking.However, the present study found that having a positive attitude to environmental issues did not explain attention to environmentally relevant information. A general awareness concerning environmental problems as threats to the biosphere and humankind did not influence the level of information attention.Having general awareness of the problem was thus shown to be important for generating problem awareness concerning a specific situation, in this case unsorted waste as a cause of environmental problems and the seriousness of these waste-induced problems. The second part of the analysis used intention to seek for information as a dependent variable, and here too the personal norm was an important predictor.As well as being related to the intention to seek information, the personal norm was an important moderator of the effect of lack of information on intention to seek information. 
Taken together, these findings may have practical implications for information seeking, as information insufficiency (here: lack of information) has often been seen as the main motivator for information-seeking behavior.People with neutral or negative attitudes to sorting household waste may not perceive that they lack knowledge-they may feel they are well-informed but suspicious about the necessity or effectiveness of recycling.If this is the case, a possible solution is to frame environmental information differently in order to reach those who hold neutral or negative attitudes toward environmental issues.By exploring the antecedents of attention to recycling information, we can find indications why some information fails to reach the target audience. Moreover, the results showed that a personal intention to change behavior had a significant effect on the intention to seek information, assuming that information will be sought only when it is relevant for current goals.A practical conclusion is that people holding personal norms favoring recycling are more prone to search for environmental information.This calls for a rethink about how different kinds of information are channeled.Furthermore, considering household waste management (where people put their household waste) a habitual behavior, with little or no reasoning or planning required, recycling information aimed at changing attitudes will probably pass by without any notice from recipients.However, when establishing new environmentally friendly habits, e.g., from not separating waste to separating, the model of habit change proposes that different kinds of information are more useful in different phases of behavior change [51].A prerequisite for changing a habit is to be aware of current behavior and know that there are alternative ways.For example, Biel et al. [51] argue that changing a habitual behavior into a new stable habit progresses through several steps, where each step generates a need for different kinds of support.As a consequence, different kinds of information are needed in diverse phases.People who have internalized social norms into personal norms can be assumed to be in the latter stages of behavior change and feedback information can motivate them to continue with the behavior.Thus recycling information provided e.g., on internet sites should focus less on creating stronger positive attitudes (and for that reason probably easily processed messages, which in turn may promote the use of heuristics) and more on giving positive feedback.Since those who find the information here are more likely to already be recycling, information on how local goals for source separation are being met (descriptive social norms) as well as on the environmental benefits (effectiveness) can give further motivation. 
Previous studies suggest that an effective design of the kerbside intervention scheme, by up-following public consultation and gradual introduction of kerbside recycling into targeted areas with a -high quality communications campaign‖, the degree of -satisfaction‖ with the information about the kerbside scheme was highly improved [52][53][54].Not only satisfaction with the information campaign has been reported.It has also been claimed that a communications campaign had strongly -influenced‖ individuals to recycle more, and that newsletters were the most effective communications method.By recognizing the central role of a quality communications strategy in delivering high participation rates in kerbside collection schemes, research has shown the importance to a dedicated communication with a plethora of different communication strategies [55]. A weakness in the present study was that by using the survey method we were not able to check for any differences in the amount and type of information the participants had been presented with or accessed.Overall, further research is needed to understand how people with different levels of waste separation behavior and in different phases of habitual change respond to different types of information in different media.In regard to the representativeness of the sample in this study, a few notions are worth mentioning here.It should be clear that a sample of 1,000 is not a guarantee of its ability to accurately represent a target population.A survey sample's ability to represent a population has in large to do with the sampling frame: that is the list from which the sample is selected.Selection bias is a risk when some parts of the target population are not included in the sampled population.However, the sample in this study is recruited by a Swedish company that is specialized in information management and responsible for the operation of the so called SPAR database and by that it guarantees the randomly selected sample.We ordered a randomized sample with an age criteria that was set between 20 and 65 in order to minimize the possibility of inclusion of adolescents who still live with their parents and to only include people that still work and are healthy enough to live at their own home, in order to ensure the representativeness. The take-home message here would be that by communicating different kinds of information that can reach different target groups, there is a greater chance of influencing behavior and attaining better sustainability in waste management. Figure 1 . Figure 1.Hypothesized model of information seeking intention. Table 1 . Logistic regression of information attention *. Table 2 . Mean values (M), standard deviation (SD) and Pearson product moment correlations among the variables in the hierarchical regression.All measures range from 1 to 7, except lack of information that ranges from 1 to 3. Table 3 . Results of the hierarchical regression analysis of intention to seek information.
2017-05-22T13:35:03.413Z
2010-09-02T00:00:00.000
{ "year": 2010, "sha1": "eeb08a1edcb79b0dd57ac757ba436fe5f938308f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/2/9/2785/pdf?version=1424776425", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "81c57c4250bfb9cfa03fdae424d68309e1968cb0", "s2fieldsofstudy": [ "Environmental Science", "Sociology" ], "extfieldsofstudy": [ "Economics" ] }
213613351
pes2o/s2orc
v3-fos-license
MULTIPLICATION AND DIVISION OF FRACTIONS: NUMERICAL COGNITION DEVELOPMENT AND ASSESSMENT PROCEDURES

ABSTRACT

The number and its basic operations can be conceptualised within a general system of relations. Children need to construct a system of numbers within which they can add, subtract, multiply and divide any rational number. Products and quotients can be defined in terms of general relational schemes. In this study, we examine whether elementary school children can construct a system of numbers such that fraction multiplication and division are based on the construction of general relational schemes. Groups of students are not homogeneous and children progress at different rates. For reliable assessment, teachers need methods to examine developmental and individual differences in cognitive representations of mathematical concepts and operations. A logistic regression curve offers a visualisation of the learning process as a function of average marks. The analysis of fraction multiplication and division items shows an improvement in correct response probability, especially for students with a higher average mark.

KEYWORDS: relational schemes; teaching of fraction multiplication; logistic regression curve; assessment of educational development.
Introduction The conceptual development of number and its basic operations (addition and multiplication) has constituted an essential part of research on cognitive development (Carpenter, Fennema, Franke, Levi & Empson, 2015;Empson, Levi, & Carpenter, 2011;Piaget, 1952;Lamon, 2005;Lortie-Forgues, Tian, & Siegler, 2015;Piaget, 1952;Piaget & Inhelder,1958;Siegler, et al., 2010;Siegler & Lortie-Forgues, 2015;Torbeyns, Schneider, Xin, & Siegler, 2015;Vygotsky, 1986). Scholastic education is one of the principal sources of the children's scientific and mathematical concepts and is also a powerful force in directing their development (Vygotsky, 1986). The main educational goal in elementary mathematics is that children develop mathematical descriptions and explanations and use mathematical tools to solve academic and real problems (Organisation for Economic Cooperation and Development (oecd), 2016). It has been proposed that elementary school children's development of fraction knowledge (including decimals, percentages, ratios, rates, and proportions) seems to be especially important for overall mathematics achievement and later academic success. Moreover, children's understanding of decimals simultaneously draws on their understanding of fractions. In addition to their importance for educational and occupational success, fractions are crucial for theories of numerical development (Siegler and Lortie-Forgues, 2015;Torbeyns, et al., 2015). However, elementary school teachers and students tend to understand arithmetic as a collection of procedures, and students often are taught computational procedures with fractions without an adequate explanation of how or why the procedures work (Siegler, et al., 2010;. Although elementary school teaching focuses on both conceptual understanding and procedural fluency teachers should emphasise the connections between them (Siegler, et al., 2010). Academic tasks at elementary school create the necessary demands and conditions to conceptualise the number and its basic operations. According to Vygotsky (1986), systematic learning plays a leading role in the conceptual development of elementary school children. Vygotsky upholds that the development of spontaneous concepts knows no systematisation and goes from the particular event, object or situation upward toward generalisations. In an opposite way, the development of mathematical and scientific concepts is the consequence of a systematic cooperation between the children and the teacher. The mathematical and scientific concepts, therefore, stand in a different relation to the events, objects or situations. This relation is only achievable in conceptual terms, which, in its turn, is possible only through a system of concepts. Vygotsky (1986) emphasises that the acquisition of academic concepts is carried out with the mediation provided by already acquired concepts. In general, Gergen (2009) contends that the meaning of a word is not contained within itself but derives from a process of coordinating words and that language (and other actions), in essence, gain their intelligibility in their social use. In addition, Piaget (1952;Piaget & Inhelder, 1958) suggests that in formal thought there is a reversal of the direction of thinking of reality and possibility, and it is the reality that is now secondary to the possibility. Children conceive, for the first time, that the given facts form part of a set of possible transformations that has actually come about from a system of relationships. 
According to Piaget (1952), every totality is a system of relationships just as every relationship is a segment of totality. The possibilities entertained in formal thought are by no means arbitrary or equivalent to imagination freed of all control and objectivity. Quite to the contrary, the advent of possibilities must be viewed from the dual perspective of logic and physics; this is the indispensable condition for the attainment of a general form of equilibrium. Children recognise relations, which in the first instance they assume as real, in the totality of those which they recognise as possible. The number and its basic operations can be conceptualised within a system of relations. At the beginning, certain aspects of objects are abstracted and generalised into the concept of number and the mathematical basic operations (addition and multiplication). However, mathematical concepts represent generalisations and schematic representations of certain aspects of numbers, not objects, and thus signify a new level of cognitive processes (Zapatera Llinares, 2017). This new processing level transforms the meaning of the first conceptualisations of number and its basic operations. This produces the construction of one general system of numbers. Generalisations can be developed using different approaches. Children in the first courses of elementary school can develop concepts about fraction numbers through counting or measuring activities. Simona, Placab, Avitzurc, & Karad (2018) show how students can develop a measurement concept of fractions. Their proposal is consistent with the E-D approach developed by Davydov & Tsvetkovich and the Japanese text series, Tokyo Shoseki, developed by Fujii & Iitaka. From the perspective of the E-D curriculum, measurement is not just a basis for fraction numbers, but for numbers in general from the first elementary grades. The proposal is based on the idea that number should be developed as a general concept, and that any number, whole or fraction, does not require a change in the general basic concept. In contrast to the counting and measuring cognitive activities, we focus on children's understanding of fractions based on relational schemes. Our activities promote children's generalisation of multiplication and division computational procedures to include whole and fraction numbers in general schemes. The images children construct might imply measuring cognitive activities, but measuring does not play a central role in our learning sessions. The core of our programme is the concept of number as a relational scheme. Our proposal is based on the construction of generalised conceptualisation of, at least, rational numbers and the development of generalised procedures to perform rational numbers mathematical operations. Cognitive schemes of fractions and their basic operations As a general rule, instruction in fraction numbers, i.e. a number that can be represented by an ordered pair of whole numbers a ⁄ b (Musser, Burger, & Peterson, 2008), and their basic operations begins with addition and subtraction of fractions with common denominators, proceeds to instruction in those operations with unequal denominators and to fraction multiplication, and then moves to fraction division. We propose that the best approach to present this subject is to begin with fraction multiplication and fraction division. 
That is because children need to know how to multiply and/or divide fractions in order to obtain equivalent fractions with the aim of adding, subtracting or comparing fractions with unequal denominators. Consequently, in this paper, we constrain our research to multiplication of rational numbers and its related operation, division. The focus of our inquiry is on children's schemes that define multiplication as a mathematical process whereby a rational number multiplied by another rational number results in a third rational number. Cognitive construction of rational numbers Elementary school children do not discriminate between the set of natural numbers and the set of rational numbers. Numbers, in general, are signs or symbols representing an amount or total, and they can be conceptually understood in relation to other numbers. Every natural number is specifically represented by a unique symbol (Siegler & Lortie-Forgues, 2015). However, in general, any number can be represented in a great variety of mathematical relationships. Vygotsky (1986) asserts that through the study of arithmetic, children learn that any number can be expressed in countless ways because the concept of any number also contains all of its relations to all other numbers. For example, a whole number can be represented as a fraction and hence has an infinite number of fraction equivalences (Musser, Burger & Peterson, 2008). The number one, for instance, can be expressed as the difference between any two consecutive numbers, or as any number divided by itself, or in a myriad of other ways. According to this relational perspective, every number can be represented by infinite expressions. The number 5 can be defined or represented as: 5 = 4 + 1 = 6 − 1 = 3 + 2 = 10/2 = √25 = 35/7 = … In view of this, we conclude that children's cognitive structures conceptualising numbers constitute relational schemes. A relational scheme can be defined as any scheme whose essential characteristic or feature is a relationship between at least two concepts, objects or situations (Díaz-Cárdenas, Sankey-García, Díaz-Furlong, & Díaz-Furlong, 2014). In Vygotskian words, we cannot study concepts as isolated entities but we must study the "fabric" made of concepts. We must discover the connections between concepts based on the principle of the relation of generality, not based on either associative or structural relationships. Teacher instruction relying on the typical mathematical tasks of elementary school creates the conditions that engender children's need to construct a system of numbers within which they can add, subtract, multiply and divide. Scholastic tasks like calculating the number which added to five equals three, or calculating the number which multiplied by five equals thirty-one, constitute the basis for expanding the number system, restricted at first to the positive integers, to include the negative and rational numbers. Natural numbers are not closed under subtraction and they are not closed under division either. Therefore children need to expand the number system to include zero, negative numbers and fractions. At least, they need to understand and conceptualise the rational numbers (ℚ, from quotient). Within ℚ they can subtract and divide any number (except divide by zero). This number system includes a variety of relations in terms of comparisons and equivalences of spatial or temporal magnitudes and quantities (length, surfaces, volumes, units of weight or time) or abstract numbers.
In this paper we present data about a very important issue related to opposing approaches to the introduction of fraction multiplication and division. One research perspective that contends that fractions and decimals need to be treated differently from whole numbers, and a second approach, which we adopt, that is based on the construction of general relational schemes for any mathematical basic operation that combines two real numbers to form a single real number. In this study, we examine whether elementary school children can construct a system of numbers such that fraction multiplication and division are based on the construction of general relational schemes. We also want to test the hypothesis that children achieve an improvement on correct response probability, especially those students with a higher average mark. Fraction multiplication Research on the direction of effects of fraction arithmetic operations suggests that learner' incorrect predictions about products and quotients result from the belief that multiplication yields answer greater than both factors, and that dividing yields answer smaller than the dividend Graeber, Tirosh & Glover, 1989). This question depends on the particular case and it can be answered if the student understands the multiplication scheme or the division scheme in itself. Basically, students must develop a sound understanding of fraction operations so as to analyse and modify their misconceptions about multiplication and division (Greer, 1988). Therefore we need to help children to develop a reconceptualization of number that includes the fractional basic operations. In developing general cognitive schemes it is not a relevant issue if a product or quotient is greater o smaller than any of the factors or the division elements. Fraction multiplication and division must be developed as cases of general relational schemes and, basically, as a conceptual generalisation of these operations with natural numbers. Elementary school children can construct a system of numbers such that multiplication and division, products and quotients, are defined by every number comprised in the system. Multiplication can be expressed by the words "multiplied by" or "times" (the corresponding Spanish words are "por" and "veces" respectively). An algebraic expression of a product c is a × b = c. This can be read as a times b or b times a equals c. Likewise, it can be transcribed as the product c results from taking a times the number b or taking b times the number a. In a similar way children can say that a product results from adding a number to itself a particular number of times. To prevent students' belief that multiplication should always yield answers larger than either factor we introduce gradually fraction multiplication exercises that result in products that can be at the same time greater than one of the factors and smaller than the other factor. Cognitive systems, according to Piaget (1975), never reach a final equilibrium point but they are evolving in a continuous process of progressive equilibration. Cognitive schemes are constantly modified by school exercises so they become able to give a comprehensive account of number multiplication and division. 
Elementary school children commonly learn to calculate a product that can be the result of taking:
a) a whole number of times a whole number;
b) a whole number of times a non-whole number or fraction number;
c) a non-whole number or fraction number of times a whole number;
d) a non-whole number or fraction number of times a non-whole number or fraction number.
Children learn multiplication and its properties by multiplying whole numbers, the first multiplication case (a). Children's understanding of fractions based on relational schemes can be introduced by (b) or (c) multiplications. They can conceptualise multiplication by fraction numbers as taking a whole number times a fraction number (b) or taking a fraction times a whole number (c). If we use the same numbers in both cases, children have a fractional multiplication example of the Commutative Property for Number Multiplication (18 × 1/3 = 1/3 × 18). Case (d) takes a fraction of a fractional part, for example, two fifths times five sevenths. Most elementary school children understand that multiplication computational procedures apply in the same way to fractions when they are provided with opportunities to solve multiplications involving fractions. Problem solving in mathematics requires an understanding of the relations involved in a problem and developing a corresponding translation into a mathematical relation (Vygotsky, 1986). Children can be helped to quickly recognise patterns of information and to organise data in schemes, and they will be able to develop relational schemes that generalise these mathematical relations. Products and quotients can be defined in terms of relational schemes (Díaz-Cárdenas, et al., 2014). A general multiplication scheme must include any rational number (decimal or fraction). According to Empson and Levy (2011), children must think of a fraction as a number. Product defined in relational terms: given factors y and x, the product xy is
- y times x
- x times y
- the y-ple of x
- the x-ple of y
In conceptualising different objects in a name or a category it is necessary to select a set of common properties or qualities and determine those that contrast them with other elements belonging to other categories (Díaz-Cárdenas, et al., 2014; Rogers & McClelland, 2004). Children understand that all four multiplications mentioned above represent a mathematical operation that results from taking one number a number of times. One contrasting feature is the procedural knowledge that produces the resulting product of: 1) taking a whole number of times a whole number; 2) taking a whole number of times a part of another number, which is an equivalent operation to taking a specific fraction times a whole number; 3) taking a specific fraction times a fraction number. Fraction division Children learn that there is a number that multiplied by 3 equals 9, and there is a number that multiplied by 3 equals 12. But if there is a Closure Property for Fraction Multiplication there must be a number that multiplied by 3 equals 10 and another number that multiplied by 3 equals 11 (see the section Learning procedure). Here we can introduce the division of fractions. On the subject of division students also need to avoid some common misconceptions, as a significant number of children and their teachers believe that the quotient must be a whole number (Graeber, et al., 1989). They hardly represent the remainder as a fraction part of the quotient. On the other hand, incorrect responses to the direction of effects on division tasks are by-products of a misconstruction of products and quotients.
Therefore, we begin by considering division as a mathematical process in which a rational number is divided by another rational number to produce a third rational number named the quotient, and we basically apply the missing-factor approach (Musser, et al., 2008). This means that division consists of three mathematically related numbers: a dividend, a divisor, and a third number called the quotient or missing factor. The children's task is to find the number that multiplied by the divisor equals the dividend, and they can define division for every two numbers within only one general scheme for all rational numbers: y ÷ x = q, where q is the number that multiplied by x equals y (x ≠ 0). School Assessment Analysis The second, but no less important, objective of this study is based on elementary school teachers' need for reliable assessment methods to examine developmental and individual differences in cognitive representations of fractions and in the effects of interventions aiming at improving conceptual knowledge of fractions. Assessment as part of the learning process is very effective when it is designed to reflect the understanding of how students learn. It is important to know how students progress in learning academic procedures and content. Assessment is an essential ingredient both in research and education processes. A valid assessment system implies a model of student cognition and learning in a specific topic, a set of beliefs about the kinds of data that will provide evidence of students' cognitive processes in learning, and an analysis and information processing for making sense of the evidence (National Academies of Sciences, Engineering, and Medicine, 2018). Assessment design and analysis are becoming as essential as other elements of teaching in Mexico. Teachers must include detailed rubrics in their didactic planning. These must contain evaluation parameters and procedures for performance analysis. In elementary school, children's learning depends on different individual factors. Groups of students are not homogeneous and children progress at different rates. Therefore, when teachers base their analysis on group average achievement, they cannot see how students are differentially progressing. The logistic function depicts the probability of success on an item as a function of a student's specified parameter, i.e. it is possible to analyse learning progress in relation to any variable that can be evaluated with non-categorical scales. With this tool, teachers or researchers can perform basic item analysis in relation to an ability parameter based on academic grades, psychological test scores, or performance on a cognitive scale. To attempt a first approximation analysis, we selected average marks as the parameter that can be related to the probability of a right answer to an item. We decided to study children's average mark or grade as the ability parameter. Average mark is basically a socially defined index that represents academic performance, and this index is only one element of the universal set of social indexes designed to assess and analyse learning processes.
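The product and missing-factor schemes described above can be stated mechanically. The following Python sketch is purely illustrative and is not part of the study materials; the function names and the example values are ours, chosen to mirror the kinds of relations discussed in these sections.

```python
from fractions import Fraction

def product(x, y):
    """Relational product scheme: x*y is "y times x" (equivalently "x times y")."""
    return x * y

def missing_factor(dividend, divisor):
    """Missing-factor scheme for division: the number q that,
    multiplied by the divisor, equals the dividend (divisor != 0)."""
    if divisor == 0:
        raise ValueError("the divisor must be non-zero")
    q = Fraction(dividend) / Fraction(divisor)
    assert q * divisor == dividend  # q is exactly the missing factor
    return q

# The product scheme does not care whether the factors are whole or fractional,
# and it is commutative: "18 times one third" equals "one third times 18".
assert product(18, Fraction(1, 3)) == product(Fraction(1, 3), 18) == 6

# Closure: the number that multiplied by 3 equals 10 is not a whole number,
# so the number system has to be extended to include 10/3.
assert missing_factor(10, 3) == Fraction(10, 3)

# Dividing by a fraction with the same scheme: the number whose half is 7.
assert missing_factor(7, Fraction(1, 2)) == 14
```

Exact rational arithmetic with Fraction keeps the focus on the relations themselves rather than on decimal approximations.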
Participants Fifth graders attending two elementary middle-income schools in Puebla city, México, participated in this study (N = 104). There were two fifth-grade groups studying in each school. One school pertains to the public school system and the other one is a private school. Only students with parental consent were included in the study. According to the official requirements of the Secretaría de Educación Pública (the Secretary of Education), fifth-grade children participating in this program had their tenth birthday during the year of the study. Tests and learning sessions were developed in the children´s schools. One group played a part as a control group and the other participated as the fraction multiplication and division learning group in each school. As the group A would be the learning group in the private school we decide to take the group B as the learning group in the public school. Therefore, we have two control groups and two learning groups (see Table I). Learning procedure and methods of microgenetic analysis The learning instruction period was necessarily brief because of our commitment to working the same learning sessions with the control group before the academic year finished. During the learning sessions, we asked children to write a verbal expression that makes visible a conceptual understanding of the fraction multiplication, as well as the standard mathematical expression and, when possible, to draw a picture or diagram representing the multiplication. When children calculate products that involve greater numbers they do not need to make a drawing (see Figure 1 as an example of tasks solved in school). Sessions with the fractions multiplication/division learning groups were delivered in a whole-class arrangement in half-hour periods two times per week for three and a half weeks. The first author had charge of the learning sessions and the school teachers did not intervene in the teaching of multiplication or division of fraction. Control groups did not receive any special intervention. To prevent parents or teachers intervention in the multiplication and division learning process we did not assign any homework. Some researchers have suggested that children make of errors that reflect inappropriate generalization from the corresponding whole number arithmetic procedures (Siegler & Lortie-Forgues, 2015;Lortie-Forgues, Tian, & Siegler, 2015, Siegler, Thompson, & Schneider, 2011. According to them an important factor that contributes to the difficulty that children commonly encounter with fraction arithmetic is the opposite direction of effects of multiplying and dividing positive fractions below and above one. Siegler & Lortie-Forgues affirm that understanding the direction of effects of multiplying and dividing proper fractions poses special problems for learners. Multiplying natural numbers always results in an answer greater than either multiplicand but multiplying two proper fractions invariably results in answers less than either multiplicand. Similarly, dividing by a natural number never results in an answer greater than the number being divided, but dividing by a proper fraction or decimal always does. Both an important number of students and some teachers show poor understanding of the directional effects of fraction and decimal multiplication and division (Siegler & Lortie-Forgues, 2015). These researchers recommend that understanding fractions requires recognizing that many properties of natural numbers are not properties of numbers in general. 
An instructional implication is that teachers and textbooks should emphasize that multiplication and division produce different outcomes, depending on whether the numbers involved are greater than or less than 1, and should discuss why this is true (Lortie-Forgues, Tian, & Siegler, 2015; Siegler, Thompson, & Schneider, 2011). For this reason, we designed school activities that give rise to the construction of a system of numbers such that fraction multiplication and division are based on the development of general relational schemes. In addition, by definition a fraction multiplication can be expressed as a/b × c/d = ac/bd, but in order to avoid a simple mechanistic procedure we do not use this definition to solve fraction multiplications during learning sessions, and neither do we use the "invert-the-divisor-and-multiply" procedure for fraction division (a/b ÷ c/d = ad/bc, with b, c and d ≠ 0). In our programme children did not learn to multiply fractions with the traditional method for multiplication, whereby numerators and denominators of the multiplying fractions are treated as if they were independent multiplication problems with whole numbers. Microgenetic methods offer a promising way to meet the challenges inherent in trying to understand change processes (Chen & Siegler, 2000, p. 12). The brevity of the analysed period allows us to assume that the observed effects will be largely a result of the interventions carried out, since the other social factors remain, on the whole, without significant changes. Obtaining a precise understanding of cognitive change requires observing such changes while they are occurring; defining the path of change, i.e. the sequence of knowledge states that the child passes through while gaining competence, constitutes a dimension that has proved useful in microgenetic studies (Fazio & Siegler, 2013). Our hypothesis is that children can go through the following path in learning fraction multiplication:
a) multiplication of a whole number by a fraction (how much is seven times one fifth? ¿cuánto es siete veces un quinto?);
b) multiplication of fractions whose numerators are 1, i.e. unitary fractions, by a whole number (how much is one fifth times twenty? ¿cuánto es una quinta vez veinte?);
c) multiplication of a nonunitary fraction by a whole number (how much is three fifths times twenty? ¿cuánto es tres quintas veces veinte?); in this case children can initially use the strategy of calculating first the product of a unitary fraction by the whole number (how much is one fifth times twenty? Four; ¿cuánto es una quinta vez veinte? Cuatro) and finally multiplying this product by the remaining whole numerator (three times four equals twelve; tres veces cuatro es igual a doce);
d) multiplication of fractions whose numerators are 1, i.e. unitary fractions (how much is one half times one fifth? ¿cuánto es media vez un quinto?);
e) multiplication of a unitary fraction by a nonunitary fraction (how much is one half times ten fifths? ¿cuánto es media vez diez quintos?);
f) multiplication of nonunitary fractions (how much is seven thirds times six fifths? ¿cuánto es siete tercias veces seis quintos?). Similarly, as mentioned above, children can initially use the strategy of calculating first the product of a unitary fraction by the other nonunitary fraction and finally multiplying this product by the remaining whole numerator (how much is one third times six fifths?
two fifths, and seven times two fifths equals fourteen fifths; Un tercio de vez seis quintos son dos quintos, y siete veces dos quintos es igual a catorce quintos). The comprehension activities that we applied to the different types of fraction multiplication were:
- understanding and solving fraction multiplication word problems;
- drawing a picture or diagram representing fraction multiplication;
- understanding and solving fraction multiplication problems represented numerically.
Consequently, we began to work with exercises like the following products (in the learning sessions we use examples not included in the evaluation tests): 2 × 1/3 =, 2 × 1/5 =, 2 × 1/7 =. Once these are read, respectively, as two times one-third equals, two times one-fifth equals, two times one-seventh equals, most children correctly answer that the respective products are two-thirds, two-fifths, and two-sevenths. Incidentally, in session discussions, children agree, at least some of them, that two-thirds is greater than one-third and smaller than two wholes, i.e. they acknowledge that the product is at the same time greater than one factor and smaller than the other factor. We then calculate products that involve greater whole numbers, for example 60 × 1/3 =. This kind of exercise lets the children generalise the multiplication relational scheme applied before to greater whole numbers and apply the associated commutative law for multiplication (a × b = b × a). These activities help students to recognise that n times a unitary fraction equals n of those fractional parts. Therefore sixty times one third equals sixty thirds. But this is equivalent to saying one third times sixty or one third of sixty. Most fifth-grade children correctly answer that one third of sixty equals twenty. Therefore, students understand and arrive at the conclusion that sixty thirds equals twenty (see Fig. 1). We worked immediately after on the multiplication of two fractions. Yet again, we used the word times (veces) to help children to apply the same product relational scheme when multiplying fractions. Children conceptualise multiplication by one half as a product that results from taking a fraction half a time. In general, multiplying by one half represents dividing a fraction into two halves and taking one half of the original fraction (see Fig. 1). The result produces fractions with a denominator equal to two times the original denominator. Therefore, one half times one third equals one sixth, and similarly, one half times one sixth equals one twelfth. We finished working with the children on multiplication with fractions with a numerator greater than one. We tried to use simple justifications to build schematic relationships. Let's examine two multiplication cases, 15 × 3/5 (a very similar fraction multiplication exercise to those analysed by Empson and Levy, 2011, p. 85) and 1/4 × 3/5. Children, in the first instance, only need to understand the equivalence of these mathematical expressions to 3/5 × 15 and 3/5 × 1/4 respectively (commutative property of multiplication). Students can immediately construct, correspondingly, these multiplications as three times one fifth times fifteen and three times one fifth times one fourth, which correspond, in the same order, to the mathematical representations 3 × (1/5 × 15) and 3 × (1/5 × 1/4). The next step is to calculate one fifth times fifteen (3) or one fifth times one fourth (1/20) (associative property).
As a final point, children easily calculate three times those partial products, and this reinforces the procedural knowledge needed to calculate the product of a whole number (3) times a unitary fraction (1/5) [tres veces un quinto son tres quintos]. This kind of scholastic task helps children to consolidate and apply a general relational scheme of multiplication and two basic properties of this number operation. Finally, we briefly introduce fraction division, basically applying the missing-factor approach in which the children's task is to find the number that, multiplied by the divisor, equals the dividend (Musser, et al., 2008). Succinctly, children calculate how many times the divisor is contained in the dividend. In the first instance, fraction division problems were designed with the aim of defining multiplication for any integer number and extending the system of numbers. Consequently, children conceptualise division within only one general scheme for all rational numbers. To understand the need for rational numbers, children calculated the numbers that multiplied by three equal nine or twelve, which are three and four respectively. But which numbers multiplied by three equal ten and eleven? Children must use rational numbers to answer this question, and the resulting sequence would be like this: 3 × 3 = 9, 3 × 10/3 = 10, 3 × 11/3 = 11, 3 × 4 = 12. Children can also draw a picture or diagram representing every multiplication; for example, three times ten thirds equals ten can be represented pictorially. Therefore, the product sequence (9, 10, 11, 12) is completed if we introduce the numbers 10/3 and 11/3. To avoid some previously mentioned misconceptions about fraction division, students calculated quotients resulting from the division of an integer by a fraction number. For instance, to divide seven by one half children must find a number that multiplied by one half equals seven, or a number whose half equals seven. The result derived from the fraction multiplication schemes used before is 14 × 1/2 or 1/2 × 14. It is important for children to analyse and to understand that the quotient resulting from dividing 7 ÷ 1/2 is not really greater than seven. The quotient is not an abstract and absolute 14. The meaning of this quotient is that fourteen times one half equals seven, i.e. fourteen halves are contained in seven wholes, and it does not mean that fourteen wholes or units are contained somehow in seven wholes or units. This procedure could also be approached as a division process that solves the question of how many halves are contained in seven units. Pre- and post-assessments The only way to find out how children learn is to study them closely while they are learning (Chen & Siegler, 2000). If we examine thinking before and after changes occur, we can distinguish those children that move between different levels or stages from those who do not move to an advanced one. Participants solved 5 items on fraction multiplication problems; three of them were verbally represented (e.g., how much is one-third of 18? ¿Cuánto es un tercio de 18?) while the rest were presented in a standard mathematical form (e.g., what is 18 × 1/3? ¿Cuánto es 18 × 1/3?). The assessment also included 5 items on division problems; this part once again contained three word items (e.g., which number multiplied by one half equals seven? ¿qué número multiplicado por un medio nos da siete?) and two numerical items (e.g., find 7 ÷ 1/2; ¿Cuánto es 7 ÷ 1/2?).
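Purely as an illustration (not part of the study materials), the worked examples from the learning sessions and the expected answers to the numerical items above can be checked mechanically with Python's standard fractions module; the decomposition below mirrors the commutative and associative reading used in the sessions.

```python
from fractions import Fraction as F

# Products from the learning sessions: n times a unitary fraction.
assert 2 * F(1, 3) == F(2, 3)              # two times one third is two thirds
assert 60 * F(1, 3) == F(1, 3) * 60 == 20  # sixty thirds, i.e. one third of sixty
assert F(1, 2) * F(1, 3) == F(1, 6)        # half of one third

# 15 x 3/5 read as 3 x (1/5 x 15): commutative and associative properties.
assert 15 * F(3, 5) == 3 * (F(1, 5) * 15) == 9
assert F(1, 4) * F(3, 5) == 3 * (F(1, 5) * F(1, 4)) == F(3, 20)

# Completing the product sequence 9, 10, 11, 12 with the factors 10/3 and 11/3.
assert [3 * q for q in (3, F(10, 3), F(11, 3), 4)] == [9, 10, 11, 12]

# Missing-factor division: the number that multiplied by one half equals seven.
assert 7 / F(1, 2) == 14 and 14 * F(1, 2) == 7

# Numerical assessment items.
assert 18 * F(1, 3) == 6
```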
These fraction problems were administered before and after treatment, to both control and learning groups. Data analysis The logistic function describes the relationship between the probability of correctly answering an item and the corresponding examinee's specified ability. The item response curve depicts the probability of success on an item as a function of a person's specified ability parameter. We employ here the two-parameter logistic model based on the following function: P(θ) = 1 / (1 + e^(−a(θ − b))), where θ is an ability parameter, b stands for the item difficulty (the ability level required for an individual to have a probability of 1/2 of responding correctly to an item), and a is the item discrimination according to item response theory (Fan & Sun, 2013; Baker, 2001). Item discrimination is an important index for assessing the quality of an item for differentiating students by ability levels on the basis of the probability of a successful response to an item (Wu, 2013; Baker, 2001). The unit on the ability parameter scale is known as the logit (abbreviation for "log of odds unit", or logarithm of the odds). While the theoretical range of ability is from negative infinity to positive infinity, practical considerations usually limit the range of values from −3 to +3 (Baker, 2001). Higher values of the logit represent a higher level of the attribute related to the correct answer probability (DeVellis, 2017). If the item response curve reaches the point P(θ) = .50 within the defined range (−3 to +3), the corresponding logit value is the parameter defined as the item difficulty according to item response theory. Three of the authors of this paper developed an application to be used in the common Microsoft Excel® program (a copy of this macro can be obtained freely by request from the authors). As a result, we get an item response curve that constitutes the best fit of children's item responses to a logistic function, calculated using a genetic algorithm. From the perspective of item response theory, students who obtained the correct answer are of higher average ability than students who obtained incorrect answers. However, an item response curve could also be interpreted as showing that students of higher average ability have a better chance of being successful on an item than students of lower average ability. We assumed that item difficulty is not fixed but changes as learning develops. Abilities are not fixed, and successful item response probability, at least for academic assessments, varies as a function of cognitive development and learning. There are two extreme cases for which the IRT ability estimation procedure fails: first, when children do not answer correctly any of the items, and second, when students answer the test items without any mistakes. In both cases, it is impossible to obtain an ability estimate for the examinee (Baker, 2001). The item response curve is either very low or very high and essentially flat. If there are no answer differences between students, calculation of an ability parameter is not possible. Therefore the item contributes very little to our knowledge about children's ability, as it does not differentiate between students with lower versus higher ability (Fan & Sun, 2013). It is not uncommon to find those situations at elementary school, either at the beginning of a learning process or at the final stage of this process.
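A minimal sketch of how such an item response curve can be fitted is given below. It is illustrative only: the original analysis used a Microsoft Excel macro with a genetic algorithm, whereas this sketch fits the discrimination a and the difficulty b by maximum likelihood with scipy; the variable names and the toy data are ours.

```python
import numpy as np
from scipy.optimize import minimize

def p_correct(theta, a, b):
    """Two-parameter logistic model: P(theta) = 1 / (1 + exp(-a*(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fit_2pl(theta, answered_correctly):
    """Fit discrimination a and difficulty b for one item by maximising the
    Bernoulli log-likelihood of the observed 0/1 responses."""
    theta = np.asarray(theta, dtype=float)
    y = np.asarray(answered_correctly, dtype=float)

    def neg_log_lik(params):
        a, b = params
        p = np.clip(p_correct(theta, a, b), 1e-9, 1 - 1e-9)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    result = minimize(neg_log_lik, x0=[1.0, 0.0], method="Nelder-Mead")
    return result.x  # (a, b)

# Toy example: ability values rescaled to the logit-like range (-3, +3),
# with higher-ability students more likely to answer the item correctly.
theta = np.array([-3, -2, -1, 0, 0, 1, 1, 2, 2, 3], dtype=float)
responses = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])
a_hat, b_hat = fit_2pl(theta, responses)

# The fitted curve gives, for any ability level, the probability of a correct
# answer and the corresponding odds P / (1 - P).
p = p_correct(0.5, a_hat, b_hat)
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}, P(0.5) = {p:.2f}, odds = {p / (1 - p):.2f}")
```

Any non-categorical ability proxy, such as the rescaled average mark used in this study, can be passed in as theta.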
In the case of learning fractions, children show a tendency to answer incorrectly most numerically represented items of a diagnostic test before instruction on this topic begins. As the course progresses, children's responses to fraction mathematics diverge. We analyse here an ability parameter that teachers could bring into play that is related to fraction knowledge progress: children's average mark or grade. In this way, children's item responses, along with children's average mark as an ability parameter, are the basis for analysing the development of those responses with a logistic function. An item response curve is a useful aid to visualise, item by item, children's progress as a function of their average mark, and teachers can compare, per item, correct answer probabilities P(θ) and odds ratios. Results Commonly, teachers use overall group test scores to analyse academic improvements. The simplest way to do this is to apply a Student's two-tailed paired t-test (Pardo, Ruiz, & San Martín, 2009). Table II presents the results of a dependent samples t-test to compare general pre-test/post-test scores of learning groups (teachers do not usually compare their results with control groups) on multiplication items (word and numerical forms), division items (word and numerical forms), and total fraction items (word, numerical, and final total). There are statistically significant differences for all general scores, except for word multiplication items. Fifth-grade children can correctly solve word problems that involve fraction multiplication calculations, although they do not perform any multiplication: they calculate one-third of 18 by finding the third part of 18. On the other hand, they are not able, in general, to perform any operation to calculate a quotient for a fractional division problem at the time of the pre-test. Word multiplication items We present here an alternative form of item examination to analyse, item by item, improvements as a function of previous academic marks. The item response curve constitutes a very useful visual device to appreciate changes in correct response probability related to academic mark levels. In the first place, it is important to assess children's knowledge of verbal expressions related to fraction multiplication. Fifth-grade children have informal fraction knowledge, they already understand a fraction as a part of a collection, and they are able to answer a problem expressed in words (how much is one-third of 18, or what is a third of 18; ¿cuánto es un tercio de 18?). Figure 2 shows the item response curves for the word item: how much is one-third of 18? Students show a relatively good performance in solving this word fractional problem. There is no noticeable change in either control or learning groups. Figure 2. Item response curves depicting the probability of success on the item how much is one-third of 18? as a function of children's average academic mark, taken as the ability parameter. The horizontal axis is the ability level: from left to right, the ability level goes from lower (−3) to higher (+3) levels. The unit on this scale is known as the logit (abbreviation for "log of odds unit"). Control groups' graphs (above) and learning groups' curves (below) show no distinctive differences between pre-test (left) and post-test (right) answers. We performed a one-way repeated-measures ANOVA on these data using Tukey's HSD (Honestly Significant Difference) test for post hoc comparisons (Pardo & San Martín, 2010).
This analysis compares the item responses by average mark with the aim of validating the item response curve result. As expected, post hoc comparisons of the item responses of the learning groups show that there is no significant difference for any average mark level (see Table III). Numerical multiplication items The analysis of responses to problems expressed in numerical terms gives a different result. Most of the children give an incorrect answer at pre-test when the problem is represented numerically. Figure 3 presents the item response curves for the numerical item: What is 18 × 1/3? Here, the learning groups reveal an improvement in correct response probability, especially for students with a higher average mark. Figure 3. Item response curves depicting the probability of success on the item What is 18 × 1/3? as a function of children's average academic mark, taken as the ability parameter. Control groups' graphs (above) show no distinctive differences between pre-test (left) and post-test (right) answers, whereas learning groups' curves (below) show marked post-test (right) improvements in the probability of a correct answer, in particular for those children with the higher average academic marks. In a similar way, we apply a Tukey's HSD test to compare the item responses by average mark. In this case, post hoc comparisons of the item responses revealed that students with an average mark greater than or equal to 8.0 showed a significant improvement on item response (p < .05). Groups with an average mark less than or equal to 7.5 showed no significant difference on item response for any average mark level (see Table IV). The item response curve indicates that children with these lower average marks are less likely to answer the item What is 18 × 1/3? successfully. This means that for these children this question remains a difficult item. Teachers can also compare correct answer probability and odds ratio by average mark. Table V presents those data for control and learning groups. We performed a covariate analysis (ANCOVA) to statistically evaluate the effect of the average mark on item response. ANCOVA represents a recommended statistical analysis in our case and it is a combined application of ANOVA and regression analysis (Kline, 2009; Pardo & Ruiz, 2012). Here we take the average mark as a covariate, i.e. a variable that predicts the outcome but is ideally unrelated to the independent variable (Kline, 2013). The obtained results indicate (once the effect of the average mark was controlled): 1) a significant improvement on item response to the items What is 18 × 1/3? (F = 31.52, p = .000) and What is 1/3 × 18? (F = 34.42, p = .000); 2) there is a significant difference between pre-test and post-test answers to both items (F = 38.33, p = .000 and F = 36.69, p = .000 respectively); 3) there is a significant difference between the learning condition groups and the control groups in post-test correct responses to those items (F = 59.89, p = .000 and F = 59.02, p = .000 respectively); and 4) the average mark as covariate is related to differences in pre-test and post-test answers to both items (F = 6.92, p = .010 and F = 8.46, p = .005 respectively). Figure 4. Item response curves depicting the probability of success on the item Which is the number that multiplied by one half equals seven? as a function of children's average academic mark, taken as the ability parameter.
Control groups' graphs (above) show no distinctive differences between pre-test (left) and post-test (right) answers, whereas learning groups' curves (below) show marked post-test (right) improvements in the probability of a correct answer, in particular for those children with the higher average academic marks. Numerical division items On the other hand, solving fraction division items, even when they are represented verbally, involves concepts that are difficult for many children. Figure 4 displays the item response curves for control and learning groups corresponding to the item: Which is the number that multiplied by one half equals seven? The increase in correct response probability is lower than in the multiplication item reviewed before. If teachers perform a Student's two-tailed paired t-test to analyse the differences observed in this item, they can observe that the learning group, in general, showed better results at the post-test assessment: t(54) = −6.465, p = .000. But the item response curve indicates that only children with greater average marks have a better post-test performance. Teachers can corroborate this with a Student's two-tailed paired t-test applied to each average mark. This method produces the calculations displayed in Table VI, and they confirm that only children with average marks greater than or equal to 9.0 showed a significant improvement on item response (p < .01). Conclusions We present a microgenetic study that focuses on specific proximal influences on cognitive change (Siegler & Chen, 1998). The learning instruction period was brief, three and a half weeks; therefore, we can assume that the most important social factors affecting learning remained unchanged for learning and control groups, except for our instruction sessions with the learning groups. As mentioned above, in this paper we present data about a very important issue related to opposing approaches to the introduction of multiplication or division of fractions: one point of view contends that fractions and decimals need to be treated differently from whole numbers, while a second approach, which we adopt, is based on the construction of general relational schemes for any basic mathematical operation. We propose here that fraction multiplication and division must be developed as relational schemes and, basically, as a conceptual generalisation of these operations with natural numbers. We have designed activities in order to develop a general relational scheme of the multiplication and division of numbers. We do not agree that children must understand that fraction multiplication and division produce different outcomes, depending on whether the numbers involved are greater than or less than 1. We could promote the construction of two different sets of numbers if we teach children that understanding fractions requires recognizing that many properties of natural numbers are not properties of numbers in general (Lortie-Forgues, Tian, & Siegler, 2015; Siegler, Thompson, & Schneider, 2011), and this would produce the need for a different procedural scheme to multiply or divide fraction numbers. On the other hand, we avoid mechanistic procedures (a/b ÷ c/d = ad/bc; the "invert-the-divisor-and-multiply" procedure) because, using them, children can develop a definition of a fraction as if it were composed of two whole numbers (numerator and denominator) that must be conceptualised separately in multiplication and division, obscuring the fraction concept as a unity.
Our approach probably allows elementary school children to construct a system of numbers such that multiplication and division, products and quotients, are defined for every number comprised in the system. At the elementary school level, that system corresponds to the rational numbers, ℚ. Within that system, every number can be expressed as the product or quotient of, at least, two other numbers. That is, every two numbers of the system can be related, according to the definitions of multiplication and division, to a number termed the product or quotient. For example, 15 can be represented as the product of one half times thirty, or as the quotient of five divided by one third, i.e. the number that, multiplied by one third, equals five. The authors attempt a first approximation analysis and select students' average mark as the parameter that could be related to the probability of a right answer to an item. We describe here an analysis procedure that permits a visualisation of the learning process as a function of average marks, and we present data that support the validity of this approach. According to the social constructionism approach (Gergen, 2001), we assume that all learning is an active process of social construction. Average marks and item difficulty can be modified by social interaction processes. Children can improve or worsen their academic marks as a result of different social factors. The probability of success in answering an item depends on the average mark of the student. Average mark is basically a socially defined index that represents academic performance, and this index is only one element of the universal set of social indexes designed to assess and analyse learning processes. The item difficulty depends on the learning process. Children pick up a good deal of expertise in the learning process and, consequently, the item difficulty diminishes substantially. The logistic function depicts the probability of success on an item as a function of a person's specified parameter, i.e. it is possible to analyse learning progress in relation to any variable that can be evaluated with non-categorical scales. Consequently, it is necessary to research further significant relationships among other relevant social factors and the probability of success on an item. Here a tool is offered to analyse the relationships between some of these variables and the methods of assessment that teachers apply in their courses. In this study, we examined whether elementary school children can construct a system of numbers such that fraction multiplication and division are based on the construction of general relational schemes. Learning groups increased their performance following this kind of programme. There are statistically significant differences for all general scores, except for word multiplication items. Finally, a Bayesian sequential analysis indicated that the results are statistically robust.
Feline coronavirus with and without spike gene mutations detected by real-time RT-PCRs in cats with feline infectious peritonitis Objectives Feline infectious peritonitis (FIP) emerges when feline coronaviruses (FCoVs) mutate within their host to a highly virulent biotype and the immune response is not able to control the infection. FCoV spike (S) gene mutations are considered to contribute to the change in virulence by enabling FCoV infection of and replication in macrophages. This study investigated the presence of FCoV with and without S gene mutations in cats with FIP using two different real-time RT-PCRs on different samples obtained under clinical conditions. Methods Fine-needle aspirates (FNAs) and incisional biopsies (IBs) of popliteal and mesenteric lymph nodes, liver, spleen, omentum and kidneys (each n = 20), EDTA blood (n = 13), buffy coat smears (n = 13), serum (n = 11), effusion (n = 14), cerebrospinal fluid (n = 16), aqueous humour (n = 20) and peritoneal lavage (n = 6) were obtained from 20 cats with FIP diagnosed by immunohistochemistry. Samples were examined by RT-PCR targeting the FCoV 7b gene, detecting all FCoV, and S gene mutation RT-PCR targeting mutations in nucleotides 23531 and 23537. The prevalence of FCoV detected in each sample type was calculated. Results In 20/20 cats, FCoV with S gene mutations was present in at least one sample, but there was variation in which sample was positive. FCoV with mutations in the S gene was most frequently found in effusion (64%, 95% confidence interval [CI] 39–89), followed by spleen, omentum and kidney IBs (50%, 95% CI 28–72), mesenteric lymph node IBs and FNAs (45%, 95% CI 23–67), and FNAs of spleen and liver and liver IBs (40%, 95% CI 19–62). Conclusions and relevance In these 20 cats with FIP, FCoVs with S gene mutations were found in every cat in at least one tissue or fluid sample. This highlights the association between mutated S gene and systemic FCoV spread. Examining a combination of different samples increased the probability of finding FCoV with the mutated S gene. Introduction Feline infectious peritonitis (FIP) is one of the most important infectious diseases in cats, but its pathophysiology is still not fully understood. According to the internal mutation theory, FIP emerges when feline coronaviruses (FCoVs) mutate within their host to a highly virulent biotype 1,2 and the host's immune system is not able to control the infection. 3,4 The exact nature of mutations that are responsible for the development of FIP is not known yet. A combination of different mutations on different genes is likely as mutations that have been identified to date do not qualify as sole causes for FIP. [5][6][7][8] This results in FCoV strains with different genome sequences in each cat with FIP, 6,9,10 highlighting that there are multiple pseudo-strains of FCoV within an individual cat and that a single consistent mutation responsible for all cases of FIP does not exist. Following mutation, increased virulence of FCoV is the result of a change in viral cell tropism from enterocytes to macrophages and efficient replication within these cells. 11,12 As the FCoV spike (S) protein plays a key role in viral cell entry, 13 studies have investigated the mutations in the S gene as possible contributing reasons for the change in virulence. [14][15][16] One study identified mutations in close proximity in the S gene's nucleotides 23531 and 23537, causing two different amino acid substitutions in the S protein. 
5 In contrast to other S gene mutations, 14 mutations in nucleotides 23531 and 23537 were identified in 96% of FCoVs isolated from cats with FIP in that study. These mutations were not identified in faecal samples of clinically healthy control cats in that study; however, no organ samples from these control cats were analysed. 5 Immunological staining of viral antigen within tissue lesions is considered the reference standard for diagnosing FIP, [17][18][19] but it requires invasive sampling. Molecular methods, such as real-time RT-PCR, have evolved in the past years. RT-PCR detecting FCoV is only partially useful, [20][21][22] as viral RNA also circulates within asymptomatic FCoV-infected cats not suffering from FIP. 20,23,24 Detection of the abovementioned FCoV S gene mutations 5 might help in the diagnosis of FIP as studies examining detection of these S gene mutations via RT-PCR and/or pyrosequencing confirmed that these mutations are present in the majority of cats with FIP. [25][26][27] However, the same mutations were also detected in cats without FIP. 28,29 Therefore, the presence or detection of FCoV with S gene mutations in samples does not automatically equate to the presence of FIP. Sensitivity and specificity of diagnosing FIP by detecting these mutations in specific fluids (eg, serum or effusion) and tissue samples have already been investigated, [25][26][27][28][29] but only a few studies compared different sample types. The present study investigated 20 cats with FIP confirmed by tissue immunohistochemistry (IHC). The study aimed to evaluate the presence of FCoV with and without S gene mutations in a variety of different tissue and fluid samples that can be obtained under clinical conditions. Methods used were two different RT-PCRs using primers to detect all FCoV (7b gene RT-PCR) and primers detecting S gene mutations in nucleotides 23531 and 23537 (S gene mutation RT-PCR). Cats Twenty cats were prospectively included (Table 1). All cats were presented for suspected FIP from 2015 to 2017 and were euthanased owing to poor general condition. FIP was confirmed by histopathology and immunostaining of FCoV antigen in tissue macrophages in all 20 cats. Only cats with positive IHC were included. IHC was performed using clone FIPV3-70 antibody (Linaris Medizinische Produkte GmbH) on formalin-fixed, paraffin-embedded tissue sections. 30 For signal detection, the streptavidin-biotin complex method was implemented (VECTASTAIN ABC Kit; Vector Laboratories). Negative controls were included in which the antibody was substituted by phosphate buffered saline (PBS). Samples were considered as positive if typical histological lesions were present (eg, granulomatous vasculitis or granulomatous inflammation in tissues) and FCoV antigen was detected in macrophages in those lesions. Tissues with positive IHC results are listed in Table 1. Blood samples (EDTA blood, buffy coat smear, serum) were obtained ante mortem for diagnostic purposes in all cats. Effusion was obtained ante mortem for diagnostic and therapeutic purposes. Cerebrospinal fluid (CSF) and aqueous humour were obtained by paracentesis directly after euthanasia. Peritoneal lavage was performed post mortem with 20 ml/kg sodium chloride solution (0.9%) in cats that did not have effusion. Fine-needle aspirates (FNAs) and incisional biopsies (IBs) of all organs were obtained post mortem during necropsy, independently of the presence of lesions. IBs were stored in Eppendorf tubes with sodium chloride solution (0.9%). 
FNAs were layered on slides without staining. All samples were stored at 4°C until shipping. Refrigeration has no impact on RNA degradation but was performed for logistic reasons. Shipping was performed without refrigeration. Time between sampling and examination never exceeded 72 h. RT-PCRs RT-PCRs were performed at a commercial laboratory (IDEXX Laboratories, Ludwigsburg, Germany). RT-PCRs were performed with six quality controls. Extraction of total nucleic acid (TNA) was performed using QIAamp DNA Blood BioRobot MDx Kit on an automated Qiagen platform, according to the manufacturer's instructions. TNA was extracted from 200 µl of any kind of liquid diagnostic sample. EDTA blood and serum were applied without prior treatment following the extraction protocol of the manufacturer. Effusion, CSF, aqueous humour and peritoneal lavage samples were centrifuged and the sediment resuspended in 200 µl of remaining sample fluid introduced into the extraction procedure. Clinical material on slides was dissolved with 200 µl of PBS and the obtained suspension was used for TNA extraction. In the case of tissue samples, 20 mg was pretreated with Proteinase K according to the manufacturer's protocol. Firstly, the 7b gene RT-PCR targeting FCoV 7b gene was performed to quantify the viral load. 31 Secondly, the two RT-PCRs were performed targeting the M1058L and S1060A single nucleotide polymorphisms (SNPs) within the fusion peptide of the S protein (IDEXX Laboratories, unpublished data). The S gene mutation RT-PCRs allow the typing of an FCoV strain based on the presence or absence of one of two SNPs within the fusion peptide of the S gene. The paired S gene mutation RT-PCRs were previously validated analytically using synthetic DNA positive controls (IDT DNA), as well as clinically using samples collected from cats originally used to identify the two S gene mutations: (1) FCoVinfected and shedding, but otherwise healthy; and (2) affected by FIP. 32 Additional studies have evaluated RT-PCR detection of FCoV mutations in paraffin-embedded tissues and effusion from cats with confirmed FIP. 27,33 Briefly, highly specific hydrolysis probes were used, detecting either the mutation at position 3174 (A → C/T) or 3180 (T → G) on the FCoV genome, corresponding to amino acid positions 1058 and 1060, nucleotide 23531 and 23537, and M1058L and S1060A of reference sequence FJ938051, respectively, or non-mutated sequences by using an allelic discrimination approach (IDEXX Laboratories, unpublished data). Probes for mutated and non-mutated S gene sequences were fluorophore-labelled (6-FAM and VIC, respectively). Results were analysed detecting the 6-FAM:VIC (mutated:non-mutated) fluorescence ratio emitted by the hydrolysis probes. S gene mutation RT-PCR was considered positive for either mutation when fluorescence in the mutation probe was at least two-fold higher than in the non-mutated probe. S gene mutation RT-PCR was classified as negative if: (1) no FCoV was detected; (2) FCoV without one of the two S gene mutations was detected; (3) FCoV load was below the cut-off of 1.5 million RNA equivalents per ml, which did not allow a successful differentiation of the FCoV strains via S gene mutation RT-PCR; or (4) no further differentiation via S gene mutation RT-PCR was possible despite a high FCoV load (above 1.5 million RNA equivalents per ml of sample). 
S gene mutation RT-PCR was considered positive if: (1) FCoV with a mutated S gene (either mutation in nucleotide 23531 or 23537) was detected; or (2) both mutated and non-mutated S genes were detected in the same sample. Data analysis The prevalence of positive results for 7b gene RT-PCR and S gene mutation RT-PCRs in different tissues and body fluids was calculated by dividing the number of positive results by the total number of examined samples of that specific tissue or fluid. Ninety-five per cent confidence intervals (CIs) were calculated. Results FCoV with a mutated S gene was detected in all 20 cats in at least one tissue or fluid. The type of samples with a positive S gene mutation RT-PCR result differed from cat to cat (Tables 2 and 3). The prevalence of FCoV with and without a mutated S gene detected by RT-PCR in each tissue and fluid is listed in Table 4. S gene mutation RT-PCR was less commonly positive than 7b gene RT-PCR. S gene mutation RT-PCR was most commonly positive in effusion (64.3%). Serum samples and buffy coat smears showed no positive results for S gene mutation RT-PCR in any cats. The percentages of positive results of both RT-PCRs were similar or even identical for FNAs and IBs in intra-abdominal organs. All samples positive in S gene mutation RT-PCRs had the mutation in nucleotide 23531; in none of the examined samples was a mutation in nucleotide 23537 present. The probability of finding FCoV with S gene mutations in an individual cat increased when specific samples were combined for analysis. Combining different organ IBs (mesenteric lymph nodes, liver, spleen, omentum, kidneys), which can be collected in a patient during laparotomy, increased the probability of finding FCoV with a mutated S gene to up to 80.0%. When only samples obtained by minimally invasive techniques (EDTA blood, effusion if present, fine-needle aspiration of mesenteric lymph nodes, liver, spleen) were considered, the probability of finding FCoV with a mutated S gene increased to up to 70.0% in a patient with effusion and to up to 60.0% in a patient without effusion. In four cats, a high FCoV load was detected by 7b gene RT-PCR in up to seven different sample types, but no further differentiation was possible by S gene mutation RT-PCR; therefore, these samples were considered as negative for S gene mutations. Discussion This study investigated the presence of FCoV with and without S gene mutations in different tissue and body fluid samples from cats with IHC-confirmed FIP via real-time RT-PCR. The study was able to confirm the results of previous studies, in which FCoV with a mutated S gene was detected in effusion but not in serum or plasma from cats with FIP. [25][26][27] The prevalence of FCoV with S gene mutations detected by RT-PCR was 64.3% in effusion, which is similar to the results of other studies (68.6% and 65.3%, respectively), 25,27 while in one study, the prevalence was even higher (85.0%). 26 Other fluids examined (EDTA blood, peritoneal lavage, buffy coat smears, CSF, aqueous humour) showed only low-to-moderate numbers of positive RT-PCR results for FCoV with and without S gene mutations. Earlier studies obtained similar results. 27,34,35 As only 3/20 patients of this study's population suffered from ocular or neurological symptoms, a higher prevalence of FCoV with and without S gene mutations might be expected in CSF or aqueous humour of patients with corresponding signs.
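The interpretation rules and the prevalence calculation described above can be summarised in a small decision function. The sketch below is a minimal illustration only: the function and variable names are hypothetical, the handling of mixed (mutated plus non-mutated) detections is simplified, and a Wilson score interval is assumed for the 95% CI because the CI method is not stated in the text.

```python
# Illustrative re-implementation of the interpretation rules and the prevalence
# calculation described above. Names and the mixed-infection handling are
# simplifications; this is not the laboratory's actual software.

from math import sqrt

LOAD_CUTOFF = 1.5e6   # RNA equivalents per ml below which strains cannot be typed
RATIO_CUTOFF = 2.0    # mutated (6-FAM) signal must be >= 2x non-mutated (VIC) signal


def classify_s_gene_rtpcr(fcov_load, fam_signal, vic_signal):
    """Classify one sample as 'positive' or 'negative' for an S gene mutation.

    fcov_load   -- FCoV RNA equivalents per ml from the 7b gene RT-PCR
    fam_signal  -- fluorescence of the mutation-specific (6-FAM) probe
    vic_signal  -- fluorescence of the non-mutated (VIC) probe
    """
    if fcov_load == 0:                   # (1) no FCoV detected at all
        return "negative"
    if fcov_load < LOAD_CUTOFF:          # (3) load too low for strain typing
        return "negative"
    if vic_signal > 0 and fam_signal / vic_signal >= RATIO_CUTOFF:
        return "positive"                # mutated (or predominantly mutated) FCoV
    if fam_signal == 0 and vic_signal > 0:
        return "negative"                # (2) only non-mutated FCoV detected
    return "negative"                    # (4) high load but no differentiation possible


def prevalence_with_ci(n_positive, n_total, z=1.96):
    """Prevalence with a 95% Wilson score interval (CI method assumed)."""
    p = n_positive / n_total
    denom = 1 + z**2 / n_total
    centre = (p + z**2 / (2 * n_total)) / denom
    half = z * sqrt(p * (1 - p) / n_total + z**2 / (4 * n_total**2)) / denom
    return p, centre - half, centre + half


# Example: 9 of 14 effusion samples positive in the S gene mutation RT-PCR (~64.3%).
print(prevalence_with_ci(9, 14))
```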
In a previous study examining CSF, the prevalence of all FCoV detected by RT-PCR increased from 42.1% in all cats to 85.7% when considering only cats with neurological or ocular signs. 36 In the present study, FCoV with a mutated S gene was detected in the CSF of both cats with neurological signs. The study was also able to confirm previous results regarding the prevalence of the two different S gene mutations investigated. In the present study, only S gene mutation in nucleotide 23531 (resulting in amino acid substitution M1058L) was identified; S gene mutation in nucleotide 23537 (resulting in amino acid substitution S1060A) was not identified in any of the examined samples. Already, when those specific S gene mutations were detected for the first time, amino acid substitution M1058L was more common (n = 108/118) than S1060A (n = 5/118) in all examined FCoVs. 5 Later studies confirmed these findings and only detected few [25][26][27] or no FCoV at all with S1060A. 33 As such, M1058L is the more common S protein substitution, which is also reflected by the results of the present study. The present study detected a higher number of samples with FCoV by 7b gene RT-PCR (detecting any FCoV) than by S gene mutation RT-PCR (detecting FCoV with mutated S gene) as only those positive in 7b gene RT-PCR were analysed by S gene mutation RT-PCR. For example, 7b gene RT-PCR was commonly positive in intraabdominal organs (mesenteric lymph nodes, liver, spleen, kidneys, omentum; prevalence of all FCoV 80-95%). This is in accordance with other studies, in which omentum, mesenteric lymph nodes and spleen were identified as the organs with highest viral loads. 37 In contrast, the percentage of samples positive in S gene mutation RT-PCR only ranged from 40% to 50% in intra-abdominal organs. One reason for this could be the presence of S gene mutations that remain undetected by RT-PCR because of a FCoV load below the cut-off for successful differentiation. This has already been observed in other studies using the same method. 27,33 Another reason could be the absence of the particular S gene mutations examined here and the presence of other mutations involved in FIP pathogenesis instead. 6,8,9,14,15,38,39 Some other mutations, such as in the 3c gene, have been discussed as playing a role in FIP Infection with serotype II FCoV could be another reason for a negative S gene mutation RT-PCR despite a high viral load, as S gene mutation RT-PCR is specific for serotype I only. Serotype II is not as common as serotype I in central European cats, 42 but studies showed that mono-infection with serotype II occurs in cats with FIP, as does a concurrent infection with both serotypes. 43,44 Multiple mutations in the S gene of serotype II FCoV that contribute to FIP development have previously been identified. 16 Furthermore, mutations or sequence variations occurring at the primer binding site could cause negative S gene mutation RT-PCR results. These reasons could explain the negative results in four cats (numbers 8,10,13,18) in which FCoV load was high in some samples, but FCoV with S gene mutations was not detected. Interestingly, although S gene mutation RT-PCR was negative despite a high virus load in one sample, FCoV with S gene mutation or mixed FCoV (both FCoVs with and without S gene mutations) were detected in at least one different tissue or fluid in all of the four cats. For example, cat 18 had a high FCoV load in multiple organ samples, but S gene mutation RT-PCR was negative in these samples. 
However, FCoV with mutated S gene was detected in EDTA blood. This cat had histological lesions typical for FIP and positive IHC in the majority of organs, which confirms that FIP was present. These findings emphasise that a concurrent infection with different FCoV strains (non-mutated and mutated) is obviously possible and that in terms of virus kinetics, the process of evolving FIP in a patient is not a stable state. The fact that non-mutated FCoV was detected in mesenteric lymph nodes and kidneys of cats 7 and 15 also highlights fluctuating virus kinetics. It is either possible that the non-mutated FCoV detected was circulating non-mutated FCoV that had already been present in these cats before FIP evolved or that a superinfection with non-mutated FCoV had occurred, which led to systemic spread of non-mutated FCoV as described previously. 24 Detection of mutated and non-mutated FCoV within one cat in the present study confirms that coexistence of varying FCoV strains is common within one animal. Those findings have to be considered when performing RT-PCR. A 'negative' result of the S gene mutation RT-PCR does not rule out that the cat has FIP. Furthermore, the present study investigated which sample types (IBs, FNAs) are appropriate for virus detection. Percentages of positive RT-PCR results were similar for FNAs and IBs in most intra-abdominal organs and identical in mesenteric lymph nodes and liver for S gene mutation RT-PCR and in spleen for 7b gene RT-PCR. This is an unexpected but important result, as obtaining an IB is highly invasive and usually cannot be performed without anaesthesia. An earlier study examined whether FNA and tissue biopsies taken with a needle core device of liver and kidneys would be equally useful for diagnosing FIP via immunostaining (IHC or immunocytochemistry) and, in contrast to the findings of the present study, reported that sensitivities of immunostaining in the minimally invasive FNA and tissue biopsies were not satisfactory (11-31%). 45 In the present study, the percentage of positive 7b gene RT-PCR results in both FNA and IB was similar to or sometimes even higher than the percentage of positive IHC in the respective organs (Table 4). This demonstrates the advantage of RT-PCR detecting small amounts of virus, 31 whereas immunostaining requires more material and intact cells. Of course, histopathology and IHC, which are performed in combination, have the advantage of giving indicators to the presence of other disease processes and not just presence or absence of FCoV. But when only minimally invasive sampling is possible and cytology is non-diagnostic, RT-PCR should be preferred over immunostaining to detect FCoV. Another advantage of FNA is the possibility of targeting various locations; for example, ultrasound-guided sampling of several lesions or regions within organs. This is beneficial, as virus distribution can be inhomogeneous within an organ. One limitation to this study was the fact that collection of some samples occurred post mortem. Samples collected ante mortem might have provided higher amounts of viable viral RNA. Furthermore, unclassified FCoV strains detected by 7b gene RT-PCR (eg, in cats with high viral loads but negative S gene mutation RT-PCR) were not further analysed by an RNA sequencing approach, so it is unknown whether and which other mutations might have been present. 
Next generation sequencing of the S2 region would be very valuable in the future, in order to obtain insights into other possible mutations involved in FIP pathogenesis. Conclusions FCoVs with mutated S genes were detected in all examined cats with FIP in at least one tissue or body fluid. Serum and buffy coat smears were the only sample types in which FCoV with mutated S gene was never detected. The prevalence of FCoV with a mutated S gene was highest in effusion. Non-mutated and mixed FCoV infections were detected in some cats, highlighting the possibility that several FCoV strains can be present within one host. Considering FCoV detection, 7b gene RT-PCR can be an alternative to IHC in tissues with histopathological changes consistent with FIP. In this study, it provided a higher number of positive results for FCoV than IHC. Furthermore, it can be used on samples obtained by minimally invasive techniques if tissue biopsies and thus IHC is not possible. Author note Part of the results were presented as an oral presentation at the 26th annual meeting 'Innere Medizin und Klinische Labordiagnostik' of the German Veterinary Society (Deutsche Veterinärmedizinische Gesellschaft) in Hanover, Germany, 2-3 February 2018.
MICRO-FINANCE PROGRAMMES: GROWTH, ISSUES AND CHALLENGES The notion of microfinance received increased momentum after the World Summit for Social Development, which was held in Copenhagen in 1995. The Copenhagen Summit emphasized the importance of improving access to credit for small producers, landless farmers, and rural women. But when we study credit for poverty alleviation, some questions need to be answered. Is the mechanism of micro-finance viable in India? Microfinance offers savings, credit and insurance services to the poor, especially rural women. However, it has been observed that in India women generally have limited or no control over their money, because the husband or a male family member makes all the important decisions in the family, including economic decisions. This paper focuses on the experience of micro-finance programmes in the Indian context. The paper portrays the limited access of Indian rural women to credit. The paper undertakes the study of the activities of various formal and informal finance institutions in India, such as Rural Banks, SEWA (Self Employed Women's Association), etc. The paper concludes with the belief that women's empowerment needs to be a counterpart of every governmental policy and empowerment cannot be assumed to be an outcome of one single programme. It must be incorporated and acknowledged in the planning procedure and economic policies. This process not only includes resources but also decision-making capacity, the power to negotiate and the ability to achieve the outcomes of this process (Kabeer 1999). This paper focuses on the experience of micro-finance programmes in the Indian context. This may be another serious problem because in that case men will become supporters and sympathizers of women's financial activities but in reality they are thinking about themselves (Mayoux, 2000). Access to finance also changes the division of working hours between men and women in the same household or in the same economic activity. Microfinance assists women but, on the other hand, it also increases the problem of working hours. Financial System in Rural India: It has been observed in India that the development and reform of the banking system remained isolated and was not made part of a broader socio-economic transformation in the countryside. In India, land and agriculture reforms were explicit and had no direct connection with the reform of the financial sector. Land remained inequitably distributed, and inequities in access to credit followed the inequities in land distribution. This was because borrowers needed land as collateral in order to secure access to credit (Swamy 1979). However, the Government has taken several initiatives to strengthen the institutional rural credit system, but in practice these never became visible. It is mandatory in India that commercial banks ensure that 40% of total credit is provided to the priority sectors, out of which 18% is in the form of direct finance to agriculture and 25% to priority Example of Grameen Bank In the case of the Grameen Bank of Bangladesh, the defining criteria for the microfinance projects are the size of loans and the targeted population, particularly women, from low-income households. These loans are generally offered without any collateral. This programme is based on innovative financial practices (Chavan and Ramkumar 2002). In this case, to obtain loans
(Papa et al, 2006). The Grameen Bank type system also contains some weak points and limitations. Grameen Bank has not oriented itself towards mobilizing rural people's resources. It has a repayment system of 50 equal weekly installments, which is not practical because rural women do not have a stable source of work and revenue. Pressure for high repayment may drive rural women to moneylenders, which may create another problem, as micro-finance is a time-consuming process (Tiwari and Fahad 2004). Example of Regional Rural Banks: To Organizations (NGO) and Regional Rural Banks. Micro-finance, by its very nature, cannot attempt to meet the full range of the demand for credit in the whole rural area, but whatever it does cover will be a bonus for the development process. NGO-linked micro-finance is, however, expected to incur more transaction costs and achieve a lower repayment record than the formal banking sector in respect of small-scale, short-term loans, but for the sake of social development, economic gains must be ignored to some extent. Regional Rural Banks can also play a very crucial role, and they would do well to work with local governments and self-help groups for the development of rural areas.
Microarray analysis reveals key genes and pathways in Tetralogy of Fallot The aim of the present study was to identify key genes that may be involved in the pathogenesis of Tetralogy of Fallot (TOF) using bioinformatics methods. The GSE26125 microarray dataset, which includes cardiovascular tissue samples derived from 16 children with TOF and five healthy age-matched control infants, was downloaded from the Gene Expression Omnibus database. Differential expression analysis was performed between TOF and control samples to identify differentially expressed genes (DEGs) using Student's t-test, and the R/limma package, with a log2 fold-change of >2 and a false discovery rate of <0.01 set as thresholds. The biological functions of DEGs were analyzed using the ToppGene database. The ReactomeFIViz application was used to construct functional interaction (FI) networks, and the genes in each module were subjected to pathway enrichment analysis. The iRegulon plugin was used to identify transcription factors predicted to regulate the DEGs in the FI network, and the gene-transcription factor pairs were then visualized using Cytoscape software. A total of 878 DEGs were identified, including 848 upregulated genes and 30 downregulated genes. The gene FI network contained seven function modules, which were all comprised of upregulated genes. Genes enriched in Module 1 were enriched in the following three neurological disorder-associated signaling pathways: Parkinson's disease, Alzheimer's disease and Huntington's disease. Genes in Modules 0, 3 and 5 were dominantly enriched in pathways associated with ribosomes and protein translation. The Xbox binding protein 1 transcription factor was demonstrated to be involved in the regulation of genes encoding the subunits of cytoplasmic and mitochondrial ribosomes, as well as genes involved in neurodegenerative disorders. Therefore, dysfunction of genes involved in signaling pathways associated with neurodegenerative disorders, ribosome function and protein translation may contribute to the pathogenesis of TOF. Introduction Tetralogy of Fallot (TOF) is a congenital heart disease, with an incidence rate estimated at 5-7/10,000 live births worldwide (1). TOF is characterized by ventricular septal defects, sub-pulmonary and pulmonary stenosis, an over-riding aorta and right ventricular hypertrophy (2). At present, the molecular mechanisms underlying the pathogenesis of TOF remain poorly understood. In past decades, the association between the pathogenesis of this disease and mutations in specific genes, including GATA binding protein 4 (3,4), GATA binding protein 6 (5), zinc finger protein, FOG family member 2 (4,6) and jagged 1 (7) have been reported; although the function of these genes remains controversial among different studies. In addition, a deletion in chromosome 22q11 (8,9) and copy number variations in various chromosomes, such as duplications 1q21.1 and micro deletions in 14q23 (10), have been implicated in the pathogenesis of TOF. Through the use of genome-wide gene expression microarrays, Bittel et al (11) revealed that the majority of abnormally expressed genes in cardiac tissue samples derived from patients with TOF were involved in compensatory functions, including hypertrophy, cardiac fibrosis and cardiac dilation, and the expression of genes involved in the WNT and Notch signaling pathways were suppressed. 
Using bioinformatics methods, the present study employed the microarray results submitted by Bittel et al (11) to further identify key genes that may be involved in the pathogenesis of TOF. Data and methods Microarray data. The GSE26125 expression profile dataset was downloaded from the Gene Expression Omnibus database (http://www.ncbi.nlm.nih.gov/geo/) (12). The mRNA used for array hybridization was extracted from the cardiovascular tissue samples of 16 children with TOF and five healthy age-matched control infants (11). The CodeLink Human Whole Genome Bioarray (Applied Microarrays, Inc., Tempe, AZ, USA), which contains >54,000 probes, was employed to analyze the samples (11). In addition, the GPL11329 CodeLink Human Whole Genome Bioarray (Applied Microarrays, Inc.) annotation platform was used. Data preprocessing and identification of differentially expressed genes (DEGs). The downloaded data were subjected to background correction, quantile normalization and probe summarization using the robust multi-array average method (13) with the R/Affy package version 3.2.2 in Bioconductor release 3.2 (14). Differential expression analysis of genes between the TOF and control groups was then performed using Student's t-test with the R/limma package version 3.2.2 in Bioconductor release 3.2 (15). Genes with a log2 fold-change (FC) of ≥2 and a false discovery rate (FDR) of <0.01 were considered to be DEGs. The samples were subsequently clustered based on the identified DEGs using the pvclust R package (16) by calculating the approximate unbiased P-values. Functional annotation and pathway enrichment analyses of DEGs. The biological function of the identified DEGs was determined using the following tools: ToppGene (https://toppgene.cchmc.org/) (17), which is a one-stop portal for functional enrichment analysis of gene lists based on the Gene Ontology (GO) database (18); the BioSystems database (19); the BIOCYC database (http://BioCyc.org/) (20); the Kyoto Encyclopedia of Genes and Genomes (KEGG) database (21); the REACTOME pathway database (22,23); WikiPathways September 2015 release (http://www.wikipathways.org/index.php/WikiPathways), GenMAPP version 2.1 (Gladstone Institutes, San Francisco, CA, USA), the MSigDB C2 database version 5.0 (http://software.broadinstitute.org/gsea/msigdb), which integrates BioCarta (Sigma-Aldrich and Signaling Gateway; http://www.biocarta.com/), the PANTHER database version 1.4 (http://pantherdb.org/downloads/index.jsp) (24), Pathway Ontology (http://www.obofoundry.org/ontology/pw.html) (25) and the Small Molecule Pathway Database (SMPDB) version 2.0 (http://www.smpdb.ca) (26,27). FDR values of <0.05 and a gene count of ≥2 were set as the threshold values. GO categories were classified into the following terms: Biological process, molecular function and cellular component. ReactomeFIViz (28) is a Cytoscape software version 3.2.0 (29) application that allows researchers to identify network and signaling pathway patterns of interest, search for gene signatures from gene expression data sets, reveal signaling pathways significantly enriched by a list of genes, as well as integrate multiple genomic data types into a pathway context using probabilistic graphical models. In the present study, an FI network was constructed by merging interactions extracted from curated human pathways, with interactions predicted using a machine learning approach. The correlations among genes involved in the same FI were calculated and then used as weights for edges in the FI network.
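The DEG selection step described above (per-gene tests between the 16 TOF and five control samples, then a log2 FC of ≥2 and an FDR of <0.01 as cut-offs) can be illustrated outside of R; the FI network construction continues below. The sketch is an approximation only: the original analysis used the limma moderated statistics in Bioconductor, whereas here ordinary Welch t-tests with Benjamini-Hochberg correction stand in for them, and the expression matrix and labels are placeholders rather than the GSE26125 data.

```python
# A minimal Python analogue of the DEG selection step described above. Ordinary
# per-gene Welch t-tests with Benjamini-Hochberg correction substitute for the
# limma moderated statistics, so results would differ in detail.

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests


def find_degs(expr, is_tof, lfc_cutoff=2.0, fdr_cutoff=0.01):
    """expr: genes x samples array of log2 expression values.
    is_tof: boolean array marking TOF samples (False = control)."""
    tof = expr[:, is_tof]
    ctrl = expr[:, ~is_tof]

    # per-gene Welch t-test between TOF and control samples
    _, pvals = stats.ttest_ind(tof, ctrl, axis=1, equal_var=False)

    # Benjamini-Hochberg false discovery rate
    _, fdr, _, _ = multipletests(pvals, method="fdr_bh")

    # log2 fold-change: difference of group means on the log2 scale
    log2_fc = tof.mean(axis=1) - ctrl.mean(axis=1)

    keep = (np.abs(log2_fc) >= lfc_cutoff) & (fdr < fdr_cutoff)
    return keep, log2_fc, fdr


# Toy example with random data standing in for the real probe-level matrix.
rng = np.random.default_rng(0)
expr = rng.normal(8, 1, size=(1000, 21))
is_tof = np.array([True] * 16 + [False] * 5)
keep, log2_fc, fdr = find_degs(expr, is_tof)
print(f"{keep.sum()} probes pass |log2FC| >= 2 and FDR < 0.01")
```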
Then, a Monte Carlo Localization graph clustering algorithm was applied to the weighted FI network to generate a sub-network for a list of selected network modules, based on module size and average correlation. Each parameter was set a default value during the analysis as follows: Size of MCL (Markov Cluster Alorithm) clustering result=7; inflation parameter for MCL=5.0; and average correlation=2.5. The gene FI networks were visualized using Cytoscape software (29). Pathway enrichment analysis of each function module was subsequently performed to identify the signaling pathways enriched by genes in each module, with FDR values of <0.05. Construction of gene-transcription factor regulation networks. The iRegulon plugin (30) in Cytoscape software version 3.2.0, that is associated with and contains information form the integrated databases TRANSFAC, JASPAR, ENCODE (https://genome.ucsc.edu/ ENCODE/), SwissRegulon (http://swissregulon.unibas.ch/sr/), and HOMER (http://homer. ucsd.edu/homer/motif/motifDatabase.html), was used to identify transcription factors predicted to regulate the DEGs in the FI network according to the following parameters: A minimum identity between orthologous genes equal to 0.05, and a maximum FDR value of motif similarity equal to 0.001. A larger normalized enrichment score (NES) indicates a higher reliability, and an NES of >3.5 was set as the threshold. The gene-transcription factor pairs were then visualized using Cytoscape software. Results Identification of DEGs. Using log 2 FC values of >2 and FDR values of <0.01 as thresholds, a total of 878 DEGs were identified, including 848 upregulated genes and 30 downregulated genes. Among the 21 samples, 20 were correctly divided into the TOF or control groups by clustering analysis based on the identified DEGs, with an accuracy rate of ~95% (Fig. 1). One sample (TOF_15) was not divided into the TOF group by clustering analysis. Nevertheless, this indicated a relatively good performance of the clustering analysis. GO functional annotation and pathway enrichment analyses of DEGs. According to the different databases, the upregulated genes were enriched in multiple signaling pathways, including oxidative phosphorylation, the electron transport chain, Huntington's disease, Parkinson's disease, metabolism, the citric acid cycle and respiratory electron transport (Fig. 2). By contrast, the downregulated genes were not significantly enriched in any signaling pathway; however, they were enriched for several GO terms, such as hemoglobin complex, activation of nuclear factor-κB-inducing kinase activity and oxygen transport (Fig. 2). Gene FI network and pathway enrichment analyses. The gene FI network contained 7 function modules, each consisting of upregulated genes (Fig. 3). Genes in Modules 0, 1, 2, 3, 5 and 6 were further enriched in ≥ 1 signaling pathways (Table I) RPS27A, SSR3 and RPS24. Genes in Modules 0, 3 and 5 were dominantly enriched in pathways associated with ribosomes and/or protein translation ( Table I). Discussion Previously, Bittel et al (11) (11) used the Ingenuity Pathway Analysis tool for ontological assessments, which is a curated database and analytical bioinformatics system for identifying interactions, functions and interconnections (networks) between biological molecules. In addition, the differential expression patterns of several genes involved in the WNT or Notch signaling pathways were validated using reverse transcription-quantitative polymerase chain reaction analysis. 
In the present study, as well as the functional annotation of individual genes, an FI network was constructed and pathway enrichment analysis for each function module was performed. This was used to identify transcription factors predicted to regulate genes in the FI network. Gu et al (31) searched for potential small-molecule drugs by mapping the identified DEGs to the Connectivity Map database. Bittel et al (11) demonstrated that the majority of the DEGs with abnormal expression were involved in compensatory functions, including hypertrophy, cardiac fibrosis and cardiac dilation, while the WNT and Notch signaling pathways, which are involved in spatial and temporal cell differentiation, appeared to be suppressed (11). In the present study, an FI network based on the identified DEGs was constructed and the biological functions of these genes were investigated using (33) observed a marked increase in the expression of small nucleolar RNAs (snoRNAs) in the right ventricular myocardium of 16 infants with nonsyndromic TOF, and demonstrated that the target nucleotides of the differentially expressed snoRNAs were primarily 28S and 18S ribosomal RNAs. These results, together those of the present study suggests that the differential expression of genes encoding ribosome subunits may be associated with the dysregulation of snoRNAs. In a previous study that investigated mutations in RPL5 and RPL11 genes in Czech patients with Diamond-Blackfan anemia (34), a mutation in RPL5 in a patient with TOF was reported. Therefore, this study may support the involvement of RPL genes in the pathogenesis of TOF. In the present study, multiple upregulated genes encoding the subunits of the NADH dehydrogenase complex, including NDUFB8, NDUFB5, NDUFS4 and NDUFS1, were enriched in three signaling pathways associated with Parkinson's disease, Alzheimer's disease and Huntington's disease. This indicates that TOF may share common mechanisms with neurodegenerative disorders. However, the association between NDUFB and NDUFS family genes and TOF has seldom been reported, except for NDUFB5, which has been confirmed to be expressed in mouse heart tissues (35). Despite the lack of direct evidence, previous studies have supported a connection between TOF and neurodegenerative disorders. For instance, Brown et al (36) reported a case of a male infant with infantile neurodegeneration and TOF, and Jinnou et al (37) reported a case of a male infant with pontocerebellar hypoplasia and TOF. These two cases suggest that the pathological mechanisms underlying TOF and neurodegenerative disorders may share common features. Among the eight transcription factors predicted by the iRegulon plugin in the present study, XBP1 expression was observed to be upregulated in patients in the TOF group. Notably, this gene was predicted to regulate genes encoding the subunits of cytoplasmic and mitochondrial ribosomes, as well as genes involved in neurodegenerative disorders. The XBP1 protein is characterized by its ability to bind the conserved transcriptional Xbox element, which is present in the promoter of the human leukocyte antigen DRα (38). XBP1 is known to be a marker of endoplasmic reticulum stress, a phenomenon that manifests with the accumulation of unfolded proteins in the endoplasmic reticulum (39), which frequently occurs during ischemia/reperfusion following myocardial ischemia (40). 
However, XBP1 is not a known regulator of the aforementioned genes; therefore, further studies that explore the role of XBP1 in the pathogenesis of TOF in more detail are required. In conclusion, the results of the present study suggest that the dysregulation of genes encoding the mitochondrial and cytoplasmic ribosomal subunits may contribute to the pathogenesis of TOF via signaling pathways associated with ribosomes and protein translation. In addition, genes encoding the NADH dehydrogenase complex may contribute to the pathogenesis of this disease via neurodegenerative disorder-associated signaling pathways. As the transcription factor XBP1 was predicted to be implicated in the regulation of genes involved in these signaling pathways, it may therefore be involved in the pathogenesis of TOF. However, this is yet to be validated in future studies. The results of the present study provide an in-depth insight into the molecular mechanisms underlying the pathogenesis of TOF.
Camera trapping reveals a diverse and unique high-elevation mammal community under threat Abstract The Cerros del Sira in Peru is known to hold a diverse composition of endemic birds, amphibians and plants as a result of its geographical isolation, yet its mammalian community remains poorly known. There is increasing awareness of the threats to high-elevation species but studying them is often hindered by rugged terrain. We present the first camera-trap study of the mammal community of the Cerros del Sira. We used 45 camera traps placed at regular elevational intervals over 800–1,920 m, detecting 34 medium-sized and large mammal species. Eight are listed as threatened on the IUCN Red List, three are categorized as Data Deficient and one is yet to be assessed. Although other authors have reported that the upper elevations of the Cerros del Sira are free from hunting, we found evidence of hunting activity occurring above 1,400 m, and inside the core protected area. In addition to this direct evidence of hunting, recent information has identified significant amounts of canopy loss in the northern reaches of the core zone. Despite widespread ecological degradation in the surrounding lowlands, the high-elevation areas of the Cerros del Sira still maintain a unique assemblage of lowland and highland tropical rainforest mammals. It has been assumed that the Cerros del Sira and other similar remote locations are safe from disturbance and protected by their isolation but we suggest this is an increasingly dangerous assumption to make, and secure protection strategies need to be developed. Introduction T he Cerros del Sira is an isolated mountain range in Peru, home to a diverse and unique flora and fauna. The summits rise from the left bank of the Ucayali River, with rugged terrain that extends over five elevational zones (-, m). Such isolation predisposes the Cerros del Sira to host a large number of endemic species but also means that these species and their habitats are sensitive to human-driven forest disturbance and climatic change (Forero-Medina et al., ). Although climatic changes are not of dramatic consequence for species residing in low-lying well-connected habitat, tropical species in isolated ranges, such as the Cerros del Sira, will have no suitable habitat to shift to, and could be outcompeted by low-elevation species moving to higher altitudes (Tewksbury et al., ). Marginalized and difficult-to-access areas with little attraction for agriculture have historically been passively protected from anthropogenic disturbance. These lands are remote, nutrient poor, and steep, and are ideal for governments to assign for protection (Nelson & Chomitz, ; Harris et al., ). However, the passive protection provided by such remoteness is increasingly being questioned (e.g. Poulsen et al., ). Evidence indicates that many remote protected regions are undergoing defaunation of medium-sized and large vertebrates (Fa et al., ; Galetti & Dirzo, ), and fragmented landscapes facilitate access for illegal activities in remote areas (Michalski & Peres, ). In the case of Sira, Novoa et al. () reported the loss of , ha of forest to agriculture and grazing inside the Sira Communal Reserve during -. Camera traps are well-suited for surveys in remote locations with poor local infrastructure and harsh terrain (e.g. Jiménez et al., ; Beirne et al., ), and we present the first camera-trap survey of the mammalian community of the Cerros del Sira. 
We used  terrestrial camera traps over  years along a previously unstudied elevational gradient, with the aim of laying a foundation for future research, monitoring and conservation planning efforts in one of the last intact forest landscapes in the central region of the Peruvian Amazon. Study area The Sira Communal Reserve is situated between the Ucayali River to the east and the Pachitea River to the west. It is the largest community reserve in Peru, forming part of the Oxapampa-Asháninka-Yánesha Biosphere Reserve, located in the departments of Pasco, Huánuco and Ucayali, with an area of , ha and a buffer area of ,, ha (INRENA, ). The core protected area encompasses the Cerros del Sira Mountains, and the surrounding buffer area comprises a local human population of ethnic groups of Ashaninka, Asheninka, Shipibo-Conibo and Yanesha, and rural communities of Andean migrants (Benavides et al., ). Our study transect is to the east of Puerto Inca, in the province of Huánuco (Fig. ), along a previously unstudied ridgeline on the north-western border of the Sira Communal Reserve. The transect covers both lower and upper montane forests, and elfin forest towards the highest elevations. Mean annual precipitation is c. , mm in the montane forest and up to , mm on the peaks (, m). The remoteness of the area has attracted illegal extractive industries, including coca cultivation (for cocaine production), gold mining, poaching and logging. The construction of roads since  has attracted the private sector, with timber and agricultural corporations replacing many of the local Methods Camera traps (Trophy Cam, Bushnell, Overland Park, USA) were deployed in mid to late March and removed at the beginning of September (dry season) in  and . Cameras were placed  cm above the ground, and all low vegetation within  m was cleared to standardize detection probabilities. Cameras were programmed to record a  s video, with intervals of  s between successive triggers (Meek et al., ; Beirne et al., ). Of the  cameras deployed in ,  were set to monitor medium-sized and large vertebrates (including the Critically Endangered Sira curassow Pauxi koepckeae; Beirne et al., ) and two were placed at the entry point to the ridgeline ( m elevation) and at our principal campsite ( m), to monitor hunting activity. Thirteen cameras were placed at elevational intervals of  m between  and , m, four were placed at intervals of  m between , and , m and two were placed at water sources (a clay lick at , m and a stream at , m). In ,  camera traps were placed at elevational intervals of  m between  and , m to monitor the vertebrate community, and two were placed at the same water source locations used in . As people were also detected by the wildlife cameras along the ridgeline in , camera traps were not set specifically to target hunting activity in . The camera-trap rate was calculated for every given species as the number of videos/ camera-trap days. Non-independent events, defined as videos of the same species at the same location within  minutes of a previous detection, were excluded from the calculation. Cameras were interfered with by both people and wildlife: jaguars Panthera onca moved three cameras (destroying one completely), spectacled bears Tremarctos ornatus moved four cameras, and people moved two cameras into less effective positions for surveying (i.e. 
facing dense bushes or the ground). The survey team recorded all incidental audio and visual signs of mediumsized and large mammals while the team was present on the study ridge (- March  and  March- April ). Results Overall, we detected  medium-sized and large mammal species, belonging to eight orders and  families (Plate ; The species accumulation curve from camera data for both  and  shows a clear plateau, indicating that the sampling effort was sufficient to characterize the community of medium-sized and large mammals (Fig. ). During the first , camera-trap days most of the mammal community was recorded (S obs = ), with only five more species added to the accumulation curve from an additional effort of . , camera-trap days. The distribution of species across the elevational bands indicates that diversity was highest at ,-, m, with the highest observed species richness (S obs = ) at , m. Only five species were detected at . , m: the oncilla Leopardus tigrinus, the spectacled bear, the long-tailed weasel Mustela frenata, the Andean white-eared opossum Didelphis pernigra and the pacarana Dinomys branickii. The pacarana was captured only once during the study (at , m) and the longtailed weasel only on two occasions, at , and , m ( Fig. ; Supplementary Tables  & ). Of the  species detected, one is categorized as Endangered on the IUCN Red List (the Peruvian woolly monkey Lagothrix cana), four as Vulnerable (the lowland tapir Tapirus terrestris, the spectacled bear, the oncilla and the giant anteater Myrmecophaga tridactyla), and three as Near Threatened (the short-eared dog Atelocynus microtis, the margay Leopardus wiedii and the jaguar). Three of the species detected are categorized as Data Deficient (the South American red brocket deer Mazama americana, the agouti Dasyprocta variegata and the Amazon dwarf squirrel Microsciurus flaviventer), and one is yet to be assessed (the burnished saki monkey). Nineteen of the species detected are listed in the CITES Appendices (CITES, ; Supplementary Table ). The two cameras placed specifically to monitor human presence within the study site in  detected direct hunting activity (i.e. people with guns or carrying dead wildlife) on seven occasions, and shotgun shells were found frequently on the forest floor. According to our guides, some of the hunters observed in the videos were cocaleros (people hired to harvest coca) and some were working on inventorying trees for a timber concession. One of the videos shows a hunter carrying a dead razorbilled curassow Mitu tuberosum. While camping at , m we heard two gunshots. Shortly afterwards, camera-trap footage confirmed the presence of hunters carrying two dead woolly monkeys. Discussion The community of medium-sized and large mammals of the Cerros del Sira is exceptionally diverse, with a unique assemblage of species comprising typical lowland Amazonian species as well as high-elevation species. Illegal hunting activity was detected at , m elevation and within the protected area of the Sira Communal Reserve, despite the suggestion that the montane terrain of the Reserve probably receives little attention from hunters (Mee & Ohlson, ). 
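The camera-trap rate defined in the Methods (independent videos per camera-trap day, with detections of the same species at the same camera within a fixed window of a previous detection excluded) can be computed as in the sketch below. This is an illustration under stated assumptions: the length of the independence window is not recoverable from the text and is therefore left as a parameter, and the record format and example values are hypothetical.

```python
# Illustrative computation of the camera-trap rate described in the Methods:
# independent detections per camera-trap day, discarding detections of the same
# species at the same camera that fall within a fixed window of the previous
# retained detection. The window length is an assumption, not taken from the text.

from collections import defaultdict
from datetime import datetime, timedelta


def trap_rates(records, camera_trap_days, window_minutes=30):
    """records: iterable of (species, camera_id, datetime) tuples.
    camera_trap_days: total sampling effort in camera-trap days.
    Returns {species: independent detections per camera-trap day}."""
    last_seen = {}                 # (species, camera_id) -> time of last kept detection
    independent = defaultdict(int)
    window = timedelta(minutes=window_minutes)

    for species, camera, when in sorted(records, key=lambda r: r[2]):
        key = (species, camera)
        if key in last_seen and when - last_seen[key] < window:
            continue               # non-independent event: too close to the previous one
        last_seen[key] = when
        independent[species] += 1

    return {sp: n / camera_trap_days for sp, n in independent.items()}


# Toy example: two tapir videos 10 minutes apart count as one independent event.
recs = [
    ("Tapirus terrestris", "CT07", datetime(2016, 5, 1, 4, 10)),
    ("Tapirus terrestris", "CT07", datetime(2016, 5, 1, 4, 20)),
    ("Tremarctos ornatus", "CT17", datetime(2016, 5, 3, 22, 5)),
]
print(trap_rates(recs, camera_trap_days=100))
```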
The species richness of medium-sized and large mammals reported here is higher than that reported from other highland rainforest areas of Peru (Puno = , Apurimac River = , northern Peru = , and Kosñipata = ; Pacheco et al., , ; Jiménez et al., ; Medina et al., ). Furthermore, we detected seven species additional to the  recorded at nearby lowland Panguana Biological Station (Hutterer et al., ). Four of these species have a high-elevation distribution (spectacled bear, oncilla, longtailed weasel and Andean white-eared opossum); the other three are the pacarana, the short-eared dog and the agouti. Of particular significance was the first detection (to our knowledge) of the two largest land predators in South America at the same camera location, the spectacled bear and the jaguar, on the highest camera, at , m. Bears were captured at elevations as low as , m, and both species were subsequently recorded at a camera station at , m in Soqtapata Reserve (.°S, .°W); FIG. 2 Species richness accumulation curve with cumulative number of camera-trap nights, for medium-sized and large mammals in Sira Communal Reserve (Fig. ). The grey shaded area indicates the % confidence interval. Rafael Pilares, pers. comm.). It has been suggested that the elevation range of these two species in Peru and Bolivia does not overlap anywhere within a single mountain slope, and overlaps only slightly at c.  m throughout the Cordillera Oriental (Servheen et al., ). The closest known previous records of the bear to our study area are from the Pachitea Basin, determined by interviews conducted with the inhabitants of the Sira Communal Reserve, and from products derived from bears avaialable for sale in the local markets of Puerto Inca, El Sira and Llullapichis (Figueroa, ; Hurtado et al., ). Compared to the range of the spectacled bear documented on the IUCN distribution map for the species (Velez-Liendo & García-Rangel, ), our photographic records together with those of Figueroa () and Hurtado et al. () indicate an eastwards extension of c.  km. If the bear is distributed throughout the high elevations of the Sira range this could represent an increase in its known habitat of up to , ha, enough habitat for c.  adult individuals, based on previous density estimates from other locations (Kattan et al., ; Ríos-Uzeda et al., ). As various individual spectacled bears were recorded on several occasions and in different locations, these are unlikely to be records of vagrant individuals. Another threatened species of note is the oncilla, one of the smallest cats and least known Neotropical mammals (Tortato & Oliveira, ; Hurtado et al., ). A search of the literature and museum databases (Hurtado et al., ) yielded only  records from the past  years, of which only three were since , all within the montane forest of the Peruvian Yungas region (Hurtado et al., ). Based on the current IUCN distribution map for this species, our records are c.  km west of the known range (Payan & Oliveira, ). Other large-bodied species of note detected that require large intact habitats include the lowland tapir and the giant anteater, suggesting a high degree of ecological integrity within the core Sira Reserve. 
The presence of many small, rare and cryptic species, including the margay, the short-eared dog and the pacarana, further underlines the importance of the Reserve in sustaining species of conservation significance (Bickford et al., ). The Peruvian woolly monkey is a key species for seed dispersal, supporting carbon stores and maintaining ecosystem viability (Bello et al., ; Estrada et al., ); it is also a favoured target for hunters, as we discovered, and is susceptible to local extirpation (Peres, ). In addition to the direct evidence of hunting in the Reserve recorded by the camera traps, informal interviews with members of local communities living within the buffer zone confirmed the disappearance of key bushmeat species in nearby lowland areas (in particular the Peruvian woolly monkey, the black-faced spider monkey Ateles chamek and the white-lipped peccary Tayassu pecari). This loss of game vertebrates is driving hunting pressure into the upper reaches of the forest within the core reserve area. FIG. 3 Elevational distribution of mammal species recorded by camera traps along the survey transect in Sira Communal Reserve (Fig. ). Cameras placed on trails and at specific habitat features are represented by triangles, mean elevation of detections is represented by filled circles, and maximum and minimum elevation records are represented by vertical lines. Subsistence hunting is conducted when the hunter has no viable alternatives, especially where other food options are not readily available (Ripple et al., ). During our stay in the communities close to our study site we observed that they reared many animals for personal consumption (chicken, cows, ducks and pigs), and informal conversations confirmed that hunting is practised predominantly as a cultural legacy (enjoying bushmeat, collecting trophies and a social activity with family and friends), not for survival. In  we witnessed illegal logging inside the core area, at , m. Previously most anthropogenic impact had occurred outside the core area; however, our direct evidence, along with satellite imagery showing canopy loss in the northern reaches of the core zone (Novoa et al., ), indicates this is a real threat to the protected core region. The National Service of Natural Protected Areas (SERNANP) is limited in its capacity to patrol large remote forested landscapes. In such cases, remote sensing technologies such as camera traps and acoustic monitoring devices have been proven to be effective as complementary tools for monitoring illegal activities in protected areas (Hossain et al., ). The remoteness and rugged terrain of the Cerros del Sira have so far protected it from extensive human pressure within the core area of the Reserve. However, evidence of defaunation of other remote areas (Fa et al., ; Galetti & Dirzo, ) suggests that we cannot rely on isolation alone to protect this important conservation area. Fragmentation and isolation could be detrimental to the Cerros del Sira; surrounding areas must retain sufficient integrity and connectivity between key protected areas of the Oxapampa-Asháninka-Yánesha Biosphere Reserve to facilitate species migration and gene flow for viable populations. Although the surrounding lowlands have been subject to widespread ecological degradation, the high-elevation areas still maintain a unique assemblage of lowland and highland tropical rainforest mammals, many of which are threatened or poorly known.
The presence of these species indicates the Reserve has a high degree of ecological integrity and remains one of the few intact wilderness regions (Watson et al., ). Although the species in these highlands have thus far been protected from habitat loss, the increasing human population and demand for economic development are exerting increasing pressure on their habitats (Soh et al., ). If such pressures are combined with climate change induced range shifts (Forero-Medina et al., ), species losses may be catastrophic. There is a need for increased protection of the roadless, intact core area of the Reserve and to ensure the maintenance of connectivity to other key protected areas (Kearney et al., ). The development of plans in collaboration with local people and national park authorities to help create viable sustainable livelihoods that limit impacts on biodiversity could foster this, in addition to providing park rangers with the necessary facilities and resources to implement protection of the Reserve.
Long-term follow-up after bronchoscopic lung volume reduction treatment with coils in patients with severe emphysema Background and objective Bronchoscopic lung volume reduction coil (LVR-coil) treatment has been shown to be safe and clinically effective in patients with severe emphysema in the short term; however, long-term safety and effectiveness has not been evaluated. The aim of this study was to investigate the long-term safety and effectiveness of LVR-coil treatment in patients with severe emphysema. Methods Thirty-eight patients with severe emphysema (median age is 59 years, forced expiratory volume in 1 s is 27% predicted) who were treated in LVR-coil clinical trials were invited for a voluntary annual visit. Safety was evaluated by chest X-ray and recording of adverse events and by efficacy by pulmonary function testing, 6-min walk distance (6MWD) and questionnaires. Results Thirty-five patients visited the hospital 1 year, 27 patients 2 years and 22 patients 3 years following coil placement. No coil migrations were observed on X-rays. At 1-year follow-up, all clinical outcomes significantly improved compared with baseline. At 2 years, residual volume % pred, modified Medical Research Council (mMRC) and the SGRQ score were still significantly improved. At 3 years, a significant improvement in mMRC score remained, with 40% of the patients reaching the 6MWD minimal important difference, and 59% for the St George's Respiratory Questionnaire (SGRQ) minimal important difference. Conclusions Follow-up of the patients treated with LVR-coils in our pilot studies showed that the coil treatment is safe with no late pneumothoraces, coil migrations or unexpected adverse events. Clinical benefit gradually declines over time; at 3 years post-treatment, around 50% of the patients maintained improvement in 6MWD, SGRQ and mMRC. INTRODUCTION Bronchoscopic lung volume reduction (BLVR) is a new minimally invasive treatment option for patients with severe emphysema. 1 BLVR with one-way endobronchial valves, a 'blocking' device, is an efficacious method in a selected group of patients with absence of collateral ventilation (CV). 2,3 For the majority of patients with severe emphysema, a BLVR treatment that works independently of CV, a 'nonblocking' device, must be used. One of the currently investigated non-blocking devices is the lung volume reduction (LVR) coil (RePneu, PneumRx, Inc., Mountain View, CA, USA). This nitinol coil is bronchoscopically delivered in both lungs in either upper or lower lobe heterogeneous emphysema or homogeneous emphysema, 4,5 thereby compressing diseased parenchyma and radially suspending airways after placement in the lung. To date, five studies investigating LVR-coil treatment have been published. [4][5][6][7][8] Four non-randomized studies (n = 10, 11, 16 and 60 patients) 4,6-8 and one randomized study (24 controls and 23 treated patients) 5 showed that the procedure is feasible, safe and well tolerated. Significant improvements in quality of life, exercise capacity and pulmonary function were observed. 4,5,7,8 Most studies had relatively short follow-up times: 3 months, 5,6 6 months 4,8 and one study up to 12 months after treatment. 7 To our knowledge, no study investigated a longer follow-up time after LVR-coil treatment. This longer follow-up time is needed to document both safety and effectiveness of the procedure. In our hospital, we performed two pilot studies investigating bronchoscopic LVRcoil therapy, with treatments in 2009 and 2010. 
The aim of this study is to investigate the safety and effectiveness of LVR treatment with coils 1, 2 and 3 years post-treatment in patients with severe emphysema who participated in pilot trials. Study population Between April 2009 and November 2010, 38 patients were treated with the LVR-coil at our institution, in one of two pilot studies (NCT01220908 4 and NCT01328899 7 ). The inclusion and exclusion criteria for both can be found in Table S1. Both studies were approved by the University Medical Center Groningen Medical Ethics Committee, and all participants signed informed consents. LVR-coil treatment The LVR-coil procedure has been described before. 4,6 In brief, the coils (RePneu, PneumRx Inc.) are made of shape-memory nitinol wire, range in length from 70 to 200 mm to accommodate airways of different sizes and are designed to compress the lung parenchyma. The coils were bronchoscopically placed under general anaesthesia in two sequential procedures using fluoroscopy. Study design The follow-up period of both studies were 6 4 and 12 months 7 after the second treatment. After completing and exiting the study, patients were invited for a voluntary annual follow-up visit. Patients performed pulmonary function measurements, 6-min walk test (6MWT) and chest X-ray and completed questionnaires. Patients also had a consultation with a physician who reported the patient's health status during the past year. Measurements Spirometry, bodyplethysmography and the 6MWT were performed using European Respiratory Society/ American Thoracic Society (ATS) guidelines. [9][10][11] Health-related quality of life was measured by the St George's Respiratory Questionnaire (SGRQ) 12 and dyspnoea severity by the modified Medical Research Council (mMRC) dyspnoea scale. 13 Safety was measured by recording all adverse events reported by the patients during the yearly follow-up visits. The first X-ray after the treatment and the last performed X-ray at final follow-up visit for all participants were assessed for presence of coil migration (defined as displacement of the original posttreatment coil position in the segment), atelectasis and consolidation of tissue around the coils. Pre-treatment decline in forced expiratory volume in 1 s All available spirometry results of the pre-treatment years were collected from the patient's own hospital, serving as a reference of the expected decline in lung function of our patients. Lung transplantation Two patients underwent a lung transplantation: one patient at 1 year and the second patient at 4 years post-treatment. Both patients gave permission for histopathological examination of the explant. The lung tissue was processed according to routine clinical guidelines for confirmation of disease diagnosis and assessment of any potential concurrent disease. Haematoxylin and eosin stains were made on lung sections after careful removal of the nitinol coils, and representative sections were photographed and unedited used for presentation in this study. Statistical analysis Due to non-normally distributed data, Wilcoxon signed rank tests were performed to compare the clinical characteristics at 1-, 2-and 3-year follow-up against baseline and to compare if baseline characteristics differed between responders and nonresponders at 3-year follow-up. 
For the responder analyses, we counted the number of patients who reached the earlier established minimal important difference (MID) for forced expiratory volume in 1 s (FEV1) (100 mL 14 and 10%), RV (400 mL 15 ), 6-min walk distance (6MWD) (26 m 16 ), and the SGRQ (4 points 17 ). The annual change in post-bronchodilator FEV1 before the treatment was derived from the slope of the regression line for each patient's individual FEV1 values measured at their own hospital. We only calculated the annual change in FEV1 of patients when at least three FEV1 values were available. Paired sample t-tests were performed to compare the difference in the decline in FEV1 before and after the treatment. P-values < 0.05 were considered statistically significant. IBM-SPSS Statistics (v20) was used for statistical analysis (IBM, Armonk, NY, USA). Patients The baseline characteristics of the 38 patients are shown in Table 1. One year after the treatment, 35 patients performed follow-up measurements, at 2 years 27 patients and at 3 years 22 patients (Fig. 1). Safety The adverse events are shown in Table 2. Six patients (16%) died during the 3-year follow-up independent of the treatment. The causes of death are reported in Table 2. Two patients had a pneumothorax directly after the coil procedure; however, no long-term pneumothoraces occurred. Of the patients, 74% reported a very mild haemoptysis just postprocedure; only one patient reported spontaneous settling of more severe haemoptysis at 3-year followup. On the follow-up chest X-rays, we observed no coil migrations, a segmental atelectasis was visible in 3 patients (8%) and consolidation of tissue around some of the coils in 11 patients (29%) (see Fig. 2 for the first X-ray post-procedure and the follow-up X-ray at 3-year follow-up of two example patients). Effectiveness At 1-year follow-up, forced vital capacity, RV, RV/total lung capacity, mMRC, 6MWD and SGRQ total score were all significantly improved compared with baseline. At 2-year follow-up, RV, mMRC and the SGRQ total score were significantly improved when compared with baseline. At 3-year follow-up, only the mMRC was significantly improved compared with baseline. The other clinical characteristics were not significantly changed at 3 years compared with baseline ( Table 3). The number of patients reaching the MID for FEV1 ranged from 20-30% (absolute change) to 30-40% (relative change) throughout the 1-to 3-year followup. The number of patients reaching the MID for RV decreased during the 1-to 3-year follow-up, from 51% to 19%. The number of patients reaching the MID for 6MWD decreased during the 1-to 3-year follow-up from 57% to 40%. The number of patients reaching the MID for SGRQ ranged from 50% to 60% throughout the 1-to 3-year follow-up (Table 4). No differences were found in baseline characteristics between patients who reached the MID for SGRQ or 6MWD at 3-year follow-up compared to patients who did not reach the MID. Pre-treatment decline in FEV1 At least three previously performed FEV1 measurements were available for 30/38 patients (79%). The median number of available measurements was 9 (range 3-23) and the median number of days for the first available measurement before treatment was 1989 days (range: 292-4376). The mean decline in FEV1 before the LVR-coil treatment was −0.082 L/year (standard deviation: 0.073). This was significantly different compared with the mean decline in FEV1 during study participation (mean decline: −0.036 L/ year, P = 0.018). 
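For illustration, the slope-based estimate of annual FEV1 change and the paired comparison described in the Methods could be computed along the following lines; this is a minimal sketch with hypothetical patients, time points and values (including a hypothetical 6MWD responder count against the 26 m MID), not the study's actual analysis code.

    # Minimal sketch of the per-patient FEV1 slope and paired comparison (hypothetical data).
    import numpy as np
    from scipy import stats

    def annual_fev1_change(days, fev1_litres):
        """Annual FEV1 change (L/year) from the slope of a least-squares regression line.
        Returns None when fewer than three measurements are available, as in the study."""
        if len(days) < 3:
            return None
        slope_per_day, _intercept = np.polyfit(days, fev1_litres, 1)
        return slope_per_day * 365.25

    # Hypothetical pre-treatment and during-study series (time in days, FEV1 in litres).
    pre = {"pt01": ([0, 400, 900, 1500], [1.05, 0.98, 0.90, 0.84]),
           "pt02": ([0, 300, 700, 1200], [0.88, 0.85, 0.80, 0.75]),
           "pt03": ([0, 500, 1100, 1600], [0.95, 0.89, 0.84, 0.78])}
    post = {"pt01": ([0, 180, 365, 730], [1.20, 1.18, 1.15, 1.12]),
            "pt02": ([0, 180, 365, 730], [1.00, 0.99, 0.97, 0.95]),
            "pt03": ([0, 180, 365, 730], [1.05, 1.04, 1.02, 1.00])}

    pre_slopes = [annual_fev1_change(*pre[p]) for p in sorted(pre)]
    post_slopes = [annual_fev1_change(*post[p]) for p in sorted(post)]

    # Paired comparison of the decline before versus during study participation.
    t_stat, p_value = stats.ttest_rel(pre_slopes, post_slopes)
    print(f"mean pre-treatment change: {np.mean(pre_slopes):.3f} L/year")
    print(f"mean during-study change:  {np.mean(post_slopes):.3f} L/year")
    print(f"paired t-test: t = {t_stat:.2f}, P = {p_value:.3f}")

    # Responder count against a minimal important difference, e.g. 26 m for the 6MWD.
    baseline_6mwd = np.array([290.0, 350.0, 310.0])   # hypothetical values in metres
    year3_6mwd = np.array([330.0, 360.0, 300.0])
    responders = int(np.sum((year3_6mwd - baseline_6mwd) >= 26))
    print(f"6MWD responders at 3 years: {responders}/{len(baseline_6mwd)}")

The same slope-based approach applies to the post-treatment series discussed next.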
The decline in FEV1 after more than 6 months of follow-up did not significantly differ compared with the decline before the treatment (mean decline: −0.060 L/year, P = 0.45) (Fig. 3). Lung transplant explant evaluation On gross macroscopic evaluation of the lung explants, the coils could be identified in the main segmental and sub-segmental airways. No vascular disruptions were noticed, nor were there any abscess formations in the coiled regions. Histopathological examination revealed in both patients, besides presence of emphysematous tissue, a thin, compressed capsule of tissue around the imprints of the airways with a slight inflammatory reaction. It was unclear whether these changes represent pre-existing pathology in these patients or if this is associated with device placement. In the 1-year specimen, the presence of interstitial fibrosis of alveolar septa with the device 'capsule' and the surrounding alveolar parenchyma was visible. In the 4-year specimen, the device imprint in the airways was surrounded by a well-organized fibrous capsule comprised of compressed, concentric rings of stroma, and this was also found in the alveolar parenchyma, where the device imprint was in an area of more dense fibrous tissue. No abundant inflammatory reaction or infection was found in either explant (see Fig. 4a-f). DISCUSSION This was the first study that investigated the longterm safety and effectiveness of bronchoscopic LVR treatment with nitinol coils. In this trial, we followed our first pilot study patients over the years and showed that the treatment is safe in the long term. After 1 year, the treatment was found to be clinically effective compared with baseline, with a median gradual decline of the clinical benefits over time, with 3-year follow-up approaching similar parameters to the pre-treatment baseline for the overall group and with a responder rate of 59% of the patients reaching MID for SGRQ and 40% for 6MWD at 3 years. In the 3-year follow-up of our pilot studies, patients showed that the LVR-coil treatment was safe in the long term. We witnessed no late pneumothoraces, no coil migrations, no major haemoptysis, no major infectious complications or unexpected adverse device events and no treatment-related deaths. The 3-year survival in our group (84%) is in line with survival reports in the literature for comparable patient populations. Lange et al. reported a 74.2% 3-year survival, 18 and a 55-65% 3-year survival is reported when using Collaborative Cohorts to Assess Multicom-ponent Indices of COPD in Spain, Global Initiative for Chronic Obstructive Lung Disease, or ATS/Body-Mass Index, Airflow Obstruction, Dyspnea, and Exercise Capacity Index in Chronic Obstructive Pulmonary Disease severity criteria. 19 Evaluation of post-lung transplant-explanted lung tissue showed that the proximal and mid portions of the coils can still be found in the segmental and sub-segmental airways, encapsulated by some fibrotic/organizing reaction, with occasionally the most distal part of the coils being encapsulated in the surrounding lung tissue, but with no signs of serious inflammatory or infectious reactions. These findings indicate that there is tendency of the airways and lung tissue to slowly organize around the coils, which might be due to local tissue stress, compression and micromovements of the coils. The treatment was beneficial for a large group of patients after 1 year, with overall mean clinical parameters returning to baseline values at 3 years. 
Unfortunately, we did not have a control group in which we could investigate the natural decline of clinical parameters. However, the National Emphysema Treatment Trial (NETT) study 20 that investigated lung volume reduction surgery (LVRS) in severe emphysema patients with a median follow-up of 4.3 years reported that clinical parameters like SGRQ declined in both the treatment and control group. 20 To estimate the natural rate of functional decline in our patients, we collected all available pre-treatment spirometries. We found that the rate of decline did not change after the LVR-coil treatment but that treatment increased FEV1 to the extent that return to pre-treatment baseline levels occurred only after approximately 3 years (Fig. 3). That the rate of decline did not change is unsurprising; two other studies investigating LVRS also showed that the rate of decline after surgery was comparable with the rate of decline before surgery. 21,22 We believe it is as important to evaluate clinical significance as it is with statistical significance of outcomes from treatment. Therefore, we also investigated whether patients reached the MID for FEV1, RV, 6MWD and SGRQ at each time point. However, a confounding factor is that most MIDs were calculated for short-term changes, ranging from 1 15 to 6 16 months post-intervention. A long-term MID (for example 3 years) could be lower than an MID for the short term. Therefore, the MIDs used in our analyses could underestimate the number of meaningful responders at 3 years. Unfortunately, this is not known and would be interesting to investigate. We did not find any pre-dictive factors to identify responders at 3-year followup. However, our sample size was too small to be able to evaluate this in detail. Current ongoing large randomized controlled trials (NCT01608490 and NCT01822795) will possibly give more insight in the best responder profile for this treatment. Long-term follow-up after BLVR with coils has not been investigated before. A few other studies investigating other LVR techniques included at least 12 months follow-up. The NETT study 20 found that 20% of the patients improved more than 8 points on the SGRQ total score 3 years after LVRS (patients who died or were lost to follow-up were considered not improved). When we apply the same rules for improvement, 31% of our patients (n = 11) improved more than 8 points after 3 year. As in our study, the NETT study also found a larger improvement in the quality of life in the long term than in exercise capacity. Another study investigated the effect of lung sealant therapy for emphysema in 16 patients 2 years after the initial treatment. 23 They found a much higher number of patients who reached the MID for FEV1 2 years after the treatment, which is 50% compared with 19% in our population. Not much literature to date has been published on longer-term follow-up data for bronchoscopic LVR devices. Three small cohort studies investigated long-term follow-up of endobronchial valve treatment. Venuta et al. 24 showed promising results after 3 and 5 years follow-up. Unfortunately, patient loss to follow-up was not taken into account, and paired statistical analyses were not used, making the result difficult to interpret. A retrospective study by Kotecha et al. 25 showed that 6 out of 16 patients (38%) had sustained long-term improvements in FEV1 (Δ > 0), which is comparable with our study (at 2-year follow-up:11/27 (31%)). Furthermore, Hopkinson et al. 
26 showed that the occurrence of atelectasis following endobronchial valve treatment was associated with prolonged survival at 6 years follow-up. The major disadvantage of our study is the noncontrolled design and possible selection bias of patients who volunteered for yearly follow-up visits after participating in one of our pilot studies. Although a large number of patients did visit our hospital yearly, the results at 2-and 3-year follow-up should be interpreted with caution as patients with worse response could be presumed less likely to return for follow-up. It would be useful to investigate the long-term efficacy and safety of the LVR-coil treatment in a randomized controlled intervention study with long-term follow-up. Currently, a large (n = 315) randomized controlled trial with 5-year follow-up is enrolling patients and will give additional insight into the long-term effectiveness and safety of coil treatment (Lung Volume Reduction Coil Treatment in Patients With Emphysema Study: NCT01608490). In conclusion, follow-up of our very first pilot patients showed that LVR-coil treatment is safe in the long term, with no late pneumothoraces, coil migrations or unexpected adverse events. Clinical benefit gradually declines over time; at 3 years posttreatment, around 50% of the patients maintained improvement in 6MWD, SGRQ and mMRC.
2018-04-03T03:00:07.592Z
2014-11-23T00:00:00.000
{ "year": 2014, "sha1": "504e0e8b76bb14255b04acd350c66955592bd6fb", "oa_license": "CCBYNCND", "oa_url": "https://europepmc.org/articles/pmc4321042?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "504e0e8b76bb14255b04acd350c66955592bd6fb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14793072
pes2o/s2orc
v3-fos-license
Intermediate inflation in light of the three-year WMAP observations The three-year observations from the Wilkinson Microwave Anisotropy Probe have been hailed as giving the first clear indication of a spectral index n_s<1. We point out that the data are equally well explained by retaining the assumption n_s=1 and allowing the tensor-to-scalar ratio r to be non-zero. The combination n_s=1 and r>0 is given (within the slow-roll approximation) by a version of the intermediate inflation model with expansion rate H(t) \propto t^{-1/3}. We assess the status of this model in light of the WMAP3 data. I. INTRODUCTION The most striking result from the three-year Wilkinson Microwave Anisotropy Probe (WMAP) observations [1] is the pressure that they impose on the Harrison-Zel'dovich spectrum of density perturbations, for which adiabatic perturbations have the scale-invariant spectral index n s = 1. This spectrum was first proposed by Harrison [2] and Zel'dovich [3] because it has metric potential perturbations of the same amplitude on all scales. This allows small perturbation theory to hold on large and small scales, and would also allow primordial black-hole formation to occur over a wide range of mass scales if the amplitude of fluctuations was sufficiently large [4]. Harrison-Zel'dovich spectra arise in pure de Sitter inflationary universe models, but they have also been shown to arise from different non-inflationary cosmological situations [5]. The simplest versions of inflation, in which a finite period of de Sitter-like inflationary expansion occurs, naturally create such a spectrum of fluctuations because the dynamics have no preferred moment of time in de Sitter spacetime: an irregularity spectrum with identical metric perturbations on each scale respects this invariance. However, there are many variants of inflation for which the expansion dynamics are not of de Sitter form, and they predict different spectra of fluctuations; hence it is important to determine which (if any) of them are consistent with the current observational data. If adiabatic density perturbations are the only perturbations present, then the original WMAP3 parameter-estimation analysis suggested that the Harrison-Zel'dovich spectrum is excluded at quite high significance [1]. This significance has been reduced by re-analysis of the inflationary constraints by the WMAP team (available at Ref. [6]), from the viewpoint of the more sophisticated statistical approach of model selection [7], and by recent papers highlighting possible systematic effects [8], but it is nevertheless timely to explore possible interpretations of these data. The conclusion that n s = 1 is disfavoured by the data is restricted to the case where adiabatic scalar pertur-bations are the only ones present. The best-motivated generalization is the inclusion of tensor perturbations, which are predicted to be present at some level by inflation, and parametrized by the tensor-to-scalar ratio r. This is explored in some detail by the WMAP team [1], and in subsequent papers [9], with the conclusion that n s ≥ 1 is readily allowed provided that the value of r is significantly non-zero. In this Brief Report, we analyze a particular class of inflationary models which give this behaviour, the intermediate inflation model discussed in Refs. [10,11,12]. 
This was originally introduced as an exact inflationary solution for a particular scalar field potential, but is perhaps best-motivated as the slow-roll solution to potentials which are asymptotically of inverse power-law type, V ∝ φ^{−β}. This type of potential is in common use in quintessence models [13], but it also gives viable inflationary solutions, although with this precise potential form inflation is everlasting and a mechanism has to be introduced to bring inflation to an end. It also arises in a range of scalar-tensor gravity theories [14]. As shown by Barrow and Liddle [15], the intermediate inflationary model, in the slow-roll approximation, gives n_s = 1 to first order provided β = 2 (see also Ref. [16] for a more extensive discussion of the inflationary generation of the Harrison-Zel'dovich spectrum, and Ref. [17] for the construction of exact potentials giving n_s = 1 without slow-roll approximation). In this case, r depends significantly on scale, falling in value with time and hence becoming smaller on shorter length-scales. There will be an observable effect provided inflation ends swiftly enough, so that r was still important at the horizon crossing of observable perturbations. More generally, if β ≠ 2, the spectral index may exhibit running, approaching unity at late times; see also the review of this situation in Ref. [9]. II. PREDICTIONS OF THE MODEL A generalization of the intermediate inflation model [10] used in the earlier study of Ref. [15] has an expansion scale factor given by (with appropriate choice of time coordinate) This is an exact solution of the Friedmann equations (8πG = c = ℏ = 1) for a flat universe containing a scalar field φ(t) with potential V(φ), where It can be obtained using the solution-generating method of Ref. [18]. Without loss of generality φ_0 can be taken to be zero. We will now specialise to the pure intermediate inflationary model of Refs. [10,15] with B = 0 and A > 0. In the slow-roll approximation with B = 0, the first term on the RHS of Eq. (3) dominates V at large φ, and we have as the scalar field rolls down a power-law potential. The first two slow-roll parameters are then given, in the Hamilton-Jacobi formalism [19], by So, the condition for inflation to occur (ε < 1) is only satisfied when φ² > β²/2. A. First-order considerations In order to confront this model with observations, we need to consider the contribution of the scalar and tensor perturbations which can be represented by n_s and r, respectively. They are expressed in terms of the slow-roll parameters to first order by [20] We clearly see that n_s = 1 and r > 0 is possible, provided β = 2. This is the case where η = 2ε. We see that an exact scale-invariant spectrum can be obtained to leading order in slow-roll by both the de Sitter expansion, i.e. with a(t) = exp(H_0 t) and H_0 constant, and by the special intermediate inflationary dynamics with a(t) = exp(At^{2/3}).
FIG. 1: The two contours correspond to the 68% and 95% levels of confidence. The observational data is from the WMAP analysis at Ref. [6], which updates that of version 1 of Ref. [1]. The observational dataset used is WMAP alone, applied to the lcdm+tens model (without spectral index running).
Returning to the general case (0 < f < 1), this model can be compared with observations, shown in Fig. 1.
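For reference, the first-order relations used above can be summarized as follows; this is a reconstruction consistent with the quantities quoted in the text (the inflation condition φ² > β²/2, the result r = 16ε, and n_s − 1 = −β(β − 2)φ^{−2} quoted later), obtained by assuming H(φ) ∝ φ^{−β/2} in reduced Planck units, and is not a verbatim copy of the paper's numbered equations:

    % First-order Hamilton-Jacobi slow-roll relations for intermediate inflation,
    % reconstructed to be consistent with the in-text statements (not verbatim).
    \epsilon = 2\left(\frac{H'}{H}\right)^{2} = \frac{\beta^{2}}{2\phi^{2}}, \qquad
    \eta = \frac{2H''}{H} = \frac{\beta(\beta+2)}{2\phi^{2}},
    \\
    n_{s} - 1 = 2\eta - 4\epsilon = -\frac{\beta(\beta-2)}{\phi^{2}}, \qquad
    r = 16\epsilon = \frac{8\beta^{2}}{\phi^{2}}.

For β = 2 these give n_s = 1 with r = 32/φ², so the tensor fraction falls as the field rolls to larger φ, which is the scale dependence of r described above.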
The relation between the scalar and tensor perturbations is On one hand, we see that β > 2 is well supported by the data, while on the other, we see that β < 2 allows n_s > 1, but becomes rapidly disfavoured when β approaches 1. In order for our comparison of the intermediate inflationary model's predictions with the observations to be complete, we must also consider the time spent by the field in the region of the n_s-r plane allowed by the data. The number of e-foldings between two different values φ_1 and φ_2 of the scalar field is given by [15] If we assume that inflation begins at the earliest possible stage, that is at φ_1² = β²/2, then Eqs. (7) and (8) can be re-expressed in terms of the number of e-foldings, N_b, which have passed since the beginning of the inflationary period: If we consider a Harrison-Zel'dovich model (β = 2) with the inclusion of gravitational waves, then we see in Fig. 1 that the curve r = r(n_s) enters the 95% confidence region for r = 0.66, which corresponds to N_b ≃ 12. Since the point (n_s = 1, r = 0) lies just inside the two-dimensional 95% confidence contour, the model is viable for all larger values of N_b. B. Second-order corrections Next, we show that the second-order corrections to our analysis at first order in slow roll are small, and can be neglected to a very good approximation. Generalizing Eq. (7) in terms of the slow-roll parameters to second order, we have [20,21] n_s − 1 = 2η − 4ε − [8(1 + C)ε² − (6 + 10C)εη + 2Cξ²], (13) with C = −0.73 a known numerical constant, and ξ²(φ) ≡ εη − (2ε)^{1/2} dη/dφ. Putting β = 2 (so that we have n_s = 1 exactly to first order) in the above expression, we get the second-order correction to the spectral index: Finally, knowing that r = 16ε + O(ε²), we obtain to second order that The above calculation, which uses the exact solution, corresponds to the full potential Eq. (3). While in the full slow-roll approximation this gives the same result as the single power-law model, Eq. (5), at second order the potentials yield different results. A similar calculation to the above, but using the V-slow-roll approximation [19], shows that the denominator 64 is modified to 384/7 in that case. Observations [1] constrain r to be less than 0.65 (at 95% confidence). So, for either potential, this extra contribution in the case with β = 2 is quite negligible once the field enters the region allowed by the data. C. The running of the spectral index The running of the spectral index in inflationary models is given, to lowest order in slow-roll, by [20] dn_s/d ln k. Moreover, to lowest order n_s − 1 = −β(β − 2)φ^{−2}, which allows us to rewrite this relation as We can deduce from this relation that β = 2 implies no running of the spectral index to first order, which was already obvious from the comment following Eq. (7). Models with β > 2 feature positive running, which the WMAP3 data disfavor [1,6]. However, within the allowed region the predicted running is very small (for example, it is always less than 0.001 for the β = 4 case), and it would be premature to claim that the running constraint adds any value to the n_s-r constraints for these models. III. CONCLUSIONS The intermediate inflation model is a viable example of a model with n_s = 1 which is permitted by the observational data, due to the non-zero tensor contribution.
In this model, r is scale-dependent, and we have shown that a good fit to the WMAP3 observations is possible provided observable scales crossed outside the horizon at least 12 e-foldings after the earliest possible starting point for inflation. Arranging this requires that whatever mechanism is introduced to bring inflation to an end does so with φ > 14 in reduced Planck units, considering that a minimum of perhaps 50 e-foldings is required to push the perturbations to observable scales [20]. This model serves as a useful phenomenological illustration, in the light of WMAP3 data, of a type of simple slowly-rolling scalar field evolution that does not display pure de Sitter inflationary expansion, but can still produce a Harrison-Zel'dovich spectrum. For the more general intermediate inflation case with β ≠ 2, observations constrain β to be greater than about one, unless we are in the regime very close to the Harrison-Zel'dovich limit. Constraints from running do not presently add extra information. Kamionkowski while this work was completed, during a visit supported by the Royal Astronomical Society and by Caltech, and thanks Pia Mukherjee for discussions and advice.
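As a rough numerical cross-check of the e-folding estimate quoted in Sec. II A, one can evaluate r along the β = 2 trajectory. The relation φ² = 2βN_b + β²/2 used below follows from the slow-roll relations sketched earlier with inflation taken to start at φ² = β²/2; it is inferred from the quantities quoted in the text rather than copied from the paper's equations.

    # Rough check of the quoted e-folding threshold for the beta = 2 (Harrison-Zel'dovich) case.
    # Assumes phi^2 = 2*beta*N_b + beta^2/2 and r = 8*beta^2/phi^2, as reconstructed above.
    import numpy as np

    def r_of_Nb(N_b, beta=2.0):
        phi_sq = 2.0 * beta * N_b + beta**2 / 2.0
        return 8.0 * beta**2 / phi_sq

    for N_b in (5, 10, 12, 20, 50):
        print(f"N_b = {N_b:3d}  ->  r = {r_of_Nb(N_b):.3f}")

    # Smallest integer N_b at which r has dropped to the quoted entry value r = 0.66.
    N_b_grid = np.arange(1, 200)
    N_b_star = int(N_b_grid[r_of_Nb(N_b_grid) <= 0.66][0])
    print(f"r <= 0.66 once N_b >= {N_b_star}  (the text quotes N_b ~ 12)")

This reproduces the quoted threshold N_b ≃ 12 and shows r falling towards zero on shorter scales, consistent with the scale dependence of r discussed in the text.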
2016-12-22T08:44:57.161Z
2006-10-26T00:00:00.000
{ "year": 2006, "sha1": "5045ad664254b41209f2c87764a7f6a6cb9810b5", "oa_license": null, "oa_url": "http://sro.sussex.ac.uk/id/eprint/15965/1/PhysRevD.74.127305.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "5045ad664254b41209f2c87764a7f6a6cb9810b5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
4334491
pes2o/s2orc
v3-fos-license
Memory instability as a gateway to generalization Our present frequently resembles our past. Patterns of actions and events repeat throughout our lives like a motif. Identifying and exploiting these patterns are fundamental to many behaviours, from creating grammar to the application of skill across diverse situations. Such generalization may be dependent upon memory instability. Following their formation, memories are unstable and able to interact with one another, allowing, at least in principle, common features to be extracted. Exploiting these common features creates generalized knowledge that can be applied across varied circumstances. Memory instability explains many of the biological and behavioural conditions necessary for generalization and offers predictions for how generalization is produced. Our past experience can aid our current and future performance. For instance, being a skilled tennis player may help when it comes to playing other racquet sports such as squash. Encouraging the transfer of skill from one situation to another also lies at the heart of many brain training and rehabilitative strategies. The ability to generalize from a specific example to a category or concept is not restricted to the world of actions. The location of a reward in a navigation task can be found more quickly in subsequent versions of the task even though key aspects of the task, including the location of the reward, change [1]. Similarly, generalization can occur across different objects, facts, or events to create categories. Generalization therefore plays a key role in a wide array of cognitive functions. Yet, despite its clear importance and adaptive value, how and when generalization occurs is poorly understood. Instability as opportunity Generalization requires the identification of features common across experiences. For example, a common element of some navigation tasks is that a food pellet is never visible but is instead always buried within the sand. To discover the common feature requires a comparison and hence an interaction between memories for the different tasks. Interactions between memories can also lead to interference, after which a memory either is lost or becomes Common conditions It is when memories are unstable that transfer is most prominent. For example, a recently acquired perceptual skill for detecting a stimulus at one location can easily generalize or transfer to other new locations [14]. Similarly, a newly acquired motor skill learnt with one hand can be easily transferred to the other hand (i.e., intermanual transfer; [15][16][17][18]). Yet, transfer is much reduced once a memory has been stabilized through consolidation [11]. Even following consolidation, it is only after a memory has once again become unstable that it can be modified and integrated with other memories, which is a prerequisite for subsequent generalization [19]. Together, these studies suggest that for many tasks, memory instability and transfer are found in similar circumstances. Yet, this is only circumstantial evidence. Few studies have measured memory instability and subsequent performance transfer together. Behaviour connects instability to generalization Recent work has tested the link between instability and generalization [5]. Learning a sequence of actions improves subsequent learning of a sequence of words. Conversely, learning a sequence of words improves the subsequent learning of a sequence of actions. 
This reciprocal pattern of transfer between different types of knowledge occurs provided 2 conditions are satisfied. Firstly, the motor and word sequences must share a common abstract structure or grammar. What is transferred is the high-level or abstract relationship between elements rather than knowledge of the individual elements themselves (i.e., words versus actions). Secondly, the memory must be unstable for performance to transfer. The instability of the initial memory is correlated with subsequent transfer, suggesting that transfer is related to the instability of the memory. Yet, the relationship between instability and transfer goes beyond correlation. Stabilizing the initial memory, preventing it from being susceptible to interference, also prevented transfer to the subsequent memory task. Thus, transfer from one task to another was critically dependent upon memory instability. Modifying stability modifies transfer A manipulation that modifies memory stability can also modify transfer. Prolonged practice stabilizes a memory [20]. Usually, a new skill memory is so unstable that it can be disrupted by subsequently learning another different skill [4]. Yet, when the amount of training is increased, the newly formed skill memory is no longer susceptible to interference from further learning. The increased training has stabilized the memory. The increased training also reduces subsequent skill transfer. For example, after a short period of initial training on a visual task, participants show substantial transfer to a novel visual task, whereas after prolonged training, there is limited transfer [14,21]. Similarly, the transfer of skill between hands is frequently greater when there is a short rather than a prolonged period of initial training [22]. Extended training can also lead to the formation of habits, which show limited transfer [23]. Yet, not all extended training leads to the formation of a habit. Critically, a habit also requires a reduction in the importance of a goal [24]. The common feature across habits and extended training is the duration of practice, and thus, it seems likely that this is responsible for impairing transfer. Together these studies suggest that a manipulation (specifically, prolonged practice) can both stabilize a memory and reduce transfer from a learnt to a novel task.
Fig 1. Memory instability as an explanation for the trade-off between detailed knowledge and generalization. Each memory representation has common (blue bars) and unique features (memory A, red bars; memory B, green bars). (A) When unstable memories interact and interfere with one another, it leads (B) to the loss of detailed information about an event or action. For example, learning tennis (memory A) and badminton (memory B) in quick succession might lead to loss of skills specific to tennis. (C) However, the interaction between memories may allow the identification and extraction of shared common features between memories (blue bars). (D) Exploiting those common features allows knowledge to be applied broadly across a range of related situations. For instance, the skill acquired playing tennis can be applied or transferred to other related racquet sports (dark grey; squash and badminton) and also perhaps even to other somewhat related sports (light grey; cricket and baseball).
Instability provides an opportunity for interaction between memories, which can lead to their disruption and the loss of detailed knowledge, while simultaneously allowing shared features to be identified and exploited to allow generalization. As a consequence, (E) instability can explain the trade-off between detailed knowledge and generalization [5,8].
Box 1. Alternative mechanisms of generalization Different mechanisms may be responsible for supporting generalization under different circumstances. Learning may enhance the plasticity of a circuit, which could improve the learning of any subsequent task [9]. For example, learning a sequence of movements can improve how quickly participants adapt their movements to a novel visual environment, as occurs in prism adaptation. As a mechanism for generalization, it is potentially broad because it does not require shared attributes or knowledge between tasks; however, it does require similar or at least partially overlapping circuits to be involved in learning the different tasks. Within this framework, learning primes neuroplastic mechanisms, supporting the transfer of performance to subsequent tasks. Forgetting may also drive generalization [10]. Losing information that appears only in specific situations allows a memory to become less tied to a specific circumstance and thus able to be applied generally across a wide range of circumstances (forgetting model). By contrast, rather than losing irrelevant information and diminishing the specificity of a memory, it may be possible to identify relevant information, a pattern, or a feature that reoccurs across a range of situations, enhance knowledge for that feature, and thus increase the efficacy of the memory across a range of situations. Identifying these common features requires an interaction or communication between memories, which can occur when they are unstable (instability model; [5,7,11]). Thus, generalization might be achieved by losing knowledge for specific situations; equally, it may also be achieved by enhancing knowledge for features that are a motif across a family of tasks. Identifying the information that has to be either strengthened or weakened is an important challenge for both of these models of generalization. This challenge is substantial because it is also potentially a dynamic challenge. Initially, a feature of a task could recur across a family of tasks, and thus, strengthening knowledge of this feature would aid generalization. Yet, later in another circumstance, this same feature might be an idiosyncrasy of one particular situation, and therefore, strengthening knowledge of this feature, rather than aiding generalization, would only serve to increase the specificity of a memory. Another feature that these models share is that they link generalization to the loss of detailed information. In one model, forgetting (the loss of knowledge) drives generalization by decreasing the specificity of a memory. Yet, in the other, it is a side effect. The interaction between unstable memories can lead to identifying common features between tasks, producing generalization, but it also leads to the disruption and loss of detailed knowledge [5]. Overall, the different models of forgetting and instability envisage generalization arising by weakening or strengthening different aspects of a memory (i.e., specifics versus recurring motifs, respectively).
Both models share the common challenge of how to identify the information that needs to be strengthened or weakened, and both, either directly or indirectly, provide a link between generalization and the loss of detailed knowledge. These are examples of how generalization may arise. Each mechanism is better suited to, and perhaps can only operate under, specific circumstances, and thus, it seems likely that at least in principle, these mechanisms could act together in a complementary fashion, with the strengths of one compensating for the weaknesses of another. Prolonged practice leading to reduced transfer may seem counterintuitive. With prolonged practice comes increased proficiency, which might logically be expected to improve transfer. After all, the more knowledge is gained about one task, then the greater facility and hence perhaps the greater the potential to transfer performance to another related task. Yet, transfer does not seem to operate in this way. Prolonged practice appears to prevent rather than support transfer. Thus, transfer is not simply dictated by the accumulation of knowledge or performance. Instead, transfer occurs in the same circumstances as memory instability, is for at least one set of tasks critically dependent on instability, and can be prevented by prolonged practice, which stabilizes a memory. Together, these findings converge to suggest a link between memory instability and transfer. A common mechanism for stability and transfer Prolonged practice leads not only to memory stabilization and impaired transfer; it also leads to neurochemical changes. One such change is an increase in GABA within the cortex. Potentially, this increase following prolonged practice may be responsible for stabilizing the memory and for reducing transfer. Changes in GABA have been linked to changes in performance transfer. The concentration of GABA within the cortex can be modified using a noninvasive brain stimulation technique called direct current stimulation. In this technique, 2 electrodes (an anode and cathode) are placed on the scalp of a human participant, and a small current is passed between the electrodes (for a review, please see [25]). Placing the anode of a stimulation device over the motor cortex decreases GABA concentrations in this area [26]. Decreasing GABA in this way Box 2. Time window of generalization Generalization develops over a diverse range of timescales, from hours to weeks to even years [5,7,8,12]. In some cases, generalization develops during those hours of instability following initial memory formation. For example, immediately after learning, performance can transfer from one sequence to a different type of sequence (action versus words) provided that the sequences share a common structure [5]. Equally, immediately following the formation of a memory, fear can transfer from one context to another neutral context [7]. In these examples, generalization develops within hours during a single episode of instability following memory formation. On other occasions, generalization can take weeks and potentially years to develop [8,12]. Perhaps the features common across some tasks are so complex that they require multiple episodes of instability to be identified, which increases the time necessary for generalization to develop. Instability on multiple occasions may be possible because of memory replay during sleep or perhaps memory reactivation when other similar new memories are being encoded (for a short review, please see [13]). 
Even once a common feature between memories has been identified, other subsequent processes may be required for generalization to be expressed (for example, forgetting; see Box 1). In this scenario, instability is necessary to trigger the development of generalization but is not in itself sufficient. These processes might act together, in some cases, with multiple episodes of instability, triggering other complementary processes, which take time to develop, and subsequently support generalization. contralateral to the trained hand enhances subsequent transfer to the untrained hand [27]. Thus, GABA is linked to performance transfer: an increase in GABA, due to prolonged practice, impairs transfer, while a decrease in GABA, due to current stimulation, enhances transfer. Equally, an increase in GABA has been linked to an increase in memory stability [20,28]. Together, these studies reveal a mechanistic link between stability and transfer. Subsequent studies may further test the nature of this link by using pharmacological methods to specifically modify GABA. However, even with converging evidence to show that GABA mechanistically links stability with transfer, it should not be assumed that this link depends solely on GABA. Potentially, GABA is only one component of what is likely to be shown, in time, as a complex and diverse neurochemical mechanism linking memory stability to performance transfer. Circuits of stability A link between memory stability and generalization is also present at the level of networks and brain circuits. One part of a network critical for generalization and for the interaction between unstable memories appears to be the prefrontal cortex. Several studies have shown that the prefrontal cortex makes a critical contribution to generalization. For instance, lesions to the ventromedial prefrontal cortex in humans or disruption to prefrontal function with transcranial magnetic stimulation (TMS) prevents semantic generalization [29,30]. This is when participants learn a list of semantically related words and subsequently incorrectly identify another semantically related word as coming from the list [31]. These errors due to semantic generalization, frequently called false memories, are decreased when the function of the prefrontal cortex is impaired. Other studies have shown that the prefrontal cortex makes a critical contribution to memory instability. Disrupting the function of the human prefrontal cortex, with TMS, prevents newly formed unstable memories from being susceptible to interference [32]. Similarly in rodents, a lesion to the frontal cortex also prevents the interaction between new unstable memories ( [33]; for a review, please see [4]). Together, these studies suggest that the prefrontal cortex is responsible for creating interference between newly acquired memories [4]. The prefrontal cortex may support interference between memories by affecting their representation. Unstable newly formed memories have an overlapping representation within the hippocampus [6,7]. The prefrontal cortex exerts an influence upon the representation of motor skill memories in the primary motor cortex, and thus, at least in principle, it may have a similar role in influencing the representation of memories in the hippocampus [34,35]. Disruption to the prefrontal function could then transform the overlapping representation to a set of independent representations. 
Without an overlap, there may be no communication or interference between the memories, which is consistent with the work on rodents and humans [32,33]. Envisaging the overlapping representations as providing communication between memories would explain their vital contribution to transfer [7]. Information from one memory, for example, about the fear associated with one context has to be communicated to another memory for fear to transfer to a previously neutral context. Thus, instability and susceptibility to interference may be achieved by the prefrontal cortex creating overlapping memory representations, which are critical for transfer. This may explain the critical contribution of the prefrontal cortex to semantic generalization. A similar mechanism may also allow generalization between old and recently formed memories. Previously formed memories are reactivated when related new information is being encoded into a memory. Reactivation of the memory is due to a dialogue between the medial prefrontal cortex and the hippocampus ( [36][37][38]; for a review, please see [39]). When reactivated, an old memory becomes once again represented within the hippocampus, and it reverts to an unstable state, which allows it to be modified [40]; it can be then strengthened or integrated with a new memory [19,41]. The instability, modifiability, and capacity to be integrated with new memories suggest a communication between old and new memories, which may be achieved by the old reactivated memory sharing an overlapping representation with the new recently formed memory ( [6]; for a review, please see [10]). Such communication between the memories may allow the identification of common elements, or motifs, which in turn supports the creation of generalizable knowledge. Once the common elements have been identified, the newly acquired memory may quickly cease to be represented within the hippocampus, instead becoming part of a cortical representation of the common properties shared across the old and new memory (i.e., part of a schema; [1]). Overall, the prefrontal cortex mediates the reactivation of an old memory, leading to it becoming once again unstable and able to have an overlapping representation with a new related memory, which provides perhaps the basis for communication between memories and generalization across them. Prefrontal circuits are not unique in making a critical contribution to generalization. The circuit can alter depending upon the nature of the common characteristic or repeating regularity that is being generalized. For instance, a circuit that includes the ventromedial prefrontal cortex is critical for semantic generalization [29,30]. By contrast, when the common feature is no longer semantic but instead, for instance, spatial position, then another brain area, the angular gyrus, is critical for generalization across tasks [42,43]. Similarly, the circuit critical for a newly formed unstable memory alters depending upon the type of information learnt [44][45][46][47]. While the circuit dedicated to memory stability and generalization may vary, what does not vary perhaps is the overlapping relationship between stability and generalization. Instability: A trade-off between detail and generalization Detailed knowledge can easily be lost when a newly formed memory is unstable and susceptible to disruption. For example, rather than recalling a complete list of 12 words, a person might only recall 10 words [5,32,48]. However, this loss of detailed knowledge can come with a benefit. 
There is a positive correlation between the loss of detailed knowledge and transfer of performance to a subsequent related task. For example, skill learnt in performing a sequence of actions is lost in direct proportion to the performance transferred to learning a sequence of words [5]. Similarly, knowledge of word sequence is lost in direct proportion to the performance transferred to an action sequence. This pattern of reciprocal transfer is observed when the sequence of words or actions share a common abstract structure. Converging with this behavioural work showing a trade-off between detailed knowledge and generalization is more recent functional imaging work (Fig 1, [8]). Patterns of activation within the human hippocampus have also revealed a trade-off between detailed knowledge and generalization [8]. Maintaining a detailed knowledge of a learnt association between an object and a scene was measured as the match between the pattern of neural activation at the initial encoding and its subsequent retrieval. A tight match between the pattern of activation at encoding and retrieval indicated retention of detailed knowledge. Each of the 128 objects was uniquely paired with one of only 4 scenes. This allowed memories to be related to one another through a common shared scene. Similarity of activation between those memories with a shared common scene provided a measure of generalization. Using these analysis techniques provided measures of both detailed knowledge retention and generalization, which were negatively correlated to one another, revealing a trade-off between detailed knowledge retention and generalization. Overall, for both the behavioural and the functional imaging work, the greater the loss of detailed knowledge is, the greater the ability to generalize. This trade-off can be explained by memory instability being necessary for generalization. Instability makes a memory susceptible to interference. The greater this instability is, the greater the interference, and the greater the loss of detailed knowledge. However, instability also increases the interaction between memories, potentially allowing the identification of shared common features, which can be exploited to allow transfer and generalization between different but related situations. Thus, instability may well explain the trade-off between the loss of detailed knowledge and generalization (Fig 1). Reward as a modifying factor of memory stability and generalization After its formation, a memory is stabilized over several hours. Such offline processing can be affected by modifying factors, one of which appears to be reward. For example, providing a reward following learning for the retrieval of specific items enhances the subsequent recall of those items [49]. Similarly, rewarding the acquisition of a motor skill enhances the skill improvements that develop offline during consolidation [50]. The connectivity of circuits and what happens within those circuits following memory formation are both affected by reward. An increase in connectivity between the visual cortex and both the anterior hippocampus and the ventral tegmental area is associated with a reward [51]. There is also an increase in the frequency with which the pattern of neural activity present at memory formation is replayed offline during consolidation [52,53]. Together, these studies provide evidence that reward can affect offline memory processing during consolidation. A wide range of memory changes occur during offline processing [54]. 
Stabilization is but one example of these changes. Other examples of offline processing, such as memory enhancement, are clearly affected by reward [50]; yet, what remains less clear is whether specifically memory stabilization is affected by reward. It is conceivable that a reward can affect memory stability. After all, reward leads to the release of dopamine, affecting synaptic plasticity mechanisms, which, at least in principle, may shorten the interval that a memory is unstable for [55,56]. This would suggest that reward might stabilize a memory. Increasing the stability of a memory with a reward may reduce the propensity to transfer learning between related tasks. At first, the idea that reward will impair a behaviour, in this case the transfer of learning, may seem counterintuitive; however, it may have important adaptive benefits. Reward may be sculpting behaviour so that high performance is focused precisely on those tasks that yield a reward, and not upon related tasks that may not yield any reward. Overall, it seems highly likely that memory stability can be manipulated with reward; yet, what remains to be tested is whether this manipulation affects subsequent transfer between related tasks. Stressing memory As is the case for a rewarding stimulus, a stressful or aversive stimulus can affect the offline processing of a memory. For example, in rodent studies, applying a foot shock immediately after memory formation can enhance subsequent consolidation [57]. Stress may act to stabilize a memory or at least reduce its susceptibility to interference by altering the connectivity of circuits so that memory processing becomes isolated or independent from other processing [58]. With this reduced vulnerability to interference, a memory is stabilized, and retention improved. Whether these changes in the memory stability due to stress affect the development of generalizable knowledge is not currently clear. Some studies have suggested that stress increases generalization, others that it makes minimal difference, and others that it decreases generalization [59][60][61]. For some of these studies, stress did not increase knowledge retention, and thus, there is no evidence that stress affected consolidation or memory stability. When knowledge retention is increased, indicating enhanced memory stabilization, there is a decrease in generalization. The latter observations are consistent with the model proposed here, with stability favouring accurate detailed retention whilst instability favours generalization. Memory instability is also implicated in the transfer of fear from one context to another. Fear paired within one context will transfer to another neutral context when, and only when, the acquisition of a memory for both contexts is separated by only a few hours (∼5 hours; [7]). There is no transfer when the interval between acquiring the memories is increased to 1 week. A similar time course is followed by memory instability, with a memory being unstable and susceptible to interference within the first hours of its formation and stable within a day (and certainly within 1 week) [4]. Instability and subsequent transfer are present over a similar time window. Thus, it is conceivable that the capacity to transfer fear between contexts is related to and, consistent with the current model, potentially relies upon memory instability. Transfer is also linked to instability through how memories are represented. When acquired in quick succession, memories share a neural circuit, or ensemble.
When the overlap between memory ensembles is decreased through aging, transfer of fear to a new context is impaired [7]. Conversely, when the overlap in aging rodents is rescued through experimental manipulation, the transfer of fear to a new context is once again possible. What this beautifully illustrates is that an overlapping representation is necessary for subsequent transfer. The overlapping representation, on the one hand, provides a means for different memories to interact, common features to be identified, and transfer to happen. Yet, on the other hand, it makes memories susceptible to interference and perhaps makes them unstable. Overall, (A) reducing memory instability with aversive stimuli impairs transfer; (B) memory instability is transient, lasting for only a matter of hours, and it is only during this time window that transfer of fear from one context to another is possible; and (C) instability may be due to an overlapping memory representation, which is critical for transfer. Together, these results converge to suggest that memory instability may have an important role to play in transfer and the creation of generalizable knowledge (please see "Predictions," Box 3). Brain state Sleep has been linked to supporting generalization (for a review, please see [64]). During sleep, memories continue to be processed, are enhanced, and are reorganized [54]. The reorganization of past events potentially allows hidden patterns to be uncovered. For instance, infants demonstrate knowledge for an artificial grammar of nonsense letter strings only after sleep during a nap. This is achieved by identifying repeating patterns-in this example, the grammatical structure common to letter sequences. Specifically, the first syllable ("PEL") predicts the final syllable, and as a consequence, both "PELwadimRUD" and "PELchilaRUD" are valid grammatical structures for these nonsense letter strings [65,66]. Sleep has also been implicated in transitive inference when the high-order structure of the relationship between arbitrary symbols (such as fractal patterns) is uncovered based solely upon exposure to low-order relationships [67]. For example, when participants are exposed to simple pairings such as A > B, B > C, and C > D, the appropriate inference from exposure to these is that A > D, which is enhanced over sleep. Thus, sleep provides an environment that promotes the extraction of rules, the identification of repeating abstract patterns, and generalization across tasks. Generalization during sleep may be linked to memory instability. Memories are reactivated during sleep, which may cause the memory to become unstable. The pattern of neural activity present during memory formation can be found again during sleep. For example, the neuronal firing patterns during a motor learning task are replayed again in the motor cortex of a rodent Box 3. Predictions Memory instability may provide one gateway to the development of generalizable knowledge. As a consequence, modifying memory stability through fear, reward, or even prolonged practice could modify subsequent transfer of performance across related tasks or situations. Reward and fear are both predicted to decrease transfer because both enhance consolidation and thus are assumed to increase memory stability. 
Evidence is accumulating that is consistent with this view; however, as yet, there is very little direct evidence, because no single study has modified memory stability using fear or reward, measured that change, and examined the subsequent effects, if any, in transfer. Such studies are not without challenges. For instance, the shift in processing from a goalbased to a habit-based strategy that fear promotes could couple performance more tightly to a particular context and thus impair transfer, regardless of changes in memory stability [57,62,63]. Another approach to testing the link between memory instability and transfer is to better understand the conditions necessary for transfer. Fear can only transfer from one context to another neutral context when the memories for each context are formed within a few hours of one another [7]. The transfer in these circumstances could be related to memory instability. Memories are unstable for a few hours after their formation, the same time interval during which transfer is possible. This suggests a link between memory instability and subsequent transfer. Certainly, the instability of a memory for a sequence is related to the subsequent transfer to a different type of sequence (actions versus words; [5]). However, these are very different types of transfer-the latter case requires identifying and using the common sequential attributes to transfer performance, whereas in the former, there is no common element; instead, fear is being misattributed to a neutral context. Despite these differences, both may be dependent upon memory instability to enable transfer; alternatively, these differences may translate onto different mechanisms (see Box 1). Transfer is also dependent upon the nature of the memory representation. When learnt within hours of one another, memories have overlapping representations [6,7]. Elegant work has shown that these overlapping representations are critical for transfer [7]. Manipulations that modify transfer would therefore be predicted to alter this memory representation and stability. For example, prolonged practice may diminish transfer and increase memory stability by promoting the creation of nonoverlapping or independent representations. The rise in GABA during prolonged practice may be responsible for diminishing the excitability within a shared overlapping representation and split it into independent representations [20]. Increasing excitability between independent representations can rescue the capacity to transfer fear from one context to another [7]. Manipulating memory representations may provide a way to test for a mechanistic link between instability and transfer. during sleep, and this replay is correlated with the subsequent sleep-dependent performance improvements [13,68]. These reactivations may lead to the memory becoming unstable [69]. When retrieved during wakefulness, a memory is rendered unstable, vulnerable to interference, just as the memory had been soon after its initial formation. Similarly, the reactivation of a memory during sleep may also make it unstable. Elegant work has demonstrated that memories can be artificially reactivated during sleep [70,71]. For instance, a memory formed while a sensory cue, such as an odour, is presented can be reactivated when that same sensory cue is represented during sleep [70]. The same pattern of functional activation found during learning is found again when the sensory cue is represented during sleep. 
Yet, when a specific memory is reactivated during sleep, it remains invulnerable to interference [72]. Interference from further learning is only one measure of memory instability, and a memory may be unstable, despite not being susceptible to interference. Changes in brain state during sleep may make a memory, even an unstable memory, invulnerable to interference. During large parts of sleep, the effective functional connectivity of the human brain is markedly reduced. For instance, the waveform evoked by applying TMS to the motor cortex travels a substantially shorter distance when applied during slow-wave sleep than during wakefulness [73,74]. Along with this decrease in functional connectivity, there is a change in brain organization. Specifically, the brain becomes more modular. Functionally connected circuits remain, but these circuits are smaller and more constrained, lacking the widespread connections present during wakefulness [75]. Thus, memories may well become unstable over sleep but remain protected from interference because of the poor functional connection amongst brain areas. Unstable memories are constrained within functionally discrete, independent circuits, and therefore, interference between memories is minimized. Thus, sleep may provide an ideal environment for memories to become unstable because they are protected from interference. However, this environment may also restrict the generalization that is possible during sleep. With connectivity limited during sleep, generalization may only occur between those memories represented within restricted circuits. This may mean that generalization can only occur across memories with certain properties such as having the same or similar content. By contrast, generalization can occur between memories with different content during wakefulness (i.e., between actions and words; [5]). Yet, transient restorations in long-range connectivity associated, for example, with sleep spindles may be sufficient to allow communication across brain areas to support generalization across diverse memories [76][77][78]. Alternatively, interludes of rapid eye movement (REM) sleep may be sufficient to restore connectivity when, or if, coordinated with episodes of memory reactivation that predominately occur when connectivity is reduced during slow-wave sleep [64]. Sleep may restrict the damaging effects of interference upon memories by having generally limited connectivity whilst simultaneously having brief restorations in connectivity to allow communication and potentially generalization across memories. Conclusions and beyond Broadly, there appear to be at least 2 contrasting perspectives on memory instability, which are unlikely to be mutually exclusive. One perspective sees instability as arising from the unique requirements of biology. For instance, it takes time to synthesize the protein necessary to stabilize a memory, and thus, an interval of instability follows. From this perspective, memory instability is simply the inevitable consequence of having algorithmic processes implemented within a biological substrate. Implement those same processes within a different substrate, such as silicon, and memory instability may well vanish without any loss of memory function. In an alternative perspective, memory instability-and potentially the offline processing of memories more broadly-may make an indispensable contribution to the algorithms necessary for memory function. 
Instability may provide an opportunity for a particular form of computation or algorithm that is critical for memory function. Instability may be critical to uncovering patterns common across different memories. It provides an opportunity for a comparison between different memories, allowing common features to be identified, extracted, and exploited. Once stabilized, a memory becomes invulnerable to interference; yet, it may also lose its ability to interact with other memories, and thus, features common to the memories can no longer be identified. Consistent with this idea, the transfer of performance from one task to another is most prominent in those circumstances that favour memory instability [11,[14][15][16][17][18]20]. Yet, this is more than simply a circumstantial link. The transfer of performance across related sequential tasks is correlated with memory instability [5]. Stabilizing those memories, either through subtle changes to the tasks or inserting a time interval to allow consolidation to take place, prevents transfer [5]. Similarly, prolonged practice stabilizes perceptual memories and is associated with decreased transfer [14,20,21]. Conversely, reversing those neurochemical changes associated with memory stabilization enhances transfer [20,26-28]. Instability then may be critical for generalization. Instability also explains the trade-off between the loss of detailed knowledge and the creation of generalizable knowledge found in behavioural and functional imaging work (Fig 1; [5,8]). This relationship between instability and generalization may remain even in other brain states. Sleep has been widely associated with promoting generalization, and it is during sleep that the patterns of neural activity present during memory formation are replayed. Memories are rendered unstable through replay yet are protected from interference because of the changes in brain organization and connectivity that take place during sleep. Overall, from across a diverse array of studies, a consistent link emerges, connecting memory instability to generalization.
2018-03-26T13:45:28.420Z
2018-03-01T00:00:00.000
{ "year": 2018, "sha1": "510b7f7cd6aaf2ece4e9dd10b8010431c4609907", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosbiology/article/file?id=10.1371/journal.pbio.2004633&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cefb517287522e92070712991d315f30d44cc446", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
210280726
pes2o/s2orc
v3-fos-license
Assessment of the faecal sludge management practices in households of a sub-Saharan Africa urban area and the health risks associated: the case study of Yaoundé, Cameroon

The aim of this study was to assess on-site sanitation facilities in Yaounde on the basis of the eight proposed indicators of hygienic safety, sustainability and functionality of the Millennium Development Goals (MDG) target 7 definitions of improved sanitation. Information was collected on the design of toilet facilities, their management and functionality through a semi-structured interview and observations of 602 randomly selected toilet facilities in 22 different urban settlements of Yaounde. In addition, information about the education and socioeconomic status of householders, the management and functionality of toilet facilities and the health status of the users was collected. The results revealed several methods of excreta disposal and noted that approximately 3% of households had no latrine and practiced open defecation. They also showed that 79% of latrines were covered at the top with concrete slabs, while 69% had lined pits below the floor. Households that lacked proper toilet facilities frequently suffered from orally transmitted sanitation-related diseases, with higher prevalence recorded in rainy seasons. This study recommends improvement in the management of sanitation facilities in some settlements of Yaounde in order to guarantee adequate sanitation in a healthy environment. © 2019 International Formulae Group. All rights reserved

INTRODUCTION

The purpose of a sanitation system is to protect and promote human health and environmental conditions (Stenström et al., 2011; Koné et al., 2016; Joshua et al., 2017; Cheng et al., 2018). Provision of adequate sanitation facilities is a basic underlying factor for good human health, economic development and well-being (Verbyla et al., 2013; Tilley et al., 2014; Brown et al., 2015). In developing countries, urban areas are estimated to have higher sanitation coverage than rural areas (Carlton et al., 2012), but statistics have often not come out clearly to show the severity and complexity of the sanitation challenges affecting towns and the urban poor. It was estimated that in 2010, 2.6 billion individuals did not have access to improved sanitation. The Sustainable Development Goals (SDGs) adopted by the UN General Assembly in 2015 aim to substantially improve water and sanitation globally, and include two specific targets within Goal 6 for drinking water, sanitation and hygiene (WaSH): the first target aims at achieving universal and equitable access to safe and affordable drinking water for all by 2030, while the second aims at achieving access to adequate, equitable sanitation and hygiene for all and ending open defecation within the same period. Progress towards the Millennium Development Goals (MDGs), which preceded the SDGs, was monitored globally based on the use of improved drinking water supplies and sanitation facilities. The SDGs aim at higher water and sanitation service provision and are being monitored using indicators which include elements of service quality that were not captured by the MDG indicators (Wolf et al., 2018). The generalized approach to defining sanitation access for a single household is to assess the use of an 'improved' toilet technology (Jenkins et al., 2014), which may be less appropriate for rapidly growing cities where on-site sanitation technologies, such as pit latrines and septic tanks, are still used, despite the urban population growth.
In such situations, sanitation facilities are likely to be emptied rather than moved (Tilley et al., 2014). Hence the safety of sanitation systems depends on safe faecal sludge capture and containment (i.e. the design of the facility),provisions for safe faecal sludge management, including emptying, removal, treatment and disposal, or reuse (Taweesan et al., 2015). In low-and middle-income countries, over 70% of urban dwellers use mainly on-site sanitation systems such as unsewered latrines and septic tanks for excreta and wastewater disposal (Klingel et al., 2002;Dodane et al., 2012;Ngoutane Pare et al., 2012). Faecal sludge contains extremely high pathogen concentrations, responsible for the elevated endemic rate of excreta-related diseases, especially among children (Faechem et al., 1983;Stenström et al., 2011). In areas where access to sustainable sanitation, (i.e. where safe storage, collection, treatment and safe disposal/reuse of faeces and urine) is inadequate or poor, parasites spread in the natural environment (Tilley at al., 2014;Soh Kengne et al., 2014). Most cities in low-and middle-income countries, which can be categorized as -latrine-based cities‖, rely on such infrastructure for excreta disposal (Jeuland et al., 2004). Ongoing latrine provision programs, aiming at achieving the SDGs sanitation target, still lack service provision arrangements for collection/emptying, haulage, safe disposal, reuse or treatment of faecal sludge produced by on-site sanitation infrastructures. At any given time, approximately half of the urban populations in Africa suffer from diseases associated with poor sanitation, hygiene and water (De Silva et al., 2011;Aryal et al., 2012;Pujari et al. 2012;Niwagaba et al., 2014;Guiteras et al., 2015). Systematic reviews suggest that improved sanitation can significantly reduce rates of diarrhoeal diseases (Pujari et al., 2012). On-site sanitation systems for excreta collection are widespread in Yaounde, with the predominance of pit latrines (>59%) (Kengne et al., 2009). The city does not have any faecal sludge treatment plant and it was estimated by Berteigne (2012) that about 700-1,300 m 3 of faecal sludge is discharged weekly into the environment of peri-urban areas. This amount has constantly increased due to population increase of the city particularly in urban slums. This paper presents a detailed assessment of household excreta disposal facilities in some settlements of Yaounde, the capital city of Cameroon. This paper proposes and applies a set of indicators to characterize and assess the hygienic safety and sustained functionality of existing latrines, including locally available pit emptying services and disposal methods, based on household survey data. It also assesses the health risks associated with the current sanitation technologies that make up the investigated excreta disposal. Study area This study was carried out in the city of Yaounde (Cameroon). Yaounde is an urban area of approximately 256 km 2 and is located between about 700-800 m above sea level. The town had an estimated population of 2,4 million inhabitants in 2011 (BUCREP, 2012). The City faces overpopulation like many other urban cities in developing countries with a density of 14,000 inhabitants/km 2 . Parrot et al. (2009) mentioned that more than half (51 %) of the capital consists of slums with no pipeborne water supply and no centralized sanitation and waste disposal infrastructure. 
The population has to rely mainly on shallow dug wells and springs for drinking water sources (Graf et al., 2010). Yaounde has an equatorial climate with four seasons comprising two dry seasons (December-February, July-August) and two rainy seasons (March-June, September-November). The average annual rainfall is 1,600 mm with an average temperature of 23 °C (Lienou et al., 2008). On-site sanitation systems for excreta collection are widespread, with a predominance of pit latrines (> 59%). The city has no faecal sludge (FS) treatment station, and it was estimated that about 700 to 1,300 m³ of FS are discharged weekly into the environment of peri-urban areas (Berteigne, 2012).

Assessment of the existing faecal sludge management practices at household level

To assess the faecal sludge management practices in households, a heterogeneous stratified sampling method was applied in different urban settlements previously identified in the study area. A total of 22 settlements were selectively chosen in the study area according to the heterogeneity of the urban settlements of Yaounde, represented by peri-urban interfaces, planned urban areas, informal settlements, and middle and high income areas, according to the methodology described by Lüthi and Parkinson (2011). For these authors, every city is a patchwork of different domains and physical environments, each of which presents its own challenges and opportunities. The distribution of the quarters investigated is shown in Table 1. The size "n" of the households to investigate, as a function of the total population "N", was estimated using the margin of error formula for a defined population (Bartlett et al., 2001), with "N" the size of the total population of Yaounde (estimated at about 2.4 million inhabitants; BUCREP, 2012), "n" the sample size and "e" the margin of error, set to 5% in the case of this study. The sample size obtained using this formula was 402 households. To limit errors and to increase the viability of the results, the sample size was adjusted to 602 households, which were chosen as the sampling and analysis unit, while on-site sanitation facilities were organized and managed as property according to the sampling methodology described by Jenkins et al. (2014).
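The sample-size equation itself did not survive extraction above. A common "margin of error" formula for a defined (finite) population, and one consistent with the 402 households reported, is n = N / (1 + N e²); whether this is exactly the expression used by the authors is an assumption. The short Python sketch below only illustrates that assumed formula with the figures given in the text (N ≈ 2.4 million, e = 5%).

```python
import math

def sample_size(population: int, margin_of_error: float) -> int:
    """Finite-population sample size, n = N / (1 + N * e**2).

    Assumption: the 'margin of error formula for a defined population' cited in
    the text is of this form; the exact expression used by the authors is not
    reproduced in the extracted text.
    """
    return math.ceil(population / (1 + population * margin_of_error ** 2))

N = 2_400_000  # estimated population of Yaounde (BUCREP, 2012)
e = 0.05       # margin of error set to 5% in the study

print(sample_size(N, e))  # 400 -- close to the 402 households reported,
                          # which the authors then enlarged to 602 to limit errors
```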
Household survey

The survey consisted of a semi-structured questionnaire administered verbally, in French or English, to 602 households, addressed to the home owners or, in the absence of the actual owners, to the tenants who had occupied the homes for the longest time. The survey was designed to characterize and describe the sanitation facilities (below- and above-ground design), to determine the age of the facilities, to document latrine use, and to assess operating and maintenance practices. The survey also assessed facility design, emptying preferences and the perceptions of sanitation conditions and problems. At each property, GPS coordinates and respondent socio-economic characteristics (including sex, educational level, number of people living in the household and reported monthly income) were collected. This study draws on a sub-set of the survey data related to the facility design, management and functionality, and the safety and sustainability assessment of on-site sanitation systems in the studied settlements.

Toilet facilities observation

"Flushing out" is a method of partial emptying of pits which involves inserting a drain or an opening into an exposed or elevated portion of the latrine pit wall, below the slab, to release faecal sludge into the open environment to be washed away by storm water during rains. In some cases, the rising of water tables during rain events and excess flooding may increase pit sludge levels to the level of the opening, where the sludge is divulged or "vomited" out. In the light of growing concerns over these unsanitary pit-emptying practices, surveyors were trained to look for and record the presence of a "flushing out" pit waste drain pipe during structured on-site observations of each facility. The functional state of the facilities in terms of slab structural conditions, fullness of the waste pit/tank, and the aspect of the superstructure was observed. Pit fullness was judged by observing the height of the vacuum space between the slab or cover and the surface of the sludge. To understand barriers to safe emptying, the physical accessibility of the property to small cars or tanker vehicles was also observed.

Indicators of improved, safe and sustainable sanitation

The Joint Monitoring Program (JMP) indicators, developed for the Millennium Development Goals with respect to safe and sustainable sanitation (Table 2), were used to assess the safety and sustainability of on-site sanitation systems in the study area, according to the methodology adopted by Jenkins et al. (2014). The first three indicators (1-3) assess the technical design of the facility, the next two indicators (4 and 5) assess availability of and access to safe faecal waste management services, and the last three indicators (6, 7 and 8) assess the functionality of the facility at the time of the survey.

Health risks assessment related to the excreta disposal facilities

Safe disposal of excreta is critical because the agents of a large number of infectious diseases are passed from the body into the excreta (Stenström et al., 2011). These excreted infectious agents fall into four main groups: viruses, bacteria, protozoa, and worms (helminths). Excreta, unless properly isolated, can also provide a breeding ground for insects, which may act as either direct or indirect transmitters of disease. Therefore, the health risks related to the different on-site sanitation technologies were additionally assessed by investigating the health status of the users. The health risks assessment related to excreta disposal collected information about the prevalence and the diversity of excreta-related diseases (cholera, amoebiasis, typhoid fever and helminthiasis) that occurred in the past six months in the households investigated and the climatic season of their occurrence.

Statistical analysis

All the data from the questionnaires were entered manually using Microsoft Excel 2013. Descriptive statistics tools such as percentages and the Chi-square test were used to establish associations between categorical variables. Missing values (associated with the variation of denominators) were not taken into account for data interpretation. The distribution of on-site sanitation as a function of the quarters investigated was assessed. Monetary values were adjusted to 2018 values and presented in US Dollars (1 US Dollar = 500 Fcfa).

Demography of respondents

The demographic profiles of the investigated populations are presented in Table 3. The table shows the variation (frequency) of the gender of the householder, the age group, the educational level, the monthly income as well as the number of persons living in the household. Generally, the households were dominated by male respondents (77.7%), mostly between the ages of 46 and 55 years (26.4%), the majority having a secondary level of education (45.2%).
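The Results that follow report a series of 2-sided Pearson Chi-square tests of independence on such categorical variables (for example, X² = 79.34, p < 0.001 for education level versus sanitation type). A minimal sketch of this kind of test is given below; the contingency counts are invented purely for illustration and are not the study's data, and scipy is assumed to be available.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative (invented) counts of households, cross-tabulated as
# education level of the household head (rows) x sanitation type (columns).
# These numbers are NOT the survey data; they only show the shape of the test.
counts = np.array([
    # septic tank, traditional pit, VIP, piped latrine, open defecation
    [ 10,  35, 1, 8, 5],   # no formal schooling
    [ 50,  80, 1, 8, 7],   # primary
    [115, 150, 2, 3, 4],   # secondary
    [ 70,  40, 3, 1, 1],   # higher education
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
# A small p-value indicates an association between education level and the
# type of on-site sanitation system, which is the pattern the paper reports.
```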
With regard to monthly incomes, only 13.5% of respondents had income above 400 USD. Number of persons per households varied from one (1) to six (6) and above. The education level of some respondents was however low with as much as 24.6% of respondent attaining only primary school education and 9.5% without any formal education. This situation may affect the ability of most respondents to understand critical issues regarding the management of on-site sanitation systems. These observations reflect the population and housing census figures of Cameroon, 2015 (Table 3). Typology of excreta disposal The results revealed that there were differences in on-site sanitation system from one urban setting to the other within the study area ( Figure 1). Assessment of the variations revealed that traditional pit latrines and septic tanks were the sanitation systems used by most households. The traditional pit latrines were observed in all the sites but most prevalent in the informal settlements and least in the middle income and high income areas and peri-urban interface. The septic tanks on the other hand were mostly prevalent in planned and high income areas. The -piped equipped latrines‖ systems, which were observed principally in lowland areas, had the top of the pit connected to a PVC pipe outlet, thus allowing outflow of faecal sludge in the event of flooding. It was noted that 0.5% of households in informal settlement adopted the -piped equipped latrine‖ with 2.5% middle and high income areas also adopting the system. A further 0.5% of households in planned and high income urban areas had used this system. One of the main results of this study is that some households did not have any on-site sanitation technologies and thereby practicing open defecation. This practice may occur mostly in households from informal settlement. This practice, which was mostly observed in the informal settlement enclaves constitute a serious environmental risk as it exposes surface and groundwater resources to faecal contamination. About 3% of households that practiced open defecation were mostly located in the lowland area of the city. It is important to mention that open defecation was not considered as a type of sanitation system but an unapproved option for households which lacked toilet facilities. Relation between the on-site sanitation systems and the level of education of households To assess the relationship between educational level of respondents and the distribution of the current sanitation systems in households, the 2-sided Pearson Chi-Square correlation test was employed. The results showed a significant correlation between the educational level of householders and the type of on-site sanitation systems in place in the households investigated (X 2 =79.34 and p<0.001). The technologies septic tanks, traditional pit latrines and the VIP latrines which can be assimilated to improve sanitation systems were widely spread in households where the head attended higher education, secondary school and primary school ( Figure 2). The distribution of septic tank systems which is the most improved sanitation technologies found in the study area were 11.71%, 1.52%, 8.65% and 19.35% respectively for households where the respondents attended higher education, never went to school, attended primary school and secondary school. The traditional latrines were mostly represented in households were the head attended secondary school (25.29%), primary school (13.75 %)never went to school (6.28%) and higher education (6.62%). 
According to the VIP latrines, the frequencies of the distribution of these on-site sanitation technologies in the households are 0.50%, 0.17% and 0.34% respectively for households which the head attended higher education, primary school and primary school. Looking at the piped equipped latrines, the households with these types of sanitation technologies were mostly attended the primary school (1.35 %), the secondary school (0.50%) and who never attended school (1.52%). The open defecation practices were found at all educational level and it is mostly represented in households who attended primary school (1.18%) and secondary school (0.67%). Relation between the on-site sanitation systems and the monthly income of households To assess the effect of monthly income of households on the distribution of the current defecation practices at the household level, the 2-side Pearson Chi-Square correlation test was employed. The results showed significant effects on the distribution of sanitation systems in the households investigated (X 2 =83.501, p<0.001, n=364). In general, the unimproved sanitation practices (i.e. the prevalence of open defecation practice and the distribution of pipe latrines in households) were found to be widespread in households with low income (under 100 USD, 100 to 200 USD, 200 to 300 USD) while the septic tank systems which represent an improved sanitation technologies were mostly represented in households with higher incomes (300 to 400 USD and over 400 USD) (Figure 3). Latrine characteristics Number of persons using the facilities An important variation in the number of persons using the latrines was recorded. This number of persons varied depending of the investigated quarters and ranged between one to more than six people ( Figure 4). However, latrines visited by more than 6 persons were mostly represented in the study area (32.39%, n=559) while the latrines used by one person were less represented (14.45%, n=559). The number of persons using the latrines in the households was found to display with the number of people living in the households. This observation could be explained by the fact that people in the study area did not share their latrines with other external users as it is contrary observed in other countries where the number of user of latrines may be sometimes more than the number of people living in the household. Building materials Based on the field observations, several building materials were used by households to build their latrines ( Figure 5). Most of the latrines were built with concrete (83.98%, n=586) while other where built with materials like metal sheets (4.72%, n=586), beaten earth (2.27%, n=586), wood (6.75%, n=586) and plastic materials (2.28%, n=586). Assessment of the distribution of latrine building materials in function of the level of education and monthly income of households using the 2-sided Pearson Chi Square test revealed 56.7% and 63.9% of variations respectively. The latrine building materials used by the surveyed populations strongly depended on the educational level of householders (X 2 = 64.80; p=0.004) as well as their financial incomes (X 2 = 41.07; p<0.001) ( Figure 6 and Figure 7). It can be concluded that the sanitation technologies used in the surveyed populations strongly depended on the educational level and the monthly income of the population. Desludging periods When the pits of latrines are full, several actions are carried out by the latrine users ( Figure 8). 
Most of respondents mentioned the emptying of their pits (74.10%, n=583), 21.10% mentioned the addition of chemical substances (caustic soda, wood ache) and only 4.80% mentioned the construction of another latrine. The later cited options where recorded in households which did not face space problems. Within the facilities emptied, 50.99% (n=557) of respondents mentioned that they usually empty the pit of their latrines within a period not less than two times/year, 12.39% (n=557) empty their latrines two time/year, 18.31% (n=557) with a period of three times/year, 10.77% (n=557) of respondents empty their latrine four times/year, and 7.54% (n=557) within a period of more than four times/year. To assess the relationship between the types of on-site sanitation systems and the variation of desludging periods within the surveyed population, the cross correlation using the 2-sided Pearson Chi-Square tests was employed. The test revealed 53.3% of the variability of emptying periods within the surveyed population with a strong significant effects of the types of sanitation systems (X 2 = 371.30; p<0.001) (Figure 9). Looking at the septic tank facilities, the desludging frequencies recorded in the study area were 49.79%, 14.97%, 23.07%, 6.48% and 5.67% respectively for the desludging frequencies less than two times per year, three times per year, four times per year and more than four times per year (n=247) (Figure 10). For the traditional pit latrines, the frequencies of the desludging periods recorded were in order of 56.09%, 4.18%, 14.98%, 13.58% and 4.18% respectively for the desludging frequencies less than two times per year, three times per year, four times per year and more than four times per year (n=287). According to the ‗piped equipped latrine', the desludging period frequencies recorded were 84.21% and 15.79% respectively four times per year and more than four times per year (n=19). The variation recorded in the distribution of the desludging frequency may be due to the variation of the number of latrine users, the volume of the pits, the available space in the household as well as the socioeconomic status of households. Comfort of use On the basis of the survey information and field observations collected, the significant variation of respondent's opinion looking at the problems related to the comfort of toilet facilities was observed (X 2 =204.408, p<0.001, n=368). The spread of bad odours was observed in most of the toilets investigated with a higher prevalence recorded in the ‗piped equipped latrines' (> 80%, n=20) ( Figure 11). The case of insecurity was observed in households practicing open defecation (71.42%, n=14) and those who used the traditional pit latrines (11.25%, n=244). Exposure was observed in some of the septic tanks (15.78%, n=90) and traditional pit latrines (17.21%, n=244). Some pictures of unsafe facilities with discomfort are presented in Figure 12. Indicators of improved, safe and sustainable sanitation systems To classify the on-site sanitation systems investigated as improved, hygienically safe, sustainable and functioning, the WHO/UNICEF JMP indicators was applied. Looking at the facility design, 85.36% (n=568) of latrines with slab (indicator 1) build with concrete, brick, rock or other hard material ( Figure 13). 
In 68.79% (n=568) of the facilities investigated, wastes were contained into the pit/tanks of latrines without overflow to the surrounding areas (indicator 2) and 62.85% (n=568) of latrines had below ground pit/tank lined allowing the safe waste emptying and the protection of shallow groundwater (indicator 3). Looking the waste management practice, 58.97% (n=568) of households were accessible by the vacuum tanker services for the extraction of faecal sludge in the pit of latrines (indicator 4), 52.62% (n=568) of pits were accessible to hygienic emptying service vehicles (tanker or tug) (indicator 5). The functional conditions of the toilet facilities were also investigated and 90.23% (n=568) of the latrines investigated was not completely full (indicator 7), 66.85% (n=568) had half wall/door (indicator 8) and only 28.75% (n=568) of the facility has a cabin containing full height walls, a full height door and a roof (indicator 8a). The variation of the WHO/UNICEF JMP indicators observed in the study area may be explained by the differences observed in the monthly incomes, level of education of householders as well as the willingness of householders to pay for access to an improve and sustainable sanitation system. Sanitary and environmental risks associated with the current defecation practices in surveyed households Based on the survey results, 61.30% (n=598) of the investigated population have suffered from several cases of faecal-oral transmitted diseases (amoebiasis, cholera, helminthiasis, thyphoid fever) within the past six months (Figure 14) with the difference of prevalence between current defection practices in households (Figure 15). It appears that the faecal-oral transmitted diseases were less prevalent in households with septic tanks as toilet facility (only 31.07% of diseases prevalence recorded) in comparison to the households with traditional pit latrines as toilet facility 58.84% (n=311) of diseases prevalence recorded. The VIP latrines users did not mention the prevalence of faecal-oral transmitted diseases contrary to the households practicing open defection. The significant correlation between the prevalence of faecal-oral transmitted diseases and the type of on-site sanitation was found (X 2 = 163.03, p<0.001, n=598) as well as the distribution of the type of faecal-oral transmitted diseases recorded in the households (60% of variation, X 2 = 170.29, p<0.002, n=293) ( Figure 16). A maximum rate of prevalence was recorded in households using traditional latrines as toilet facility and no disease was recorded in households using VIP latrines as toilet facility. According to the variation of the type of faecal-oral transmitted diseases recorded in relation to the toilet facilities used in households, typhoid fever were the most prevalent diseases. The prevalence distributions recorded were in the order of 38.58%, 26.69% and 55% respectively for traditional pit latrine, septic tank and piped equipped latrine. The prevalence recorded for amoebiasis which is the second case of disease recorded in the study area after the typhoid fever were in the order of 11.57%, 1.59%, 30% and 35.71% respectively for traditional pit latrine, septic tank, piped equipped latrine and open defecation. We mentioned that open defecation in this study is considered as current defecation practices found in households who did not have on-site sanitation technology. 
The prevalence of helminthiasis in the studied households were in the order of 6.10%, 2.39%, 10% and 10% respectively for the traditional pit latrine, septic tank, piped equipped latrine and open defecation. For the cholera disease, the prevalence recorded were in the order of 2.25%, 0.32% and 0.32% respectively for the traditional pit latrine, septic tank and open defecation. The prevalence of the faecal-oral transmitted diseases in the study area may be due to the weak maintenance as well as the sanitation and hygiene conditions during the management of the toilet facilities in households. Typology of on-site sanitation The finding of this study revealed that there were differences in on-site sanitation system from one urban setting to the other within the study area. Similar observation was made by Letah Nzouebet et al. (2016) working on the prevalence and diversity of intestinal helminth eggs in pit latrine sludge of a tropical urban area. The authors pointed out that several sanitation technologies in the study area were constituted of septic tanks, traditional pit latrines, ventilated improved pit latrines and piped equipped latrines. Indeed, the results of Cheng et al. (2018) working on the toilet revolution in China revealed several methods of improved sanitation technologies which are represented by septic tanks, doublevault funnel latrine, double pit alternate type, biogas-linked toilet, urine-faeces diversion latrine and integrated flushing latrine. In addition, the finding Stenström et al. (2011) who assessed microbial exposure and health assessments in sanitation technologies and systems in tropical area found that open defecation is widespread in developing countries and is the most significant environmental factor involved in the transmission of sanitation-related diseases. Relationship between the on-site sanitation systems and the level of education of households This study showed a significant correlation between the educational level of householders and the type of on-site sanitation systems in place in the households investigated. Similar findings was obtained by Jenkins et al. (2014) who revealed the strong effects of educational level of households and the type of on-site sanitation systems when assessing excreta disposal in Tanzania. According to these authors, well-educated households often know more information on adequate emptying frequency, choosing qualified mechanical emptying services and ensuring environmental and health safety for the surrounding population. Additionally, Brown et al. (2015) pointed the difficulty to achieve a change in excreta disposal practices as they are part of the basic behavioural pattern of a community. As unsafe practices, the open defection found in the study area constitutes a significant health risk to populations through contamination of ground and surface water resources. Indeed, Bartram and Cairncross (2010) stated that about 2.4 million deaths (4.2% of all deaths) could be prevented annually if everyone practised appropriate hygiene and had good and reliable sanitation technologies at household levels. Relationship between the on-site sanitation systems and monthly income of households This study showed significant effects of the monthly income on the distribution of on-site sanitation technologies in the households investigated. It revealed that the good sanitation practices in the study area may strongly depend on the household's income. 
This is the main reason why the number of latrines observed in the low income households were half constructed and do not have permanent roofs and doors in some cases. The strong correlation between the household incomes and the type of sanitation systems used in households was addressed in the literature (Bakare et al., 2015). The authors demonstrated the strong implication of the inadequate financial resources on the weak coverage of improve sanitation technologies in Tanzania and Senegal respectively. Latrine characteristics The variation of the number of persons using the on-site sanitation systems was recorded in this study and this number is proportional to the number of persons living in the households. This current situation found in the study area could be explained by the practice of non-sharing latrines. Indeed, Jenkins et al. (2014) by assessing the sanitation access in rapidly expanding informal settlement in Tanzania revealed that the practices of sharing latrine is common at mixed landlord-tenant and tenant only residences compared with family occupied residences were this practices is very low or absent. These authors pointed out the economic factors, like the sharing of latrine involved several contributions from multiple users in building, maintenance and operating a shared facility. According to the latrine building materials, several building materials were used by households to build their latrines with most of the latrines built in concrete. Similar observations were done by Bakare et al. (2012) assessing the excreta disposal in Dar-es-Salaam (Tanzania). The authors mentioned the variation of the latrine building materials to be strongly affected by educational level coupled to the financial resources. For these authors, low education is the main reasons for low-paying jobs performed by householders with an implication on the investment into good excreta disposal facilities. Looking at the desludging periods, several variation were recorded and could be due to the variation of the number of latrine users, the volume of the pits, the available space in household as well as the socioeconomic status of households. Indeed, the findings of Nakagiri et al. (2016) working on the characterisation of pit latrines sludge in urban areas of Sub-Saharan Africa mentioned the high variation in the pit latrine depths and the number of latrine users as main factor which may affect the pit filling rate. Additionally, Bakare et al. (2012) demonstrated the high-water table to be the main causes of the pit's filling rate. The pipe equipped latrines identified during this study and which is usually located near the water table showed a higher desludging period of more than four times per year. According to the literature, the number of latrine users could strongly affect the rate of faecal sludge accumulation in the pits of latrines as the excreta production rate per person was estimated at about 0.12 to 0.40 litre of faeces and 0.6 to 1.5 litre of urine per day (Bakare et al., 2012). In addition, the work of Cheng et al. (2018) pointed out about a million tons of faecal sludge collected yearly in public toilet in urban area of China. Furthermore the study done by Gning et al. (2017) in Dakar pointed out the absence of regulations by the Senegal Government in term of the cost of desludging paid by households for having access to the mechanical emptying service. 
This could also be one of main reasons of the setbacks in the context of the study area since most of the investigated households did not have access to a clean safe hygienic mechanical emptying service. Some unsafe latrine facilities with discomfort were found in the study area. This discomfort is the spread of bad odours, insecurity and exposure. The finding of this study corroborate with those of Bakare et al. (2012). The authors pointed out the prevalence of unsafe latrine technologies in South Africa. Additionally, the inquisitive eyes that may occur during open defecation, as reported in this study is found to be particularly disadvantageous to women who are prone to sexual abuse while finding places to ease themselves. The variation of the WHO/UNICEF JMP indicators was observed in this study. This variation may be due to the differences observed in the monthly incomes, level of education of householders as well as the willingness of householders to pay for access to an improve and sustainable sanitation system. Indeed, Taweesan et al. (2015) by assessing accelerating uptake of inhouse toilets of a rural community in Ghana pointed out the financial limitation to constructing improved in-house latrines. In addition, the literature pointed out the variation of the JMP indicators to be affected by the level of education (Jenkins et al., 2014;Brown et al., 2015). Sanitary and environmental risks associated with the current defecation practices in the surveyed urban areas The current defecation practices in households were found to be associated to sanitary and environmental risks. Indeed, the empirical evidence provided by Tumwebaze et al. (2013) suggests that toilets facilities end up in a deteriorated state and pose health risks since users fail to adequately maintain them. Additionally, Taweesan et al. (2015) revealed adequate sanitation as one of the fundamental key factors for good health and socioeconomic development. Also, the improvement of sanitation technologies can substantially reduce the rate of morbidity and severity of various diseases affecting the quality of life particularly for children (Mara et al., 2010;Stenström et al., 2011;Cairncross et al., 2010). Furthermore, lack of hand washing may be the cause of the prevalence of faecal-oral transmitted diseases found in the study area. According to the findings of Wolf et al. (2018) working on the impact of drinking water, sanitation and hand washing with soap on childhood diarrhoeal disease, the authors reported the association between improved household sanitation facilities and diarrhoea compared with unimproved sanitation and two observations respectively of sewer connection compared with unimproved and improved sanitation facilities. For Mathew et al. (2017) working on the systematic review and meta-analysis of the impact of sanitation on infectious diseases and nutritional status showed the positive impacts of sanitation on the aspects of health. However, the role of the health sector in improving sanitation is fundamental for the promotion of sanitation in environmental health planning at the local and national level. Thus, behaviours should be changed to increase householders' demand for sustained use of excreta disposal facility. According to Tumwebaze et al. 
(2013), the enforcement role of the health sector is particularly important in urban areas, where high living density increases the risks of faecal contamination of the environment and where one person's lack of sanitation can affect the health of many other people. In fact, Koné et al. (2016) pointed out the necessity of the safe treatment of faecal sludge, coming mainly from on-site sanitation technologies, in order to minimize the risk of infections along the faecal sludge management chain.

Conclusion

The objective of this study was to present a detailed assessment of excreta disposal facilities in 602 randomly selected households in the city of Yaounde (Cameroon). The study aimed at proposing and applying a set of indicators to characterize and assess the hygienic safety and sustained functioning of the existing facilities, including locally available excreta disposal services, according to JMP definitions, and at assessing the health status of latrine users. The results showed that the several modes and characteristics of individual sanitation are closely related to the standard of the households as well as to household incomes and the level of education. Also, the observed heterogeneity of sanitation systems is related to the standard of the surveyed households, the monthly income and the level of education. The latrine facilities differed in terms of number of users, emptying modes and frequency, as well as building material. The respondents using inadequate toilet facilities suffered from faecal-oral transmitted diseases, with higher prevalence in rainy seasons. The findings of this study may have important implications for defining what constitutes 'improved' sanitation for poor populations living in unplanned informal settlements.
2019-11-14T17:08:51.881Z
2019-11-11T00:00:00.000
{ "year": 2019, "sha1": "dfbb7fbcc9da4a4ff41c719ba311013bb3066ca3", "oa_license": "CCBYNCSA", "oa_url": "https://www.ajol.info/index.php/ijbcs/article/download/191038/180214", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "dea5fb6fd94e6bfcfb55bdc5b6e7782b3b157fc5", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Geography" ] }
53764553
pes2o/s2orc
v3-fos-license
Attitudes of prehospital providers on transport decision-making in the management of patients with a suicide attempt refusing care: A survey based on the Mental Health Care Act of 2002 Background Given the frequency of suicidal patients making attempts prior to a completed suicide, emergency access to mental health care services could lead to significant reduction in morbidity and mortality for these patients. Aim To describe the attitudes of prehospital providers and describe transport decision-making around the management of patients with a suicide attempt. Setting Cape Town Metropole. Methods A cross-sectional, vignette-based survey was used to collect data related to training and knowledge of the Mental Health Care Act, prehospital transport decision-making and patient management. Results Patients with less dramatic suicidal history were more likely to be discharged on scene. Few respondents reported the use of formal suicide evaluation tools to aid their decision. Respondents displayed negative attitudes towards suicidal patients. Some respondents reported returning to find a suicidal patient dead, while others reported patient attempts at suicide when in their care. Eighty per cent of respondents had no training in the management of suicidal patients, while only 7.0% had specific training in the Mental Health Care Act. Conclusion A critical lack in the knowledge, training and implementation of the Mental Health Care Act exists amongst prehospital providers within the Western Cape. A further concern is the negative feelings towards suicidal patients and the lack of commitment to transporting patients to definitive care. It is essential to urgently develop training programmes to ensure that prehospital providers are better equipped to deal with suicidal patients. Introduction 'People who are suicidal need further management -to be left alone is like saying nobody cares.'… ' [I] wish further training could be done as most patients end up DOA [dead on arrival] at a later stage.' (Participant 38, female, provincial sector) Globally, suicide poses an increasingly grim public health concern and is the 13th leading cause of mortality worldwide. 1 The World Health Organization (WHO) estimates 1 million deaths occur annually as a result of suicide. 2 In South Africa, national suicide figures in the year 2016 give age-standardised rates of 18.7/100 000 for men and 4.7/100 000 for women. 3 In society, there is a deep-seated belief that people who threaten suicide are not likely to follow through with their threat. However, the data assert that this is incorrect. 4 Repeated suicide attempts often occur when a patient's initial threat or attempt does not get the desired effect (i.e. the cry for help fails). 4 About 10% of those who threaten or attempt suicide eventually do kill themselves, 5 while 80% of those who commit suicide have previously verbally stated their intention to do so. 6 difficulties in health worker-patient relationships. 8 Patients discharged against medical advice (DAMA) showed reduced treatment benefit; worse psychiatric, medical, psycho-social and socioeconomic functioning; had decreased access to outpatient services; overused emergency services; and were readmitted sooner. 8 Considering the risks of multiple suicide attempts, the detection and follow-up of treatable conditions is key to successful long-term management of patients at risk. 
5 Kuo et al.'s study on 11 040 acutely ill psychiatric inpatients in Taiwan found an increase in incidence of successful suicide in psychiatric patients who leave 'DAMA' compared to those who were discharged by their psychiatrists. 7 Patients who were DAMA might have unresolved, unaddressed detrimental psychiatric or psychosocial difficulties and were more likely to complete suicide. 7 Patients presenting to health care services with a suicide attempt who receive active suicide prevention contact and follow-up may reduce the risk of repeated suicide attempts in the next year. 9 First responders and prehospital personnel have been identified by the WHO as key role-players in the prevention of community suicide. 2 Prehospital providers are often the first and only health care providers on scene after a suicide attempt; thus, their role in the patient's access to mental health care services is critical. Of concern is the lack of formal training in the Mental Health Care Act (MHCA) and direction by policy-makers for these practitioners. The urgency of managing these patients within the community adds additional cognitive and emotional stress for prehospital providers. 5 Prehospital providers engage in the management of the suicidal patients in two ways: most commonly in the assessment at the scene of the incident and transport to hospital, as well as the interfacility transfer of patients to specialised psychiatric services. 10 South Africa has a three-tiered 'level of care' emergency medical services (EMS) model, basic life support (BLS), intermediate life support (ILS) and advanced life support (ALS). The South African EMS is currently going through a period of transition. Until now, the pathway to become a prehospital provider was either through vocational shortcourse training, or through formalised tertiary education. After 6 weeks of training, an individual could register with the Health Professions Council of South Africa (HPCSA) as a Basic Ambulance Assistant (BLS). After approximately 6 months of work experience, a BLS provider could complete a 6-8-month course, registering as an Ambulance Emergency Assistant (ILS). Finally, after another 6 months of work experience, the ILS provider could complete a 9-12-month course and register as a Critical Care Assistant (ALS). This means of qualification is now being phased out, as promulgated by the Minister of Health of South Africa. Going forward, formal academic training at a higher education institute is required to register as a prehospital provider. The courses are between 1 and 4 years in duration. The overarching term 'prehospital provider' has been selected to describe any EMS practitioner in this article, regardless of qualification. In a resource-limited health system, first responders frequently become the gatekeepers for access to mental health care services and, as a result, they need to have a clear understanding of both local legislation for involuntary assessment and treatment criteria. 11 These individuals can frequently become discouraged from transporting patients to health care facilities because of long waiting times and bed shortages, and these frustrations undermine the importance of helping the patient access mental health care and substance abuse treatment. 2 Pre-arranged agreements between prehospital services, hospitals, community mental health care services and addiction agencies can help the first responder to streamline the referral process. 
2 Patients who have attempted to commit suicide may have limited insight into their illness and a restricted ability to cooperate with treatment. 12 This may require measures that, in line with local legislation, restrict their personal freedom and allow transportation to hospital against their will. 5 However, the health protocols on the evaluation of suicidal patients rarely apply to prehospital providers, who are generally the first at the scene. 5 Actively suicidal, aggressive or agitated patients can pose a risk to staff safety on the scene of an incident. Pre-hospital providers have limited training around verbal de-escalation, and should verbal de-escalation techniques fail and patients pose a danger to themselves; prehospital providers may require the assistance of ALS paramedics able to prescribe and administer sedative drugs, physical restraints and/or police intervention may be required to transport patients to a facility for a formal risk assessment. 5 Safety is also of concern during ambulance transport. There have been cases reported of injury or completed suicide during hospital transfer because of patients jumping from moving ambulances. 11 In addition to the explicit guidance on the process for application for involuntary admission of psychiatric patients, the South African MHCA of 2002 provides criteria to which the patient should be assessed in order to initiate such an application. In short, the patient should be incapable of making an informed decision regarding their own care because of suspected mental illness, should pose a threat of harm to themselves or others, and care is needed to protect the financial interests or reputation of the individual. Such applications should be made in writing. 13 For prehospital providers, this is not achievable at the scene and therefore requires the rapid assessment and transport of these patients to health facilities for evaluation and possible emergency admission on an involuntary basis. The MHCA describes such involuntary transport to be a function of the South African Police Services and does not address the role of EMS. In reality, however, EMS are often the first to arrive at the scene of a suicide attempt and have to manage these patients without police assistance. Determining the training received on and the interpretation and application of the MHCA by prehospital providers is therefore essential towards ensuring adequate care for suicidal patients. Aim The aim of this study was to describe the attitudes (perceptions and experiences) of prehospital health care providers regarding the care and management of patients with a suicide attempt in the Cape Town Metropole. Objectives The following were the objectives of this study: • to understand the perceptions of prehospital providers regarding transport decisions for suicidal patients • to describe personal experiences of the management of patients with suicide attempts • to determine the training that prehospital providers have received on the MHCA in dealing with mental health care users. Research methods and design The study was a cross-sectional design using a survey with open-and closed-ended questions. This survey was generated through a review of the literature and validated for content by an expert group of emergency medicine, prehospital medicine and psychiatry practitioners. The survey included demographics, training in and knowledge of the MHCA in relation to prehospital transport decision-making. 
There were also five vignettes in which respondents were expected to describe their transport and management decisions in various patients who refused transportation. The vignettes ranged in severity of suicide attempt based on traditional risk factors (patient age, gender, major depression, feelings of hopelessness, substance abuse and previous suicide attempts). 14 Opinions regarding suicidal patients were explored, as well as accounts of challenges they have experienced transporting suicidal patients. A pilot was carried out with a small cohort of prehospital providers to test usability and feasibility. A convenience sample of 100 prehospital providers of all levels from the provincial and private sector was sought via cluster randomisation of Cape Town ambulance stations. Inclusion criteria were registration with the HPCSA and full-time clinical operational employment. Data were subjected to descriptive analysis with the aid of NVivo® software (QSR International; Victoria, Australia) as well as hand coded independently by the three researchers. Demographic data and multiple-choice questions are presented as total numbers, means, medians and standard deviations. Associations between demographic data, knowledge-based answers and transport decisions were investigated by chi-square analysis. Ethical consideration Ethics approval was obtained from the University of Cape Town's Human Research Ethics Committee (HREC 533/2014) and local permission to conduct the study was obtained from the three ambulance services' relevant research committees. Results One-hundred and thirty paper surveys were distributed and 100 were returned, yielding a response rate of 77.0%. Two responses were excluded for data quality, with 98 responses eligible for analysis. There was an equal distribution of private and public service respondents (Table 1). Feelings around transport decisions Common themes were identified in the answers to case vignettes (Box 1). While the risk to self-harm was more explicit, the practitioners reported a higher need for police involvement and the use of physical and chemical restraint. In younger patients, family involvement was more likely. In less than two-thirds of all vignettes did participants take steps to convince patients to be transported to hospital. Some respondents reported utilising some form of informal risk assessment methods, with no practitioners reporting the use of validated risk assessment tools. Another common theme identified was the described need for a more senior roleplayer to aid with transport decision-making. This ranged from the control room supervisor, officer or shift-leader or even a higher qualified clinician such as ALS paramedic or doctor. A subset of staff also expressed the importance of these discussions being recorded. Family, involvement was utilised in various means. This included using the family to convince the patient to be transported by ambulance voluntarily and in certain cases requested the family transport the patient by force privately to hospital. In the vignettes involving minors with suicide attempts, the respondents reported that they would allow family to make the ultimate decision about transport. Some participants described the family as being a hindrance rather than a help to the prehospital care providers' management strategies: 'they get in the way' and 'the family are sometimes more difficult than the patient'. 
The provision of involuntary care to patients who pose a potential danger to themselves was not unanimously expressed by respondents. In the vignettes, less than half of respondents stated that they would transport a patient to hospital against their will. 'I can't force the patient … I have to leave patient at home. When the patient refuses transport there is not much I can do' 'We can't force anyone because it is a form of kidnap'. The importance of the patient signing for their refusal of care was commonly reported and many expressed the belief that this documentation absolved them from legal responsibility. Personal experiences and attitudes around the management of suicide attempt patients A portion of respondents expressed negative attitudes towards patients, citing varying reasons. A common theme identified was that caring for these patients was a 'waste of time', with prolonged time spent 'talking in circles' for which they felt they did not have the patience or would be penalised in terms of performance. There was mention that the time 'wasted' with these patients could be better spent. 'It feels like you are able to help more people with your time'. Perceived self-pity on the patient's behalf was seen as an irritation by some respondents: 'clearly looking for attention, trying to spite somebody and just wasting people's time if they weren't going to do any harm to themselves … pretending'. Emotions expressed included feeling drained from these encounters, scared, threatened, finding it difficult to not be judgemental and an inability to empathise. Some respondents expressed that the patient has 'chosen to die' and should therefore be left to their own devices. Numerous prehospital providers report that they feel uneasy about the uncertainty of the outcome of their suicidal patients who were not transported to hospital. Some reported to actively avoid obtaining follow-up on these patients for fear of the consequences. At least five respondents reported a personal account of a suicidal patient's death after being given permission to sign a refusal of care or transport document and being left on scene. Additionally, four respondents reported having not transported patients to hospital, who subsequently developed severe complications resulting from the suicide attempt or a subsequent attempt at suicide. In addition, 15 respondents mentioned that they personally know a colleague who reported a death of a suicidal patient after not transporting the patient to hospital. Staff reported a lack of support after these events with a lack of counselling or debriefing opportunities. One reported a colleague resigning because of difficulty in coping with a patient death. More than half of participants identified threats to their safety during the care of suicidal patients as a concern. Three reported having been injured by suicidal patients in the course of emergency assessment and treatment. Mechanisms of injury included bite wounds, facial scratching and blunt assault. Reports of self-defence methods utilised by staff included the use of a Taser as well as placing the patient in a 'head-lock' until assistance arrived. Participants described concern regarding the risk of a patient committing suicide while in their care in the ambulance. Two respondents described past experiences of patients stabbing themselves and one respondent reported a patient jumping from a moving ambulance. 
Training of prehospital care providers in the context of the Mental Health Care Act Of the respondents, 80% (n = 78) reported no training in the management of psychiatric patients, while only 7% (n = 7) had specific training in the MHCA of 2002. A chi-square test revealed no association between qualification and training in psychiatric management (p = 0.062) or in the MHCA (p = 0.41). Participants expressed a desire for further education or training in the management of psychiatric (and specifically suicidal) patients in the form of Continuing Medical Education (CME) sessions, internal service updates or training as part of original qualifications. Discussion Patients with fewer risk factors for completed suicide were managed more conservatively and were less likely to be transported to hospital against their will. This is of major concern considering that patients with minimal risk factors are still inherently at risk of potential self-harm or suicide. Eventual death by suicide among such patients is demonstrated in the literature, 8 and specific accounts of eventual suicide in patients left on scene were reported by the respondents sampled in this study. Respondents erroneously stated that patients cannot be transported to hospital without consent under any circumstances. Respondents did not have the confidence to make decisions related to further management, instead delegating transport decisions to the family, specialists or senior role-players. We have identified a significant need for training of South African prehospital providers with regard to the MHCA as well as on-scene evaluation of the suicidal patient. In many countries, trained prehospital providers may initiate involuntary mental health holds. 12 Personal safety was an important theme identified. In a recent unpublished study on prehospital personnel, 56.0% of the 158 participants reported that they had been assaulted while on duty. 15 Similar results are described in the international literature, which reports an incidence of violence of between 61.0% and 87.5%. 16 To protect themselves from potentially dangerous patients, providers often involve the police. Providers expressed concern that patients with negative previous experiences of police restraint often reacted poorly. Also of concern are the negative attitudes towards suicidal patients. 17 The literature suggests that these often present as a lack of empathy on the part of the health care provider and can lead to accusing patients of attention-seeking behaviour. 18 A general lack of training, a lack of decision-making support and fear for personal safety may all contribute to these negative attitudes. Limited training was reported across all qualifications with regard to the MHCA and the management of suicidal patients. Steps for improvement include re-evaluating the current student curricula to better serve practical application and designing training programmes for already-graduated providers. Guidelines and policies to assist prehospital providers in decision-making related to transport refusal by suicidal patients should be developed at a national level. Dedicated response teams with specific training may be a solution for managing these cases. Limitations External validity and generalisability of the study were affected by the limited geographic scope, convenience sampling and self-selection bias. However, we feel that the depth of the personal experiences disclosed has significant value.
The mental health of each of the participants was not assessed, and therefore the impact that this might have had on the specific transport decision-making cannot be extrapolated. Conclusion This study shows a critical lack in the knowledge and training of prehospital providers within the Western Cape regarding the management of patients with a suicide attempt. Of further concern are the negative feelings expressed towards suicidal patients and a lack of commitment to transport patients to definitive care. The future development of, and research on, mental health care training programmes for emergency services and the adjustment of current curricula are critical. Specifically, this should focus on the transport of patients with emergency mental health care needs, methods of involuntary transport, psychiatric differential diagnoses and the legal framework within which this may be executed to protect the rights of the patient.
2018-12-02T16:55:09.979Z
2018-10-30T00:00:00.000
{ "year": 2018, "sha1": "7af9c937a2665ef0960da5d69a5037fe738aa008", "oa_license": "CCBY", "oa_url": "https://sajp.org.za/index.php/sajp/article/download/1156/1292", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7af9c937a2665ef0960da5d69a5037fe738aa008", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
266814274
pes2o/s2orc
v3-fos-license
Plant responses to climate change, how global warming may impact on food security: a critical review Global agricultural production must double by 2050 to meet the demands of an increasing world human population but this challenge is further exacerbated by climate change. Environmental stress, heat, and drought are key drivers in food security and strongly impacts on crop productivity. Moreover, global warming is threatening the survival of many species including those which we rely on for food production, forcing migration of cultivation areas with further impoverishing of the environment and of the genetic variability of crop species with fall out effects on food security. This review considers the relationship of climatic changes and their bearing on sustainability of natural and agricultural ecosystems, as well as the role of omics-technologies, genomics, proteomics, metabolomics, phenomics and ionomics. The use of resource saving technologies such as precision agriculture and new fertilization technologies are discussed with a focus on their use in breeding plants with higher tolerance and adaptability and as mitigation tools for global warming and climate changes. Nevertheless, plants are exposed to multiple stresses. This study lays the basis for the proposition of a novel research paradigm which is referred to a holistic approach and that went beyond the exclusive concept of crop yield, but that included sustainability, socio-economic impacts of production, commercialization, and agroecosystem management. Global warming, temperature stress and eco-physiological effects on crop yield and quality Climate change and agricultural production are highly correlated.It is now well established that global warming affects agriculture in several ways, including changes in average temperatures and rainfall.The predictability of extreme meteorological events (e.g.heat waves, flood and drought), changes in pests and diseases, increase in atmospheric carbon dioxide and ground-level ozone concentrations, and changes in the nutritional quality of foods (Zhao et al., 2020;Kumar et al., 2022) are among the drawbacks of this phenomena. This study considers the relationship of climatic changes and their bearing on sustainability of natural and agricultural ecosystems, with a consideration to the role of omicstechnologies, genomics, proteomics, metabolomics, phenomics and ionomics.Improving crops for higher adaptability and tolerance to climate changes can be achieved by resource saving technologies as precision agriculture and new fertilizers and amendments.Nevertheless, the adoption of a more holistic vision of agriculture and food production is necessary to achieve sustainable food security. 
Global warming is defined as the continuing rise of the average temperature of the Earth's climate system and is one of the causes forcing climate change (IPCC, 2019; Seneviratne et al., 2021; Zandalinas et al., 2021). Temperature is one of the major environmental factors affecting plant growth, development, and yield. Temperatures persistently above those optimal for plant growth may induce heat stress (HS), thus constraining flowering and fruit development and strongly reducing yields. Above some threshold, high temperature may cause plant death. Extreme heat events can be classified according to the maximum temperatures reached (intensity), how often the events occur (frequency), and how long they last (duration). Extreme HS episodes and prolonged heat (global warming) demand radically different approaches from breeders to meet the demands of farmers and consumers for food security. Several aspects need to be considered when carrying out risk assessment for crop production and food security. These include the extent of the adverse event, how frequently the sustainable temperature thresholds are likely to be crossed within the growing season, whether these extreme episodes exceed lethal temperatures, and the length of the event. Models that capture the variety of drivers determining crop yield variability, together with scenario climate input data that sample the range of probable climate variation, have been developed with an eye towards the mitigation of yield losses (Ribeiro et al., 2020; Schauberger et al., 2021; Stella et al., 2021). Under a global warming scenario, the identification of the temperature thresholds for the major crop plants and their effects on yield is vital in predicting risk for food security (Zhao et al., 2017). This is particularly true when considering that the frequency and intensity of heat events will increase dramatically in the future, especially in tropical regions (geographic perspective) and in developing countries (national perspective), leading to >15% of global land becoming more exposed to levels of heat stress that will affect both food production and human health (Sun et al., 2019). Food production in the last century has shifted from the use of about 2500 different plant species to reliance on the 'four queens': rice, wheat, maize, and soybean (Smýkal et al., 2018) (Figure 1). These crops provide two-thirds of the total human energy intake, while the grain legumes alone contribute 33% of required human dietary proteins. This affects food security and environmental sustainability (Foyer et al., 2016). Persistent dependence on such a small number of agricultural commodities (Khoury et al., 2014) coupled with climate uncertainties (Foley et al., 2011) could become a factor of great economic instability and political vulnerability. Assessing the impact of global temperature increases on the production of these commodity crops is therefore a critical step for maintaining global food security (Zhao et al., 2017), as discussed in recent reviews reporting on the threshold temperatures for several crop species (Kaushal et al., 2016; Janni et al., 2020). Figure 1 Map of probable shifts in cultivation areas of some key and traditional crops due to global warming and climate change, positioned on the heat projection map for 2081-2100 extracted from the latest IPCC report.
Several examples have been reported of the effects of heat on crop yield and quality. In wheat, a mean daily temperature of 35°C caused total failure of the plant, while exposure to short episodes (2-5 days) of HS (>24°C) at the reproductive stage (start of flowering) resulted in substantial damage to floret fertility, leading to an estimated 6.0 ± 2.9% loss in global yield with each degree-Celsius (°C) increase in temperature [8,35]. Increasing the duration of high temperature at this stage linearly reduced the grain weight (Prasad and Djanaguiraman, 2014); similar effects have been reported for pea (Bhattacharya, 2019), lentil (Barghi et al., 2012) and chickpea (Wang et al., 2006). In response to 2°C of global warming, the total production in the top four maize-exporting countries is projected to decline by 53 million tons (51.9-54.8), equivalent to 43% (41.5-43.8) of global maize export volume (Tigchelaar et al., 2018). Kaushal et al. (2016) provide an extensive analysis, for several crop species, of the threshold temperatures above which growth and development are compromised, while Zhou et al. (2022) extensively reported the physiological effects of heat stress on yield limitation. A recent overview of the effects of threshold temperatures for vegetative growth and reproductive development in several crop species has been reported by Janni and co-workers (Janni et al., 2020). Even taking into account the heterogeneity in the collection of data and the time frames of experiments, it is evident that HS is correlated with decreased yields of the major crops; cereals are particularly sensitive to heat during grain filling, which also affects quality (Maestri et al., 2002). Seed filling is a crucial growth stage for most crops; it involves mobilization and transport of various chemical constituents and activates many biochemical processes responsible for the synthesis of proteins, carbohydrates, and lipids in the developing seeds (Ali et al., 2017). It is influenced by various metabolic processes occurring in the leaves, especially production and translocation of photo-assimilates, providing precursors for the biosynthesis of seed reserves, minerals, and other functional constituents (Fahad et al., 2017; Sehgal et al., 2017). HS can impair several physiological processes linked with seed size and quality. HS during grain filling markedly decreases starch accumulation in wheat (Hurkman et al., 2003), rice (Yamakawa and Hakata, 2010) and maize (Yang et al., 2018), as well as the levels of sugars such as fructose and of sugar nucleotides such as hexose phosphate (Yang et al., 2018); the decrease in sugars may be related to enhanced assimilate utilization rather than to an increase in edible component production. In maize, waxy grain starch content was decreased, whereas protein content was increased, resulting in a change of grain quality (Yang et al., 2018). Moreover, increasing temperature and CO2 affect protein and micronutrient contents in grain (Chakraborty and Newton, 2011) and in soybean (Li et al., 2018). In soybean under HS, total free amino acids were reduced together with the total protein concentration, while the oil concentration was significantly increased (Takahashi et al., 2003). As a general conclusion, under HS, reductions in total yield are mainly due to the alterations of source and sink activities that take place.
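As a rough back-of-envelope illustration of how such per-degree sensitivity figures translate into absolute losses, the short sketch below combines the roughly 6% per °C wheat estimate quoted above with a baseline production figure and warming scenario; the baseline tonnage and the warming level are assumptions chosen only for illustration, not values taken from the studies cited.

```python
# Hypothetical illustration: turning a per-degree-Celsius yield-loss estimate
# into an absolute production loss. The 6.0 +/- 2.9 %/°C figure is the wheat
# estimate quoted in the text; baseline production and warming are assumed
# values for illustration only.
loss_per_degC = 0.060          # central estimate, fraction of yield lost per °C
loss_uncertainty = 0.029       # +/- range around the central estimate
baseline_production_mt = 780   # assumed global wheat production, million tonnes
warming_degC = 2.0             # assumed warming scenario

central = baseline_production_mt * loss_per_degC * warming_degC
low = baseline_production_mt * (loss_per_degC - loss_uncertainty) * warming_degC
high = baseline_production_mt * (loss_per_degC + loss_uncertainty) * warming_degC

print(f"Projected loss at +{warming_degC}°C: "
      f"{central:.0f} Mt (range {low:.0f}-{high:.0f} Mt)")
```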
Although it might be argued that the 'fertilization effect' of increasing CO2 concentration may benefit crop biomass, thus raising the possibility of increased food production (Degener, 2015), emerging evidence has demonstrated a reduction in crop yield when increased CO2 is combined with high temperature and/or water scarcity, making a net increase in crop productivity unlikely (Long et al., 2006). Water supply is thus a deeply linked issue. It has been estimated that in the period 1990-2020 total rainfed and irrigated growing areas together increased by 35% for maize, 0.3% for wheat, 13% for rice, and 159% for soybean. Rainfed areas for wheat and rice decreased by 10 and 7%, respectively, while the rainfed maize area increased by 24% (compared to the 35% increase in total area), and rainfed soybean areas increased by 158% - most of the increase in soybean areas was rainfed (Sloat et al., 2020). Agroecosystem resilience, plant resilience, temperature tolerance An increase of global temperature was perceived already in the 1970s and led to the definition of this phenomenon as global warming (Broecker, 1975). Indeed, the majority of reports have warned that HS due to increases in global temperature can cause global yields to decline (Sadok and Jagadish, 2020; Zhu et al., 2022) as a result of eco-physiological stress. In fact, projections of climate change risks produced through advanced modelling are consistent in indicating a negative influence on crop production (Challinor et al., 2014; Konduri et al., 2020) and a worsening of food quality and nutritional value (Chakraborty and Newton, 2011). Climate models can forecast temperature increases at the regional level with higher certainty than other changes, such as precipitation. Multimethod analysis can improve our confidence in the assessment of some aspects and consequences of future climatic impacts on crop productivity and inform the adoption of specific rescue strategies (Zhao et al., 2017). After 30 years of efforts and some progress under the United Nations Framework Convention on Climate Change (UNFCCC), anthropogenic greenhouse gas (GHG) emissions continue to increase, and the possibility of a catastrophic outcome is relatively under-studied and poorly understood (Kemp et al., 2022). The specialization in crop selection and production, and the economic scale that has developed, have led to a huge increase in productivity in agroecosystems. But the long-term sustainability of these may be reduced by some of the constraints associated with global warming, especially when it is considered that current complex agroecosystems provide not only harvests but also other important ecosystem services of great social and economic value (Di Falco and Chavas, 2008). Several reviews have addressed mainly HS effects on crop yield, focusing on the role played by the molecular mechanisms underpinning plant resilience and yield reduction (Table 1). However, most did not consider global warming and HS as significant combinatorial factors (Table 1) acting to reduce food security. Resilience of cropping systems to global warming and to temperature increase can be described in terms of resilience of the related agroecosystems, i.e.
their capacity to support yield in critical environmental conditions like HS (Allan et al., 2013;Zampieri et al., 2020;Saeed et al., 2023).We can think the resilience of an ecosystem as the capacity to maintain its function, identity and organization, though subjected to a critical disturbance (Holling, 1978).For agroecosystems this definition is problematic due to the bias of human intervention, but metrics of resilience can be taken into consideration in a framework which uses a number of phenological indicators (Cabell and Oelofse, 2012;Deutsch et al., 2018). Resilience is certainly a holistic way to describe some properties of agroecosystems which are context-dependent (Carpenter et al., 2001).But a system considered resilient today can become less so over the years or even the months, because of a gradual or a sudden changes of context (Holling, 2001).Tolerance to temperature stress has a cost because it implies a consistent allocation of energy resources to maintain survival at the expense of reproduction and growth and therefore with a tradeoff between maintenance and yield. Three mutually interacting concepts need to be considered when dealing with agroecosystems.These are (i) agroecosystem welfare and the way it interacts with human needs over the time; (ii) agroecosystem resilience, meaning its capacity to adapt, overcome stress and reorganize in stressing environments or when perturbation to the norm becomes frequent, as in global warming; and (iii) food security, the production of sufficient food of good quality for the human and animal populations.A holistic approach to food security expands the problem well beyond the simple concept of crop yield, also including sustainability, socioeconomic impacts of production, commercialization, and agroecosystem management. Both social and biological aspects are relevant to a correct management of agroecosystems.But climate change and global warming could give rise to such a rapid, deep, and unpredictable changes that current agroecosystems may fail to adapt.Recently, a meta-analysis on 10,000 animal species has been published considering only phenological traits, concluding that most of these species are at a risk of not surviving if global change continues in intensity and direction.Even maintaining the highest possible level of diversity within our agroecosystems may not be sufficient to combat global change and its effects on food security (Hoy, 2015). Global warming and temperature increase are often taken as stressor examples but although they are certainly threatening phenomena, it is difficult to isolate each single component from them.Plants resilient to global warming and temperature increase may be capable of withstanding HS without any significant departure from their growth habits and productivity (Maestri et al., 2002;Law et al., 2018). 
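One way to make the resilience framework discussed above operational is to compute a simple yield-stability proxy from a production time series. The sketch below uses the squared mean of annual yields divided by their variance; this is only one of several indicators proposed in the literature, not the framework's canonical metric, and the yield series shown are invented purely for illustration.

```python
import statistics

def production_resilience(yields):
    """Simple resilience proxy: squared mean of annual yields divided by
    their variance. Higher values indicate production that stays closer to
    its long-term level despite disturbances such as heat waves.
    This is one possible formulation, not the only metric in use."""
    mean = statistics.fmean(yields)
    var = statistics.pvariance(yields)
    return float("inf") if var == 0 else mean ** 2 / var

# Hypothetical annual yields (t/ha) for two fields over the same decade
stable_field = [6.1, 6.0, 5.9, 6.2, 6.0, 5.8, 6.1, 6.0, 5.9, 6.1]
heat_hit_field = [6.1, 6.0, 4.2, 6.2, 3.9, 5.8, 6.1, 4.5, 5.9, 6.1]

print(production_resilience(stable_field))    # large value: stable production
print(production_resilience(heat_hit_field))  # much smaller: heat-wave losses
```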
Novel fertilizers and biostimulants to increase plant resilience As previously discussed, global changes including high temperatures, drought, and salt accumulation are reported as main factors of soil desertification and yield reduction. In this context, biostimulants (BSts) could play crucial roles in mitigating the negative effects of stresses on plants by inducing several protection mechanisms, such as molecular alterations and physiological, biochemical, and anatomical modulations (Sangiorgio et al., 2020; Bhupenchandra et al., 2022). They also stimulate the innate immune responses of plants to biotic stress by deploying cellular hypersensitivity, callose deposition, and lignin synthesis (Bhupenchandra et al., 2022). Production of "conventional" chemical fertilizers has a large share in global CO2 emissions, calculated at about 500 million tons/year worldwide (FAO, 2020). Production of organic fertilizers, on the other hand, is largely dependent on animal farming, with its considerable share of greenhouse gas emissions (Timsina, 2018; Ramakrishnan et al., 2021). Sustainable alternatives under experimentation are nanofertilizers (Kah et al., 2018), biofertilizers (Bhardwaj et al., 2014) and new soil amendments (Rombel et al., 2022). Nanofertilizers belong to the family of engineered nanoparticles (ENPs), with dimensions between 1-100 nm, and have shown some beneficial protective effects on plants, such as stimulation of growth and promotion of nutrient absorption (Abdel-Aziz et al., 2021; Kalwani et al., 2022). Recent studies on tomato have shown the beneficial effect of some nanoclays, confirming previous studies in zucchini (Marmiroli et al., 2021; Pavlicevic et al., 2022). Some advantages of nanofertilizers compared to chemical fertilizers are the slower release of nutrients over time, thus avoiding dispersion and wash-out into surface water bodies with the risk of eutrophication (Zulfiqar et al., 2019). However, the production of nanofertilizers is still expensive and limited by regulatory frameworks and by farmers' acceptance (Kah et al., 2018; Kah et al., 2019). Biological (green) synthesis of bio-nanofertilizers is very slow but may become a suitable option (Zulfiqar et al., 2019). Plant growth promoting microorganisms (PGPM) are microbes (bacteria, fungi) that, through plant-microbe interactions, stimulate the plant immune system (Backer et al., 2018). PGPM stimulate and enhance plant capabilities to absorb nutrients and defend against pathogens. This may result in increased plant yield and health (Backer et al., 2018; Lopes et al., 2021; Ramakrishnan et al., 2021). The performance of biofertilizers can be enhanced by combining them with soil amendments that improve soil properties (pH, CE, water holding capacity) and stimulate microbial growth (Backer et al., 2018; Mohamed et al., 2019; Rouphael and Colla, 2020).
Among the newly developed soil improvers, biochar has gained some interest because: i) it is produced by pyrolysis or pyro-gasification of renewable biomasses, processes which do not produce significant amounts of CO2; ii) once in the soil, it significantly increases the soil's CO2 holding capacity; iii) it has a high porosity and absorbent capacity towards water and nutrients; and iv) it can provide a reliable niche for PGPM, thus favoring their persistence and growth in the soil after inoculation. Recently it has been found in wheat and maize that biochar "functionalized" with PGPM favors soil microbial diversity and the cross-talk between plant and soil, which leads to better plant physiological parameters (Graziano et al., 2022). A matrix evaluating risks and benefits of biochar utilization has recently been proposed (Marmiroli et al., 2022). The relevance of these new BSts for the nutrition and health of plants under global warming conditions is paramount. They increase the natural resilience of the plant against environmental cues (biotic and abiotic) through stimulation of the plant immune system, and they enhance the water holding capacity of the soil as "pore water" (Beesley et al., 2010), therefore exposing the plant to lower water stress, determining a slower release of nutrients and making them more broadly available to the plant. An important consideration is also the global savings in CO2 emissions that their introduction in agriculture may bring (Li and Chan, 2022). Recent updates in omics for heat resilience Many novel omic technologies, including genomics, proteomics, metabolomics, phenomics and ionomics, have been applied during the last few decades to investigate the modifications in the genome, transcriptome, proteome, and metabolome occurring as plant stress conditions change (Wani, 2019). Omic technologies provide independent information about genes, genomes, RNAomes, proteomes and metabolomes; however, integrating this information is important for finding a durable solution to the questions addressed. A typical "integromics" study on the stress-responsive behavior of a given crop examines the genes and genome to understand their structure and organization and identifies candidate genes using either structural or functional genomics (Muthamilarasan et al., 2019), as well as data from metabolomics. The progress of omics technologies has enabled direct and unbiased monitoring of the factors affecting crop growth and yield in response to environmental threats (Janni et al., 2020; Raza et al., 2021b; Raza et al., 2021a). Overall, omics constitute powerful tools to reveal the complex molecular mechanisms underlying plant growth and development, and their interactions with the environment, which ultimately determine yield, nutritional value (Setia and Setia, 2008; Soda et al., 2015), and the required level of agricultural inputs. Janni et al. (2020) reported an exhaustive list of successful case studies focused on the application of omics to several crops to enhance crop resilience to HS (Zhou et al., 2022). Ionomics is a high-throughput elemental profiling approach which studies the mechanistic basis of the mineral nutrient and trace element composition (also known as the ionome) of living organisms (Pita-Barbosa et al., 2019). By coupling genetics with high-throughput elemental profiling, ionomics has led to the identification of many genes controlling the ionome and of their importance in regulating environmental adaptation (Huang and Salt, 2016; Zhang et al., 2021).
Most genomics investigations are concentrated to understanding the role of Heat Shock Proteins (HSPs) and Heat Shock Factors (HSFs) in heat response in crops such as tomato (Scharf et al., 2012;Marko et al., 2019), in barley (Mangelsen et al., 2011) andwheat (Maestri et al., 2002;Hurkman et al., 2013;Comastri et al., 2018), with a focus on flower development and flowering time.Reactive Oxigen Species (ROS) genes also play a key role in basal heat tolerance, alone or as regulators of the activation of HSF (Driedonks et al., 2016) and therefore are considered with equal interest. Other reviews have discussed the identification of differentially expressed genes (DEGs) associated with heat stress (Masouleh and Sassine, 2020;Wang et al., 2020;Zhao et al., 2020;Kang et al., 2022).Proteomics has provided detailed information for the encoded proteins, revealing their function in stress tolerance mechanisms (Priya et al., 2019;Katam et al., 2020) in several plant species and developmental stages (Janni et al., 2020;Chaturvedi et al., 2021).Adaptive response to HS also involves various post-translational modifications (PTMs) of proteins.The accumulation of stressassociated active proteins (SAAPs) in wheat has been reported recently (Kumar et al., 2019). New breeding techniques (NBTs) and in particular those based on genome editing (CRISPR/Cas9) encompasses an impressive and revolutionary set of molecular tools to enhance productivity by creating genetic variability for breeding purpose, disease-free and healthy planting genetic material, improvement in stress tolerance (Mote et al., 2022;Brower-Toland et al., 2023;Liu et al., 2023).The genome-editing approach can significantly accelerate the breeding times to select environmentally tolerant crop varieties (Zhang et al., 2023). It is now well established that major environmental stress causes metabolic reorganization towards homeostasis, maintaining essential metabolism and synthesizing metabolites with stressprotective and signaling characteristics (Schwachtje et al., 2019).This has been determined applying untargeted metabolomics in species including tomato (Paupière et al., 2017), maize (Qu et al., 2018), barley (Templer et al., 2017), wheat (Thomason et al., 2018;Buffagni et al., 2020;Yadav et al., 2022), soybean (Xu et al., 2016), citrus (Zandalinas et al., 2017) and rice (Sun et al., 2022).Sugars, free amino acids, antioxidants, fatty acids and organic compounds are key players in the heat response and in the response to combined stresses such as heat plus drought (Vu et al., 2018).Furthermore, lipids, being major components of cells and organelles membranes, are among the first targets of ROS produced during HS (Narayanan et al., 2016;Narayanan et al., 2018).An interesting correlation was found between the type of metabolites involved and the need to protect specific cellular functions or cell compartments from the adverse effects of stress, drawing attention to the application of metabolomics approaches for identification of new genetic materials for breeding. 
Improvements have been achieved in recent years using plant phenomics as a tool to mitigate global warming effects and to shape genotypes and varieties more adaptable to the ongoing environmental challenges. Plant phenotyping enables non-invasive quantification of plant structure and function and of plants' interactions with their environment, and can be employed in pre-breeding and breeding selection processes (Watt et al., 2020). Modern plant phenotyping measures complex traits related to growth, yield, and adaptation to stress, with improved accuracy and precision at different scales of organization, from organs to canopies (Fiorani and Schurr, 2013). High throughput phenotyping (HTP) involves the acquisition of digital phenotypic traits by means of sensors, typically in the visible spectrum, as well as in the near infrared and in the induced fluorescence domain (Tardieu et al., 2017), to monitor plant photosynthetic activity (Li et al., 2014; Perez-Sanz et al., 2017), growth status (Petrozza et al., 2014; Danzi et al., 2019) and overall water content as main components of plants' response to stress. HTP has been used successfully to monitor heat stress in plant species including rice, wheat and Arabidopsis and to select stay-green genotypes (Araus and Kefauver, 2018; Juliana et al., 2019; David et al., 2020; Gao et al., 2020; Karwa et al., 2020; Luan and Vico, 2021; Pettenuzzo et al., 2022). Successful image-based methods have been developed that directly target yield potential traits, in particular by increasing the throughput and accuracy of enumerating wheat heads in the field, to help breeders manipulate the balance between yield components (plant number, head density, grains per head, grain weight) and environmental conditions in their breeding programs (David et al., 2020). The application of biosensors in the field and under controlled environment conditions increases comprehension of the mechanisms underlying ionomics and metabolomics and can markedly improve the efficiency of water management, as well as informing breeders of the most resilient genotypes (et al., 2017; Janni et al., 2019). The perception that inadequate phenotyping methods can hinder genetic gain in major crops has aroused the interest of the scientific community and led to the launch of national, regional, and international initiatives (Araus et al., 2018) such as IPPN (https://www.plant-phenotyping.org/), EPPN2020 (eppn2020.plant-phenotyping.eu) and EMPHASIS (https://emphasis.plant-phenotyping.eu/). With the increased availability of large-scale datasets, deep learning has become the state-of-the-art approach for many computer vision tasks involving image-based plant phenotyping (Singh et al., 2018; Alom et al., 2019; David et al., 2020), allowing the development of powerful image-based models.
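As a concrete example of the kind of digital trait such sensor platforms extract, the sketch below computes the widely used normalized difference vegetation index (NDVI) from red and near-infrared reflectance. The reflectance arrays here are synthetic stand-ins for an actual multispectral image, and the stress threshold is an arbitrary illustration rather than a calibrated value.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, a standard proxy for canopy
    greenness/vigour used in image-based phenotyping: (NIR - RED)/(NIR + RED)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Synthetic reflectance values for a 2 x 3 plot grid (stand-ins for image pixels)
nir = np.array([[0.55, 0.60, 0.30],
                [0.58, 0.28, 0.57]])
red = np.array([[0.08, 0.07, 0.20],
                [0.09, 0.22, 0.08]])

index = ndvi(nir, red)
stressed = index < 0.4   # arbitrary illustrative threshold, not a calibrated cut-off
print(np.round(index, 2))
print("plots flagged as potentially stressed:", int(stressed.sum()))
```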
A holistic thinking within knowledge-based strategies to tackle global changes Soon, temperature increases and global warming will significantly affect the economy and all other aspects of life. Occasional heat waves have always been an aspect of summer weather in many areas of the world; but as climate change makes heat waves more frequent and more intense, the consequent risks for the agriculture sector need to be rethought strategically (Figure 2). The economic drawback of prolonged exposure to heat is stronger for quantity measures of output in agriculture. Specifically, an abnormally hot day preceded by at least eight others reduces the FAO Crop Production Index by almost 3%. This heat-wave measure implies per-wave reductions in output ranging from $0.8-3.1 billion for agriculture and up to $31.9 billion in other sectors (Miller et al., 2021). Moreover, ensemble mean projections indicate average per-country losses reaching 10.3% of agricultural output per year by 2091-2100 without considering mitigation strategies, and 4.5% with adaptation (Miller et al., 2021). Breeding aims to become the main player in mitigating the effects of global warming. It was employed during the green revolution as a tool to boost yields by crossing smaller, hardier versions of common crops. Farmers used these alongside improved irrigation methods, strong pesticides and efficient fertilizers (Rehm, 2018). Figure 2 Soil, CO2 emissions, global warming and sustainability connections within a holistic view. The cooperation of modelers, systems biologists, breeders, and farmers to accommodate environmental changes and improve sustainability reflects the philosophy of the holistic approach needed to overcome the challenge posed by global warming. Despite continuous advances in plant science and in the understanding of the biophysical and molecular responses to local warming and temperature increase, little has been achieved to maintain crop yield and growth under temperature increases and to react to the consequent socio-economic challenges. It has been estimated that a breeding program takes about 30 years "from lab to fork" and, although omics approaches have helped to reduce this time-scale, the interval between a discovery and its application is still too long (Varshney et al., 2014). Moreover, genetic breeding (molecular or not, engineered or not) mostly addresses individual traits, like resistance to a specific pathogen or pest, but is still poor in dealing with complex traits like tolerance to temperature increase (Comastri et al., 2018; Janni et al., 2020). Thus, to address the global climate challenge, a multifaceted and holistic approach in which crop production is seen as only one aspect of agroecosystem stress resilience is needed. To shape more adaptable crops it is mandatory to consider together the agroecosystem, the plant, and the novel technologies now available.
The entire food chain, from the discovery of new varieties to their introduction in the market, requires suitable regulatory processes and distribution systems, which call for advanced management and marketing capacities.The entire chain that affects future developments has been termed the BDA process (Breeding, Delivery and Adoption) (Challinor et al., 2016).The means of adapting to global warming and temperature stress are certainly context-dependent, but they also show some common features.Knowledge-based strategies are needed to deal with food security both in developed and developing countries.In this field, the recent success of many African countries -the "African Green Revolution" -risks to being nullified by lack of strategies to help farmers overcome the problems posed by global warming. Combination of "Omic" technologies are vital for the identification of key genes and metabolic pathways and can support marker-assisted breeding to cope with climate change (Zenda et al., 2021).The dissection of the genetic basis of important agronomic traits, as grain yield, grain size, flowering time, fiber quality and disease resistance paves the way for the application of new breeding techniques (NBTs) in breeding programs (Bohra et al., 2022) or in the exploitation of existing genetic resources through NGS (next-generation sequencing) (Mahmood et al., 2022).Moreover, plant phenotyping bridges two approaches essential for a sustainable production of food security: breeding and precision farming, both under controlled conditions (Janni and Pieruschka, 2022).Campbell et al. (2016) (Campbell et al., 2016) pinpointed four challenges when counteracting the threats posed to food security by climate change: 1) changing the culture of research; 2) creating economical options for farmers, communities, and countries; 3) ensuring options that are relevant to the situations more affected by climate change; and 4) combining strategies such as adaptation and mitigation.Solutions like climate-change smart communities, and farming systems practicing Conservative Agriculture (Davies and Ribaut, 2017) are viewed with interest in developed countries too as permitting resilient agriculture and greater sustainability, and are well suited to the vision of a circular economy. Climate change is in the process of imposing a highly selective extinction of animals and plants.Natural biodiversity alone does not suffice to preserve habitats and agroecosystems.It is obvious that human efforts will need to be directed to protect the low number of cultivated species essential for food security, by also exploring the existing biodiversity to discover novel alleles for climate adaptation (Danzi et al., 2021;Snowdon et al., 2021) and old species that may return useful.To address this emergency, more studies are explicitly considering complex and multifactorial stress combination (Dey et al., 2016;Lovell et al., 2016;Rivero et al., 2022;Zandalinas and Mittler, 2022).Thanks to these studies several evidence on the importance of higher level of complexity was found.While each of the different stresses (salt, high light, herbicides, heat, drought) applied individually, had a negligible effect on plant growth and survival, the accumulated impact of multifactorial stress combination on plants was detrimental.Unique and on that specific pathways and processes are triggered when combination of stresses was applied (Zandalinas and Mittler, 2022). 
To exploit the molecular basis and processes associated with plant responses to HS, and the mechanisms of tolerance, more genome sequence information were essential including the pangenomes of cultivated and wild species and precise identification of key alleles and genes.Precise identification and characterization of specific haplotypes will lay the foundation for genomicassisted breeding strategies, including genome editing, for improved resilience, coupled with higher economic yields and higher sustainability. To tackle the upcoming HS scenarios, a new breeding paradigm is required to focus not on single stress effectors but to move in the direction of higher complexity.The adoption of a holistic approach for climate-resilient breeding should be the next revolution to enable the sustainability of crop production. Sustainability goes beyond three precise steps within the food supply chain: i) development of food systems; ii) reduction of food loss and waste (FLW); and iii) global dietary change toward plantbased diets (Garcia-Oliveira et al., 2022). The holistic approach starts from considering the trade-off between food security and nutrition; livelihoods; environmental sustainability, novel technology.The proposed approach meets the targets of the Sustainable Development Goals (SDGs) -in particular SDG 2, which aims to create a world free of hunger by 2030.Again, the integration of socioeconomic developments and climatic crisis within the context of global change and worst the need to prompt policymakers and stakeholders to consider these insights to inform future assessments and policymaking efforts. Adaptation to climate change of agroecosystems requires holistic actions and the shift from punctual responses to an integrated approach but on the same scale.Some proposals in this direction are related to technical interventions, for example, from genomic and phenotypic characterization to obtain seed varieties that were more resistant to drought and high temperatures, varieties with adapted growth cycles, modifications on the use of agricultural amendments, and optimization of precision irrigation methods (Miroń et al., 2023). In view of a holistic approach resource savings technologies should be considered as mitigating technology toward the achievement of increased sustainability (Ermakova et al., 2021). Precision agriculture technologies have the potential to play a key role in the implementation of Climate Smart Agriculture by aiding farmers to tailor farm inputs and management conditions (Toriyama, 2020).Several key technologies are already in use in agriculture to improve sustainability and resource use efficiency as for example variable rate application that allowed for a strong reduction in N 2 O usage up to 34% (Mamo et al., 2003;Kanter et al., 2019). Irrigation, as the use of special multilayer soil structures (fertile layer/hydro accumulating layer/sand), secondary water for irrigation, and desalination of salt water, using reverse osmosis or evaporation, embracing the concept of circular economy as part of the global solution (Myrzabaeva et al., 2017;Martinez-Alvarez et al., 2020;Gao et al., 2022).But how to mitigate climate change from a circularity perspective has become a trending topic (Romero-Perdomo et al., 2022) more than a search for pragmatic solutions. 
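To make the link between sensing and the precision irrigation methods mentioned above more concrete, the sketch below shows a minimal threshold-based irrigation trigger driven by soil-moisture readings. The sensor values, field capacity and refill point are hypothetical, and real scheduling systems would typically also account for weather forecasts and crop stage.

```python
# Minimal sketch of a sensor-driven irrigation trigger (assumed values throughout).
FIELD_CAPACITY = 0.32   # volumetric water content at field capacity (assumed)
REFILL_POINT = 0.18     # trigger threshold below which irrigation starts (assumed)

def irrigation_depth_mm(moisture, root_zone_mm=300):
    """Return the irrigation depth (mm) needed to bring the root zone back to
    field capacity, or 0 if soil moisture is still above the refill point."""
    if moisture > REFILL_POINT:
        return 0.0
    deficit = FIELD_CAPACITY - moisture          # m3 of water per m3 of soil
    return round(deficit * root_zone_mm, 1)      # mm of water over the root zone

# Hypothetical readings from three in-field probes
for probe, reading in {"probe_A": 0.26, "probe_B": 0.16, "probe_C": 0.21}.items():
    print(probe, "->", irrigation_depth_mm(reading), "mm")
```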
In this frame, novel sensor-based technologies, such as remote, proximal and in vivo sensors and sensor platforms, can significantly enhance irrigation efficiency and produce water savings (Janni et al., 2019; Segarra et al., 2020; Tavan et al., 2021; Kim and Lee, 2022), becoming more familiar in everyday farm management. Finally, and somewhat ironically, the omics approach has generated data which emphasize epigenetics, the broad term used to describe all causes of variation which cannot be explained by classical genetics. Transposons, non-coding RNAs, chromatin regulation and chemical modification are among these. One point of considerable interest is the role of non-coding RNAs such as microRNAs (miRNAs) in modulating plant responses to several abiotic stresses including HS (Pagano et al., 2021), and the fact that these miRNAs are part of the innate reaction to this stress, the "plant immune system". This work is aimed at opening new perspectives for dissemination and at giving new ideas on how to mitigate the dramatic effects of climate change. Overall, the holistic approach targets several areas of interest to public research institutions, policy makers, food producers and farmers, the broad public, and consumers. Omics, in this vision, represents a first step on the road towards sustainability in agriculture (Braun et al., 2023; Gil, 2023). This work considers all aspects of food production, highlighting the strengths and weaknesses of the current approaches. TABLE 1 Recent reviews and articles focused mainly on heat stress and its effects on crop yield and the main components of defense responses. TABLE 1 (Continued) Few reviews tackle global warming and climate change's effects on agriculture.
2024-01-07T16:21:05.555Z
2024-01-05T00:00:00.000
{ "year": 2024, "sha1": "dd3be42de578917306fbe43077a65b9cdac5e838", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2023.1297569/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a38d4f42186a97e6cf62bf9ffe6b9454bd167dc9", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
219478717
pes2o/s2orc
v3-fos-license
Analysis of Key Security Technologies for Power Dispatch Control System of Power Dispatch Support System The scale of China's power grid is continuously expanding, the amount of collected data is increasing rapidly, and data processing and analysis services are developing towards clustering. Therefore, task management in a stand-alone mode currently faces severe challenges in ensuring the high efficiency and reliability of distributed tasks. Information security risks in cyberspace can pose fatal threats to smart grid entities through the destruction of grid dispatching control systems and communication networks. Security check is one of the application functions of the intelligent power dispatching control system. It is an important security defense line to ensure the stable operation of the power grid and provides security check services for the power grid dispatching plan and power grid operation. The existing operation and maintenance management mode of the power grid dispatching control system is distributed, and the data of each system and unit are not effectively integrated and connected, lacking overall consideration. Based on the development history of power grid dispatching automation, this paper comprehensively analyzes the current status of the overall structure of the power grid dispatching system and summarizes the key technology innovations and application effects. Introduction In the new era of rapid and extensive development of China's UHV power grid, in order to effectively promote the safe and stable operation of China's power grid system, effective measures should be put forward to adjust the structure and current flow of the grid [1]. Information security risks in cyberspace can pose fatal threats to smart grid entities through the destruction of grid dispatch control systems and communication networks. With the continuous expansion of the scale of the interconnected power grid, the electrical connections of the entire network are getting closer, the cross-section coupling relationships are more complicated, the shape and characteristics of the power grid are facing profound changes, and the levels of security and stability are mutually constrained [2]. The concept of cyber security has evolved from traditional information system security protection to cyberspace confrontation. In fact, the network has become the fifth combat space after land, sea, sky and space. On the basis of integrated development, the regional line network constructed in China has clearly increased demand and exhibited differentiated usage characteristics. However, in actual construction, information and scale still do not fit the management model [3]. For distributed tasks, current single-machine task management cannot manage such tasks across nodes; that is, tasks cannot be automatically deployed to multiple nodes, and computing performance cannot be improved by making full use of the resources of all nodes [4]. Security check is one of the application functions of the smart grid dispatching and control system. It is an important security defense line to ensure the stable operation of the power grid and provides security check services for the power grid dispatching plan and power grid operation [5].
Each specialized system of the dispatching control center has the application requirement of safety check, and the separate construction of safety check function in each specialized system will lead to waste of resources and difficulty in coordination [6]. Dispatching agencies at or above the provincial level shall set up automatic departments to undertake the operation, construction, maintenance and related technical management of the system [7]. The State Grid Corporation of China has carried out the construction and application of the smart grid dispatching and control system pilot project in dispatching and control centers above the provincial level. It has completed and put into operation the largest and most powerful grid dispatching and control system in the world, ensuring the safe operation of large power grids [8]. The operation and maintenance management mode of the existing power grid dispatching and control system is distributed and not centralized enough, and the data of each system and unit are not effectively integrated and linked, thus lacking overall consideration [9]. This paper analyzes the service bus and parallel computing service based on smart grid dispatching and control system to realize service-oriented security check service of power grid, which provides customizable and multi-task parallel security service for power grid service. So as to ensure the safety and reliability of power grid operation. Security Check Service Based on Service Bus With the improvement of automation level of power monitoring system, the enrichment of functions, the extension of coverage of dispatching data network and the increase of users, the sources of information security threats in power monitoring system are becoming more diversified. The power flow corresponding to the check section is formed for the maintenance plan of the power grid system function and the operation of the power grid, so that the safety check can be carried out for the faults and problems occurring in the operation of the power grid [10]. The information between the security check service process and each port is implemented interactively by means of interface functions, which can effectively meet the requirements of different application functions for the query and location services of the security check service. After the safety check of the system is completed, auxiliary decision-making and margin assessment calculations need to be carried out. This can effectively analyze the problems in the safety and stability of the power grid in the dispatch plan and operation, and propose correct judgments. The smart grid dispatch control system is gradually put into practical operation at all levels of dispatch agencies. In order to enable the smart grid dispatch control system to play a better supporting role in the dispatch business system, research and application of the smart grid dispatch control system operation and maintenance plan and key technologies are required. The grid system requires electrical energy to be balanced at all times, so the construction of smart grids is an inevitable choice for the current development of the power industry. Parallel Computation of Dynamically Assigned Tasks. Security check service uses grid dispatching control system to realize parallel computing service, and establishes effective connection between standard interface and cluster computing resources, thus realizing interaction between information resources. 
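The interface-function mechanism described above can be pictured as a thin service layer: application functions submit check requests through a standard interface and later query results, without knowing which cluster node performs the computation. The sketch below is a hypothetical illustration of such an interface; the class and method names are invented and do not correspond to any actual dispatching-platform API.

```python
import queue
import uuid

class SecurityCheckBus:
    """Hypothetical sketch of a service-bus style interface: applications
    submit power-flow security-check requests and later query results by id;
    a worker pool (not shown) would drain the queue on cluster nodes."""

    def __init__(self):
        self.pending = queue.Queue()
        self.results = {}

    def submit(self, section_id, planned_flows):
        """Register a check request and return a request id for later queries."""
        request_id = str(uuid.uuid4())
        self.pending.put((request_id, section_id, planned_flows))
        return request_id

    def post_result(self, request_id, overloaded_branches):
        """Called by a compute worker once the check has finished."""
        self.results[request_id] = overloaded_branches

    def query(self, request_id):
        """Return the check result, or None if the computation is still running."""
        return self.results.get(request_id)

# Example usage with made-up data
bus = SecurityCheckBus()
rid = bus.submit("section-12", {"line_1": 410.0, "line_2": 365.5})
bus.post_result(rid, [])          # worker reports no overloaded branches
print(bus.query(rid))             # -> []
```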
In the process of developing and researching the power grid dispatching control system, China has carried out brand-new reforms on the regulation of operating costs of various power grids and on the realization of energy conservation and emission reduction. Security check calls the parallel computing service of the smart grid dispatching control system and realizes interaction with cluster computing resources through standard interfaces. By receiving the optimized real-time data and equipment alarm information of the power grid and using tools such as remote browsing, reliable operation monitoring of the system can be realized and the safe and stable operation of the system can be guaranteed [11]. The database of the operation and maintenance service support platform adopts a combination of a real-time database and a relational database: the real-time database meets the requirement of fast real-time data access, and the two databases are organically combined [12]. The parallel computing service supports pre-allocation and dynamic allocation. Because the calculation amount of security check changes dynamically according to the application requirements, dynamic allocation is adopted. After the calculation request is received by the service port, the security check service terminal selects the calculation method according to the specific calculation content and evaluates the accuracy of the calculation results [13]. The scanning speed modulation architecture of the power prediction model is shown in Figure 1. Figure 1 Scanning speed modulation architecture of the power prediction model Requirements of Distributed Task Management in Power Grid Dispatching Control System The connection between the electrical systems of the whole network is continuously strengthened, and the coupling relationships between sections are becoming more complicated. The cross-security and stability of different levels of power grids are also receiving increasing attention, and their security problems are becoming increasingly prominent. Distributed task management is responsible for managing the entire life cycle of a job, from deployment and startup through running to exit. At present, single-machine task management can only manage the running status of task processes on its own node and cannot manage all tasks subordinate to a job across nodes. Distributed task management for the power grid dispatching control system should be able to automatically deploy tasks to each node according to the system's resource usage and ensure that tasks are evenly distributed according to node load. China's power grid has always adhered to the principles and standards of unified dispatching and hierarchical management during its operation. City and county-level power grids are responsible for the management and supervision of planning and safety checks in their regions [14]. The security check service system of the power dispatching system puts forward a unified model and a joint check scheme, which effectively ensures the safe and stable operation of the power grid. In addition, the implementation of the multi-level auxiliary dispatching plan plays a key role. In order to make full use of computing resources and improve the computing performance of tasks, distributed task management should monitor the resource usage of nodes in real time.
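A minimal sketch of the dynamic allocation idea described above: calculation sub-tasks are handed to whichever cluster node currently reports the lowest load, so the amount of work assigned to each node tracks the changing size of the security-check request. Node names and load figures are invented for illustration.

```python
import heapq

def allocate_tasks(node_loads, tasks):
    """Greedy dynamic allocation: always place the next sub-task on the node
    with the lowest accumulated load. node_loads maps node -> current load;
    tasks is a list of (task_id, estimated_cost)."""
    heap = [(load, node) for node, load in node_loads.items()]
    heapq.heapify(heap)
    assignment = {}
    for task_id, cost in tasks:
        load, node = heapq.heappop(heap)           # least-loaded node
        assignment[task_id] = node
        heapq.heappush(heap, (load + cost, node))  # update its load
    return assignment

# Hypothetical cluster state and security-check sub-tasks (fault scans per section)
nodes = {"node-1": 0.2, "node-2": 0.6, "node-3": 0.1}
subtasks = [("section-A", 0.3), ("section-B", 0.5), ("section-C", 0.2), ("section-D", 0.4)]
print(allocate_tasks(nodes, subtasks))
```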
The dynamic part of the unified information model mainly stores a message format template for the data collected in real time by sensor nodes; the real-time data are parsed and updated according to this template to form data consistent with the storage format of the data service system. Table 1 lists the test preparation data for the delivery accuracy of screened power dispatching data, the delivery speed of power dispatching data, and the safety level detection of power dispatching data. Since the power station does not need to consume fuel, the power company should dispatch all of its output first [15]. The goal of dynamic economic dispatch of a power system that includes such a station is to minimize the generation cost of the conventional generating units; the objective function is given in Equation (2). When operating cost is not considered, and starting from the original unit commitment, the randomness of wind power makes the generation cost of conventional units a random variable, so the objective becomes the minimum expected value of the generation cost, Equation (3). In the two non-crossing guard rings of Figure 2 (non-intersecting rings), a "2" on a link indicates that the link consists of two fibers running in opposite directions. When the working path of a service with bandwidth W passes through a link, working link A or B belongs to only one ring network, and only that ring protects the link of the service delivery node; if the service working link is A, ring network 1a provides the protection capacity W and ring network 1b provides none. In calculating the power flow for the dispatching plan, the power demand on inter-regional tie lines and inter-provincial cross-sections must be kept within the planned values; this prevents the internal plans formulated by different regional and provincial grids from pushing power exchanges out of balance. In distributed task management, recovery of faulty tasks across nodes is achieved with one-master/one-standby redundancy: a backup task is configured on another node for each main task, and when the main task fails the backup is immediately promoted to main [16]. A distributed task management system must also monitor the running status of every node in real time; when a newly operational node is detected, tasks on heavily loaded nodes should be migrated to it, making full use of idle resources and improving operating efficiency. A sketch of this failover and migration logic is given below. Reactive power compensation measures should be adjusted in time according to the node conditions reported for the whole dispatching and control system, so that the voltage at each hub node stays within the range specified by the plan. With the continued application of modern science and technology, the safety check service is developing and improving and is being promoted and applied in dispatching control centers at all levels. To ensure the reliability of the task management node itself, multiple backup machines are usually configured for it [17]; when the host fails, a backup machine immediately takes over task management.
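The following is a minimal sketch of the one-master/one-standby promotion and the load-triggered migration described above, using a toy in-memory representation of nodes; it is illustrative only and does not reflect the actual implementation of any dispatching control system.

```python
class TaskManager:
    """Minimal primary/backup failover for one managed task (illustrative only)."""

    def __init__(self, primary_node, backup_node):
        self.primary = primary_node
        self.backup = backup_node

    def heartbeat_ok(self, node):
        # A real system would query the node over the network; here we read a stubbed flag.
        return node.get("alive", False)

    def supervise(self):
        if not self.heartbeat_ok(self.primary):
            # Promote the backup so the task keeps running without manual intervention.
            print(f"primary {self.primary['name']} failed; promoting {self.backup['name']}")
            self.primary, self.backup = self.backup, self.primary

def migrate_if_new_node(nodes, new_node, load_threshold=0.8):
    """Move one task from an overloaded node to a newly available node."""
    for node in nodes:
        if node["cpu_load"] > load_threshold and node["tasks"]:
            task = node["tasks"].pop()
            new_node["tasks"].append(task)
            print(f"migrated {task} from {node['name']} to {new_node['name']}")
            break

if __name__ == "__main__":
    a = {"name": "mgr-a", "alive": False}
    b = {"name": "mgr-b", "alive": True}
    TaskManager(a, b).supervise()
    busy = {"name": "calc-01", "cpu_load": 0.92, "tasks": ["check-7"]}
    idle = {"name": "calc-04", "cpu_load": 0.05, "tasks": []}
    migrate_if_new_node([busy], idle)
```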
Summary

According to the latest research results at home and abroad, the power grid dispatching and control system should continuously improve its safety check service in future development to ensure that power dispatching plans are implemented safely and effectively. With the rapid growth of new energy generation, its randomness and volatility pose challenges to grid dispatching and operation, and the accuracy of planned power flows and safety checks in areas with surplus new energy resources has been significantly affected. The operating mode of the power network and the actual distribution of power within it are key inputs to grid dispatching and control technology and to grid operation. In the future, the development of China's smart grid dispatching and control technology will focus mainly on the technical optimization of multi-period, multi-level short-term power markets, self-description of system operating modes, and dynamic analytical capabilities. Uncertainty analysis should be incorporated into the grid security check service so that it can keep pace with the rapid development of new energy sources, thereby helping to achieve safe and stable operation of the entire power grid system in China. Only by fully understanding the current state of intelligent grid control and dispatching systems can new grid dispatching technologies be developed effectively.
Potential Drug-Drug and Drug-Disease interactions of selected experimental therapies used in treating COVID-19 patients

At the end of 2019, the world witnessed the emergence in Wuhan, China, of a new member of the coronavirus family, Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). Since then, the 2019 novel coronavirus disease (COVID-19) has rapidly spread to every corner of the world. Before the end of September 2020, nearly 32 million cases were recorded worldwide, with a death toll of approximately 1 million. As COVID-19 has spread, certain groups of people have proved more susceptible than others: elderly patients and people with chronic medical conditions such as heart disease or diabetes are more likely to develop severe disease. As a population, senior citizens take more medicines than young people, and people with chronic illnesses are often taking several drugs to control their conditions. This poses a significant question in the management of COVID-19 cases: can a standard drug regimen be safely paired with one or more experimental drugs? Some of the most widely prescribed medicines, including antibacterial drugs, antifungals, heart-related medications, neuroleptics, contraceptives, and sedatives, can have extensive and sometimes severe interactions with experimental COVID-19 therapies. To reduce the morbidity and mortality associated with COVID-19, this issue needs to be addressed in detail. This review covers the key points related to drug-drug and drug-disease interactions in patients with COVID-19. To help health care providers locate the answers they need in the shortest possible time, the information in this review has been organized into easy-to-read tables.

Introduction

A drug interaction can be described as the combination of two or more drugs such that the potency, or even the efficacy, of one drug is substantially altered by the presence of another. Adverse drug interactions (ADIs) are well-documented causes of increased patient morbidity, rising medical costs, and malpractice complaints. 1 Drug interactions are generally understood to include the effect(s) of one drug on the disposition of, and/or the response to, another. Such associations are usually classified as pharmacokinetic, in which one drug alters the absorption, distribution, metabolism, or elimination (ADME) of another, or pharmacodynamic, in which one drug affects the response to another apart from any pharmacokinetic effect. 2 The effect of a patient's condition on the disposition of, and response to, a medication is of equal significance. 3 Many medications adversely interact with a variety of diseases, and vice versa (i.e., drug-disease interactions), yet this point has scarcely been addressed. Therefore, in addition to examining interactions between medications, this review also discusses interactions between the disease(s) and the experimental drugs used in COVID-19. A growing number of studies suggest that there is significant potential for ADIs to occur in patients with COVID-19. 4 Moreover, since the start of the COVID-19 pandemic, numerous studies and clinical trials have continuously suggested the use of an ever-increasing number of potential and adjuvant drugs. 5
Therefore, it is necessary to provide healthcare providers with a comprehensive resource that contains all the possible drug-drug and drug-disease interactions in patients treated for COVID-19. Currently, more than 200 thousand people are being infected with COVID-19 each day worldwide. 6 This extremely high number of cases is a huge burden on healthcare personnel, who may not have sufficient time to go through all the relevant articles published about the safety of medications used in patients with COVID-19. That is why, in this review, all the information related to drug-drug interactions and drug-disease interactions has been organized into easy-to-read comprehensive tables, shown below as Tables 1 and 2.

- Azithromycin prolongs the QT interval, which raises the risk of cardiac arrhythmia and torsades de pointes.
- Digoxin and diltiazem can prolong the PR interval, and azithromycin has been shown to prolong the QT interval.
- Azithromycin may potentiate the effects of oral anticoagulants. Clinical monitoring, and likely serum digoxin levels, are recommended during azithromycin therapy and after it is discontinued.
- Chloroquine enhances the pharmacodynamic action of oral hypoglycemic drugs and increases the risk of hypoglycemia.
- Chloroquine is a moderate inhibitor of CYP2D6. It could therefore raise the serum concentrations of risperidone, metoprolol, aripiprazole, iloperidone, haloperidol, tricyclic antidepressants, fluoxetine, and paroxetine. Conversely, chloroquine will reduce the serum level of prodrugs that depend on CYP2D6 for their activation, for instance tramadol and codeine.
- Chloroquine is an inhibitor of the transport protein P-glycoprotein (P-gp) and is therefore expected to raise the serum level of cyclosporine.
- Hydroxychloroquine is an inhibitor of P-gp and therefore increases the serum level of substrates of this cellular pump (such as cyclosporine and digoxin).
- Hydroxychloroquine increases the risk of a prolonged QT interval in patients with COVID-19 who are also using azithromycin.
- Hydroxychloroquine is a moderate inhibitor of CYP2D6. It could therefore raise the serum concentrations of risperidone, metoprolol, aripiprazole, iloperidone, haloperidol, tricyclic antidepressants, fluoxetine, and paroxetine. Conversely, it will reduce the serum level of prodrugs that depend on CYP2D6 for their activation, for instance tramadol and codeine.
- The risk of peripheral neuropathy may be increased if hydroxychloroquine is used concurrently with tocilizumab. QT monitoring may be required.
- Coadministration of theophylline and favipiravir increases favipiravir Cmax and AUC.
- Kaletra accelerates the metabolism of warfarin and therefore reduces its pharmacological action.
- Kaletra prolongs the QT interval, thereby raising the risk of cardiac arrhythmia. Because of the potential for severe adverse reactions such as arrhythmia, co-administration of Kaletra with amiodarone, lidocaine, bepridil, or quinidine should be avoided.
- Because of the high risk of severe adverse reactions such as rhabdomyolysis, co-administration of simvastatin and Kaletra should be avoided.

There is a strong possibility of multiple drug-drug interactions occurring, as CYP2D6 and CYP3A4 are responsible for the vast majority of drug metabolism.
To prevent further complications, other antiviral medications such as remdesivir or favipiravir would be better alternatives for patients currently using prasugrel, clopidogrel, or ticagrelor. ECG monitoring is recommended. If treatment with an HMG-CoA reductase inhibitor is indicated, the safest alternative is pravastatin, or a lower dose of the statin can be used to avoid serious side effects.

- Chloroquine or hydroxychloroquine can diminish remdesivir's antiviral activity; co-administration of these medicines is therefore not recommended. No clinical drug-drug interaction studies have been performed for remdesivir.
- Azithromycin raises the risk of prolonged cardiac repolarization and QT interval in patients with a history of torsades de pointes, QT prolongation, bradyarrhythmia, or congenital long QT syndrome, in patients with uncorrected hypomagnesemia or hypokalemia, and in patients using another drug that prolongs the QT interval.

Immuno-modulators

- In general, the use of macrolide antibiotics has been reported to worsen symptoms of myasthenia gravis. A stool test for C. difficile toxin and stool cultures for C. difficile can be diagnostically useful. ECG monitoring of patients during therapy is suggested. If signs and symptoms of hepatitis occur, azithromycin should be stopped immediately.
- The use of chloroquine may exacerbate the medical condition of patients with porphyria.
- Chloroquine raises the risk of prolonged cardiac repolarization and QT interval in patients with a history of torsades de pointes, QT prolongation, bradyarrhythmia, or congenital long QT syndrome, in patients with uncorrected hypomagnesemia or hypokalemia, and in patients using another drug that prolongs the QT interval.
- Chloroquine may provoke epileptic seizures in prone individuals; patients with a low seizure threshold or epilepsy may therefore be at greater risk.
- Chloroquine may provoke acute renal failure and hemolysis in patients with glucose-6-phosphate dehydrogenase (G6PD) deficiency.
- The use of chloroquine may incite a severe attack of psoriasis.
- Lopinavir/ritonavir is known to be hepatotoxic; Kaletra is therefore better avoided in patients with hepatic impairment.
- Patients with hemophilia are at increased risk of bleeding when given lopinavir/ritonavir.
- Lopinavir/ritonavir has been reported to elevate blood glucose and should therefore be used with caution in patients with diabetes mellitus.
- Second- and third-degree atrioventricular (AV) block have been reported with the use of ritonavir. In patients with pre-existing conduction irregularities, underlying heart disease, ischemic heart disease, or cardiomyopathy, Kaletra should therefore be prescribed cautiously, because such patients are at greater risk of developing cardiac conduction abnormalities.

Remdesivir (Veklury®) 107,108

- The use of remdesivir has been associated with transaminase elevations in patients with COVID-19 and in healthy volunteers; remdesivir should therefore be used with caution in patients with hepatic impairment. Hepatic laboratory testing is essential at baseline and daily during remdesivir administration. Stop remdesivir if the alanine aminotransferase (ALT) level rises above 5 times the upper limit of normal (ULN).

Immuno-modulators

Anakinra (Kineret®) 109-111

- Anakinra impedes the immune response. Anakinra should therefore not be given to patients with active infections or to those who acquire severe infections after its administration.
- Anakinra is mainly excreted by the kidneys and should therefore be used with vigilance in patients with renal dysfunction to prevent toxic reactions.
- Anakinra should be used with vigilance in patients with hepatic disease. Patients with severe renal dysfunction or end-stage renal disease should receive anakinra every other day. Monitoring of renal function is recommended.
- Malabsorption syndromes reduce the amount of zinc absorbed; larger dosages may therefore be needed when zinc is given orally.

Discussion

Comorbid patients need several pharmacological treatments, which in turn may lead to issues that physicians are expected to handle rapidly by recognizing potential drug-drug interactions, in order to prevent diminished efficacy or an increased burden of adverse events. 142 Put simply, the question of whether concurrent pharmacological therapies compromise patient safety is typically answered in a context that recognizes the treatment choices for each particular disease, enabling reasonable handling of interactions based on reliable clinical evidence. 143 However, for the comorbid conditions occurring in COVID-19 patients, healthcare professionals must now consider the hard question of whether interactions are possible between COVID-19 pharmacological treatments, which are not yet well defined, and various other therapeutic agents. 144 Moreover, while waiting for the results of more than 300 ongoing clinical trials aimed at identifying successful treatments against the COVID-19 virus, how drugs used in COVID-19 patients (e.g., various antiviral agents, azithromycin, hydroxychloroquine, and monoclonal antibodies) disturb the pharmacodynamics and pharmacokinetics of other drugs, and vice versa, remains a topic of investigation. 4,145 Focus is therefore placed on the interactions between the medications most widely used for COVID-19 and various classes of medications (Table 1) and on the most important drug-disease interactions (Table 2). There is a wide range of potential interactions with hepatic metabolism systems such as the cytochromes P450 (CYPs), as most of the existing antiviral medications used in COVID-19 infection are expected to affect various CYP450 isozymes; dose adjustments may therefore be needed. Some of the most challenging drug-drug interactions are between investigational COVID-19 medicines and cardiovascular medicines, including anti-arrhythmics, beta-blockers, calcium channel blockers, anticoagulants, and lipid-lowering statins. Antibacterial medications are another significant class; many have a defined effect on the QT interval, and others may alter the level of a COVID-19 drug, a co-medication, or both in the body. 146
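Purely as an illustration of how interaction pairs of the kind tabulated in this review could be encoded for automated screening, a short sketch follows; the data structure and function are hypothetical, far from clinically complete, and not part of any described system.

```python
# Hypothetical screening helper: a few interaction pairs taken from the review text.
QT_PROLONGING = {"azithromycin", "hydroxychloroquine", "chloroquine", "lopinavir/ritonavir"}
INTERACTION_PAIRS = {
    frozenset({"lopinavir/ritonavir", "simvastatin"}): "risk of rhabdomyolysis",
    frozenset({"remdesivir", "hydroxychloroquine"}): "reduced antiviral activity",
    frozenset({"favipiravir", "theophylline"}): "increased favipiravir Cmax/AUC",
}

def screen(medications):
    """Return warnings for known pairs and for combinations of QT-prolonging drugs."""
    meds = {m.lower() for m in medications}
    warnings = [f"{' + '.join(sorted(pair))}: {note}"
                for pair, note in INTERACTION_PAIRS.items() if pair <= meds]
    qt = meds & QT_PROLONGING
    if len(qt) > 1:
        warnings.append(f"multiple QT-prolonging drugs: {', '.join(sorted(qt))}")
    return warnings

print(screen(["Azithromycin", "Hydroxychloroquine", "Simvastatin"]))
```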
Doses for X‐ray and electron diffraction: New features in RADDOSE‐3D including intensity decay models Abstract New features in the dose estimation program RADDOSE‐3D are summarised. They include the facility to enter a diffraction intensity decay model which modifies the “Diffraction Weighted Dose” output from a “Fluence Weighted Dose” to a “Diffraction‐Decay Weighted Dose”, a description of RADDOSE‐ED for use in electron diffraction experiments, where dose is historically quoted in electrons/Å2 rather than in gray (Gy), and finally the development of a RADDOSE‐3D GUI, enabling easy access to all the options available in the program. | INTRODUCTION Structural biology has until recently relied on X-ray crystallography to provide much of the three-dimensional information on proteins and other macromolecules that inform biological function.Recently, single-particle cryogenic electron microscopy and electron diffraction techniques have advanced to the point where results from them are also giving new and exciting contributions to our knowledge.However, in all of these experimental methods, the samples suffer from radiation damage (RD) inflicted by the incident X-rays/electrons, and this RD remains one of the major bottlenecks to accurate structure determination.RD in macromolecular crystallography (MX) has been characterised over the last 60 years (for a recent review see Garman & Weik, 2023) and manifests in both reciprocal space and in real space.In reciprocal space, there is fading of the diffracted signal, starting with the highest resolution reflections and gradually extending inwards to lower resolution as irradiation continues.Finding an appropriate model for this intensity decay has proved challenging and this issue is addressed in more detail below.Diffraction fading ultimately affects the biological detail that can be gleaned from the structure, so it has become a mainstream concern for MX.Concomitant with the reflection intensity decrease, for cryo-cooled crystals at a synchrotron, the unit cell volume is seen to expand, the scaling Bfactors increase linearly with exposure, the internal agreement quality indicators for the dataset become worse (e.g., higher R merge values), and the mosaicity often increases.In real space, atomic B-factors become larger, and specific structural damage to particular moieties is observed in a reproducible order; for example, reduction of metal ions and disulphide bond scission (also observed at room temperature) occur before decarboxylation of aspartate and glutamate residues. 
In MX, the primary metric against which the rates of damage have been monitored is the absorbed dose, defined as the absorbed energy (J) per mass (kg) in units of gray (Gy = J/kg).The dose in an experiment cannot be measured, it can only be estimated from the properties of the beam (incident beam flux density, beam profile, and energy) and the sample (atomic composition, dimensions of the crystal) so that the absorption coefficients can be calculated.To enable experimenters to more easily estimate dose, we have written and freely distributed an open-source software program called RADDOSE-3D (Bury et al., 2018;Zeldin, Gerstel, & Garman, 2013) which allows time-and space-resolved modelling of dose.Due largely to the initial object-orientated modular architecture of the code, we have been able to continually develop and improve it for the last 11 years.In RADDOSE-3D, an experiment is represented by three objects, the "Crystal", "Beam", and "Wedge" blocks.By defining these objects in the program input, RADDOSE-3D can "simulate" the experiment and estimate the absorbed dose within the sample (Figure 1).A full description of progress from the first release in 2013 until 2018 was given by Bury et al. (2018).Other papers since then have detailed various extensions to the code and RADDOSE-3D can now be used to estimate the absorbed dose for a wide range of structural biology modalities.Specifically, modifications to the code have been implemented which allow dose estimations for small angle X-ray scattering (SAXS) investigations (Brooks-Bartlett, 2016) and for small molecule crystallography experiments (Christensen et al., 2019), with a subsequent improvement to include the energy carried away from the sample by fluorescent photons (Fernando et al., 2021).A more sophisticated treatment of photoelectron escape (which reduces the absorbed energy and thus lowers the dose) has been implemented to cater for the increased use of microbeams and microcrystals, and now includes an option for Monte Carlo simulations to provide more accurate calculations (Dickerson & Garman, 2021).We have also released RADDOSE-XFEL (Dickerson et al., 2020) which can provide estimates of the dose absorbed during very short X-ray pulse (fs) experiments at X-ray free electron lasers (XFELs) by tracking the time taken for the various energy loss processes. In this article, we will summarise the unpublished capabilities recently implemented in RADDOSE-3D.We divide the descriptions of new work into three main sections below: (1) the option to specify an intensity decay model, (2) a description of RADDOSE-ED for use in microcrystal electron diffraction (MicroED) studies where traditionally the effects of RD have been monitored against fluence (electrons/area, e À /Å 2 ) rather than against dose in gray, and (3) an introduction to a new graphical user interface (GUI) that gives researchers all the current capabilities in a user-friendly form. 
However, to ensure clarity between the different fields involved in the descriptions below, here we provide unambiguous definitions of certain key terms.Fluence is defined as photons (or electrons) per unit area (ph/mm 2 or e À /Å 2 ), respectively, flux is photons (or electrons) per second, and flux density is photons (or electrons) per unit area per second.The information coefficient (or "diffraction efficiency" for modalities involving diffraction), is defined as the signal intensity per MGy of absorbed dose.It should be noted that cryoEM papers in the literature frequently discuss quantities in terms of "per unit damage," which often refers to global damage measured by an increase in overall B-factor.In MX papers, I=I 0 (where I is the total intensity of a dataset or section of data and I 0 is the intensity of the same sweep of data extrapolated to zero dose) is used as a unit of global damage and B net is a new unit of specific damage (Shelley & Garman, 2022). | INTENSITY DECAY MODELS IN RADDOSE-3D Intensity decay models (IDMs) describe the decrease in the intensity of diffracted X-rays as the absorbed dose increases.Table 1 shows commonly used or recently proposed IDMs.The associated parameters are either purely empirical or have some physical justification (for further discussion of the general form of IDMs, see Section 2.2.1).In the first part of this section, we will demonstrate the implementation of a previously published IDM (that of Leal et al. (2013)) into RADDOSE-3D and show how dose estimates by this implementation can explain the progression of specific and global RD in diffraction data.In the second part, we place this IDM in a broader context, show how we estimated its parameter values through fitting to diffraction data, and analyse its physical basis in order to motivate further work on how the parameter values for each crystal might be predicted from physical principles.Finally, we discuss the significance of this F I G U R E 1 Overview of how RADDOSE-3D is structured to take inputs describing the crystal (or solution sample for SAXS), beam, and exposure, and then to output a range of metrics relating to the diffraction efficiency and the dose.Required inputs are in bold, but see the RADDOSE-3D documentation for the exact implementation and structure of the input file.A more detailed discussion concerning the interpretation of different dose metrics is included in Section 2, which describes the implementation of intensity decay models in RADDOSE-3D.In the figure, D 1 and D 2 refer to the absorbed dose at two different positions in a crystal, see Section 2 for more details.Section 3 describes RADDOSE-ED, which together with Dickerson et al. (2020), implements processes involving the interaction of electrons with atoms. T A B L E 1 Intensity decay models. Form of relative intensity decay with dose Fitted parameters Resolution dependence of dose-dependent decay None This is the IDM used in the original implementation of DWD as a fluence-weighted dose (Zeldin, Brockhauser, et al., 2013). N/A Linear (Owen et al., 2006) D1 2 is the half-dose, experimentally measured at 43 MGy in Owen et al. (2006) for intensities in the resolution range 50-2.4Å D1 2 varies with resolution shell, see discussion in Owen et al. (2006), but no explicit relationship incorporated into model. Standard dose-response (Owen et al., 2014) I ∞ is the lower asymptote (i.e., the final diffracting power).log x 0 ð Þ is the decay curve midpoint, p is the Hill slope. 
The values of the midpoint and Hill coefficient may depend on the resolution. Four-state kinetic (Sygusch & Allaire, 1988) Þis the relative intensity for a small region of the crystal as used in Equation (3).F native is the contribution of the undamaged fraction of the crystal that decreases linearly with dose.F perturbed is the contribution of a fraction only slightly perturbed by damage (e.g., by site-specific damage or only a few ionisation events per unit cell) such that the scattering power is still similar to the native state.F disordered is the contribution of a fraction of the crystal that has been significantly disordered but is still capable of contributing to diffraction.The fractions evolve according to a sequential kinetic scheme Native !Perturbed !Disordered !Amorphous with rate constants (with respect to dose) that are fitted empirically.The amorphous fraction does not contribute to diffraction and thus does not appear in the equation. The resolutiondependence of intensity decay is captured in the F disordered term. Exponential decay (Holton, 2009;Holton & Frankel, 2010) H is the Howell criterion (units MGy Å À1 ), derived from meta-data for a range of experimental measurements, of 10 MGyÅ À1 for cryo-temperature experiments in Howells et al. (2009) based on data in resolution range 100-1 Å. B 0 is the Wilson B-factor at zero dose. | Implementing an IDM in RADDOSE-3D In this part, we first describe how, through the incorporation of the IDM proposed by Leal et al. (2013), the diffraction-weighted dose (DWD) metric of RADDOSE-3D has been modified from a fluence-weighted dose (FWD) to a diffraction-decay weighted dose (DDWD).We then show how the DDWD estimated by RADDOSE-3D can explain the extent of RD in electron density maps, using the dataset collected by de la Mora et al. (2020) as an example.Finally, we discuss how DDWD compares to other dose metrics that are output by RADDOSE-3D. | Diffraction-weighted dose in RADDOSE-3D Diffraction-weighted dose (DWD), as first implemented by Zeldin et al. (2013), weighted the cumulative dose to each part of the crystal by the incident fluence.Here we will refer to this as the fluence-weighted dose (FWD): where t is time, such that t iÀ1 !t i describes the time of the exposure, D x !This is an advantageous metric compared to the total average dose across the whole crystal (weighting all voxels equally), the maximum dose (weighting the voxel with the highest dose as one, and all other voxels as zero) or the average dose in the exposed region (defining an exposed region by an incident intensity threshold and weighting all voxels in this region equally).This is because voxels that are irradiated by more intense regions of the beam contribute proportionally more to the FWD, and voxels outside the beam have negligible incident intensity and thus make negligible contribution to the FWD.However, as pointed out in the original publication (Zeldin, Brockhauser, et al., 2013) and by other studies thereafter (Brooks-Bartlett, 2016;de la Mora et al., 2020;Warkentin et al., 2017), weighting by Form of relative intensity decay with dose Fitted parameters Resolution dependence of dose-dependent decay K is an empirical scale factor, sometimes denoted instead by s ¼ 1=K.Assuming no other variables are affecting the intensities (such as changes to illuminated volume), this is usually taken to be unity, as in Borek et al. 
(2007).This is equivalent to assuming that the effects of global damage are captured entirely by the linear B-factor increase.However, Leal et al. (2013) suggest a dose-dependent form of the scale factor This can improve the fit especially for room temperature diffraction data. Note: The linear and dose-response models are given in terms of the relative intensity for the n th image, In I0 , as a function of the cumulative dose in the n th image, D. For the remaining models where the resolution-dependence of intensity decay is more precisely defined, the IDM is given as the function M D,h ð Þin the appropriate form as to be substituted into Equation (3) below to calculate the relative diffraction efficiency for a region of a crystal.M D,h ð Þis a function of the dose, D, and the magnitude of the scattering wavevector, h ¼ 1 d where d is the spacing between Bragg planes.It should be noted that in practice the relative intensity is taken instead to be In I1 where I 1 is the first measured intensity at some small initial dose, so care should be taken to account for the fact that this is not the true intensity at zero dose.Similarly, care should be taken if data are normalised independently for each individual resolution bin since information on the resolution-dependence of intensity at zero dose will be lost during this normalisation procedure.incident fluence alone does not account for the decay in relative intensity due to RD as the dose increases.For a true DWD, an appropriate intensity decay model (IDM) that can be applied for each volume element of the crystal must be incorporated into the definition of DWD. | Diffraction-decay weighted dose in RADDOSE-3D There is now the option to output the diffraction weighted dose result of RADDOSE-3D as a diffractiondecay weighted dose (Brooks-Bartlett, 2016;de la Mora et al., 2020;Warkentin et al., 2017), DDWD, which weights the cumulative dose to each part of the crystal by the predicted fluence out of that region of the crystal.DDWD is defined as follows: where η is the predicted relative diffraction efficiency according to the IDM.The parameters t, F x !, t , and D x !,t are as defined for FWD above and η is defined according to: where M D, h ð Þ is the IDM describing the decay in relative intensity as a function of the dose D absorbed in a small volume at a certain position in the crystal, and of the magnitude of the scattering wavevector h ¼ 1 d .The integrals are evaluated using representative experimental values of h 2 and I as described in Popov and Bourenkov (2003). The appropriate parameter values for the IDM are specified by the user in the crystal block section of the input file, as explained in the RADDOSE-3D documentation.For the adapted scaling model (Leal et al., 2013), Equation (3) equates to: which mirrors Equation (4) in Leal et al. (2013).See Table 1 for further explanation of B 0 , β, and γ. 
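Because the displayed formulas in this section did not survive text extraction, the following LaTeX restates one plausible reading of Equations (1) to (4), assembled from the verbal definitions above; the voxel index i, the use of discrete sums over voxels, and the normalization of η are assumptions, and the notation in the published article may differ.

```latex
% Hedged restatement of Eqs. (1)-(4): voxel i, incident fluence F_i(t),
% cumulative dose D_i(t), BEST intensity profile I(h).
\begin{align*}
  \mathrm{FWD}(T)  &= \frac{\sum_i \int_0^T F_i(t)\, D_i(t)\,\mathrm{d}t}
                          {\sum_i \int_0^T F_i(t)\,\mathrm{d}t}, &
  \mathrm{DDWD}(T) &= \frac{\sum_i \int_0^T F_i(t)\,\eta\!\big(D_i(t)\big)\, D_i(t)\,\mathrm{d}t}
                          {\sum_i \int_0^T F_i(t)\,\eta\!\big(D_i(t)\big)\,\mathrm{d}t},\\[6pt]
  \eta(D) &= \frac{\int I(h)\, M(D,h)\,\mathrm{d}(h^2)}
                  {\int I(h)\, M(0,h)\,\mathrm{d}(h^2)}, &
  M_{\text{Leal}}(D,h) &= \exp\!\left(-\gamma^2 D^2\right)\,
                          \exp\!\left(-\tfrac{1}{2}\,(B_0+\beta D)\,h^2\right).
\end{align*}
```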
The representative h 2 and I values used to evaluate the integrals are those from the BEST diffraction data collected on 72 different proteins (scaled together, and with varied folds, molecular weights, space groups, and data resolutions at both cryo and room temperature) (Popov & Bourenkov, 2003).The integral is taken over all resolution shells in the BEST data (12.0-0.9Å resolution window) and thus the DDWD output is what would be expected of a typical protein with relative intensities evaluated over this resolution window.Using representative values, rather than requiring the user to input their own [h 2 , I h ð Þ] values, allows these to be coded directly into RADDOSE-3D, thereby reducing its execution time. The modular nature of the RADDOSE-3D code means new models that may be proposed by the crystallography community can easily be included.If no IDM is specified, the program defaults to outputting the FWD, by setting η ¼ 1. | Example use of RADDOSE-3D with incorporated IDM To validate the implementation of this DDWD in RADDOSE-3D, we reanalysed the high dose rate, room temperature dataset from de la Mora et al. (2020).RADDOSE-3D was first used to calculate the FWD with respect to exposure time (input parameters are shown in the middle column of Supplementary Table S2).The Python script used to generate the modelled beam profile is available on the RADDOSE-3D GitHub repository and can straightforwardly be adapted to generate any modelled beam profile for input into RADDOSE-3D.Appropriate parameter values for the Leal et al. (2013) IDM were estimated as described in Section 2.2.2.To give the estimated DDWD with respect to the exposure time, the implementation of the Leal et al. (2013) model in RADDOSE-3D was then run (inputs are shown in the right column of Supplementary Table S2, which are the same inputs as for the FWD calculation, except for the specification to use the Leal et al. (2013) model with the appropriate parameter values). The results in Figure 2 show that whilst the FWD increases linearly with exposure time, DDWD increases to a maximum before gradually decreasing, because at high total doses the less damaged regions that have absorbed lower doses contribute more to the diffraction pattern.Furthermore, this behaviour correlates with the degree of damage observed in a disulphide bond: the absolute value of the integrated difference electron density for this bond is shown in Figure 2, calculated as described in de la Mora et al. (2020), where a greater value indicates a more damaged bond.DDWD gives information on the extent to which the absorbed dose manifests in the electron density map and thus correlates with the damage to this disulphide bond. Because the integrals in Equation ( 3) are evaluated over the BEST data for the resolution range 12-0.9Å, our implementation will be accurate when the resolution range of the data being analysed matches this resolution range.However, Figure 2 shows that even if the resolution ranges do not match exactly, the resultant error is systematic such that the calculated DDWD is still useful semi-quantitatively for understanding the dose effects that manifest in the electron density map.What is critical for this analysis is that the IDM fits the relative intensity decay curve well across the full range of doses analysed in the experiment.In the original analysis in the supplementary material of de la Mora et al. 
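As a purely numerical illustration of this weighting (not code from RADDOSE-3D), the sketch below compares FWD and DDWD for a one-dimensional toy crystal; the Leal-type parameter values and the flat stand-in for the BEST intensity profile are assumptions chosen only so that the example runs.

```python
import numpy as np

def leal_relative_efficiency(dose, b0=15.0, beta=1.0, gamma=0.06, h2_max=1.4):
    """Illustrative relative diffraction efficiency eta(D) for a Leal-type IDM.

    Integrates exp(-gamma^2 D^2) * exp(-0.5*(b0 + beta*D)*h^2) over a flat
    stand-in intensity profile (a crude replacement for the BEST data) and
    normalizes so that eta(0) = 1.  Parameter values are placeholders.
    """
    h2 = np.linspace(0.0, h2_max, 200)
    damaged = np.trapz(np.exp(-0.5 * (b0 + beta * dose) * h2), h2)
    pristine = np.trapz(np.exp(-0.5 * b0 * h2), h2)
    return np.exp(-gamma**2 * dose**2) * damaged / pristine

def weighted_doses(fluence, dose):
    """Fluence-weighted (FWD) and diffraction-decay weighted (DDWD) dose
    for a set of voxels with per-voxel cumulative fluence and dose."""
    eta = np.array([leal_relative_efficiency(d) for d in dose])
    fwd = np.sum(fluence * dose) / np.sum(fluence)
    ddwd = np.sum(fluence * eta * dose) / np.sum(fluence * eta)
    return fwd, ddwd

# Toy crystal: a Gaussian beam gives the central voxels more fluence and dose.
x = np.linspace(-1, 1, 51)
fluence = np.exp(-x**2 / 0.1)        # arbitrary units
dose = 3.0 * fluence                 # MGy, proportional to fluence here
print(weighted_doses(fluence, dose)) # DDWD falls below FWD once decay is appreciable
```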
( 2020), an exponential decay IDM is used and the calculated DDWD significantly increases again at the highest doses, implying that the disulphide bond damage should increase again, but this behaviour is not observed in the electron density. | Interpreting dose metrics output by RADDOSE-3D Accurate dose estimation is important for designing a data collection strategy.For rotation crystallography, it is necessary to ensure a full crystal rotation of data are collected before RD has significantly affected the signal.Since IDMs are all smoothly decreasing functions, often well-approximated by the simple linear model at low/medium doses for cryo-temperature experiments, the original implementation of DWD as the FWD (Zeldin, Brockhauser, et al., 2013) output by RADDOSE-3D remains a useful dose metric for designing a data collection strategy; RADDOSE-3D is implemented at multiple beamlines (such as I04 at DLS, BL12-1 and BL12-2 at SSRL (Garman & Weik, 2023)).Furthermore, dose estimates are essential inputs for dedicated programs that optimise data collection strategy.For example, the program BEST (Bourenkov & Popov, 2010) (within the EDNA framework (Incardona et al., 2009)) implements the Leal et al. (2013) IDM alongside a model for radiation-induced non-isomorphism, taking dose rate estimates from RADDOSE version 1 (Murray et al., 2004), to design an optimal data collection strategy based on a few initial diffraction images.Similarly, the program KUMA within the ZOO framework (Hirata et al., 2019) implements RADDOSE version 2 (Paithankar & Garman, 2010) to suggest exposure conditions with an absorbed dose of 10MGy (Hirata et al., 2019).For optimisation programs that implement an IDM internally, such as BEST, if it is necessary to input a single dose metric then the FWD (Zeldin, Brockhauser, et al., 2013) is the output from RADDOSE-3D that is generally applicable for more complex exposure schemes (e.g., helical) as discussed in Bury et al. (2018).However, the behaviour of DDWD shows the advantage, particularly for room temperature data collection, of explicitly accounting for the spatial distribution of dose within the crystal and applying the model of RD to each small region of the crystal (see Section 2.2.2 for a related discussion on using the FWD to fit IDMs).Among the outputs of RADDOSE-3D is a file containing the dose for each voxel of the crystal (Bury et al., 2018). A second important application for dose estimation is to understand the extent of RD in electron density maps.It is essential to avoid the misinterpretation of radiationdamage-induced changes as biologically significant.DDWD is more informative than FWD about the extent of RD in the final electron density map, as illustrated by the analysis in Section 2.1.3. When evaluating different dose metrics, it is important to account for how robust the dose metric is to inaccuracies in the model for the beam intensity profile, particularly for the low-intensity edges of the beam.For example, the average dose in the exposed region is sensitive to such inaccuracies, whereas FWD and DDWD are relatively insensitive to them.Another strategy is to make a conservative approximation of the beam as purely Gaussian (i.e., neglecting any low-intensity tails), as discussed in de la Mora et al. ( 2020). | Understanding the IDM implemented in RADDOSE-3D In this part we first place the IDM proposed by Leal et al. 
(2013) and implemented in RADDOSE-3D in the context of some general properties of all IDMs (Section 2.2.1), then show how this IDM was fitted to diffraction data to obtain parameter values for the DDWD calculation in Section 2.1.3 (Section 2.2.2), and finally explore the physical basis for the terms of this IDM (Sections 2.2.3 and 2.2.4) to motivate further work towards an IDM with predictable parameter values.

| Introduction to the general form of IDMs

IDMs all contain an implicit assumption of uniform irradiation of the crystal, or of the region of the crystal to which the IDM applies. To achieve uniform irradiation in practice in experimental studies, it is possible to use a beam with an exceptionally flat "top hat" intensity profile, such as is implemented at the EMBL beamline P14 at PETRA III, Hamburg (Garman & Weik, 2017). Alternatively, a more common approach is to apply an appropriate correction for non-uniform illumination during analysis, for example the three-beam model in the supplementary material of de la Mora et al. (2020), which combines a model for the beam and the exponential "H-model" IDM (see next paragraph) (Holton & Frankel, 2010) into a single model. In the definition of DDWD, the IDM is applied to many small regions of the crystal, each of which is small enough to be treated as uniformly irradiated. The comparative advantage of using RADDOSE-3D for the analysis of IDMs is that it directly calculates the impact of non-uniform illumination on the distribution of dose in the crystal (Bury et al., 2018).

Figure 3 shows typical diffraction intensity data from the room-temperature high dose rate dataset described in de la Mora et al. (2020). The data are plotted as the average intensity for a series of small resolution shells, for each of a series of sequential small exposures (for this dataset, an exposure time of 2 ms per exposure). The figure demonstrates that intensity decay depends on both dose and resolution. The definition of the relative diffraction efficiency η, Equation (3), encodes this dependence on dose and resolution. To account for the fact that spherically averaged squared structure-factor magnitudes are a complicated function of resolution, Equation (3) uses the empirical approximation given by the BEST data (Popov & Bourenkov, 2003). However, it is important to stress that these data do not include the effect on intensity of the atomic B-factors at zero dose, so this effect must be encoded into M(D, h). The simplest way to do this is through a term exp(−B₀h²/2), where B₀ is an average isotropic B-factor at zero dose (using an average B-factor assumes that the distribution of atomic B-factors is not too broad or skewed). M(D, h) includes further terms that describe the dose-dependence of intensity decay. For example, a "scale" term describes any resolution-independent contribution to intensity decay, such as K = exp(−γ²D²) in the model of Leal et al. (2013). A final term describes dose-dependent intensity decay whose rate varies with resolution. The two main hypotheses for this term are the "B-model" exp(−βDh²/2), as suggested in the Leal et al. model (Leal et al., 2013) and other scaling models, and the exponential "H-model" (Holton & Frankel, 2010) (see Table 1 for further citations, parameter definitions, and units).
The evaluation of IDMs requires meta-analyses of many datasets to increase the statistical power of the hypothesis testing.This approach has been implemented multiple times (Atakisi et al., 2019;Holton & Frankel, 2010;Howells et al., 2009;Leal et al., 2013).The resolution-dependence of IDMs is especially significant because the loss of diffraction efficiency in the higher resolution shells has implications for the ability of the structure to inform biological hypotheses (Owen et al., 2006).An advantage of the B-model is that it directly encodes the robust linear relationship observed between scaling B-factor and dose (Borek et al., 2010;Bourenkov & Popov, 2010;Kmetko et al., 2006;Leal et al., 2013).A linear increase of B-factor as dose increases is expected under a central limit theorem if radiation-induced atomic displacements are randomly distributed, small but numerous, and accumulate in proportion to the dose (Borek et al., 2013).It has also been suggested that a different resolution-dependence, and thus form of IDM, might apply at medium to high resolution (<10 Å) compared to low resolutions (>10 Å) (Atakisi et al., 2019).Central to crystallographic data analysis is the equivalence of modelling unit cell constituents as a collection of point scattering sources (i.e., atoms) versus scattering from a continuous electron density.In the context of IDMs, it has been shown that increasing the scaling B-factor is an equivalent model to that of the Gaussian blurring of electron density at random locations in the unit cell (Atakisi et al., 2019). Before we consider how the parameter values of the Leal et al. (2013) IDM were fitted for use in our DDWD calculation, it is worth emphasising that the parameter values of IDMs are temperature-dependent.For example, the γ parameter of this IDM is approximately zero only F I G U R E 3 Average intensity as a function of dose and resolution, illustrated for merged but not scaled reflection room temperature high-dose rate data from de la Mora et al. (2020).FWD was calculated by RADDOSE-3D as described in Supplementary Table S2 and averaging by thin resolution shells (evenly spaced h 2 ) was performed by AIMLESS.The FWD is for the cumulative exposure time (e.g., 30 ms) whereas the average intensities are for an individual exposure (always 2 ms).An anomalously weak exposure at 0.45 MGy was removed from the data before plotting.(a) is a plot of the average intensity (AvI) against FWD and h 2 coloured by intensity value, (b) is a plot of the natural logarithm of the average intensity (ln(AvI)) against FWD and h 2 coloured by ln(AvI), (c) is a plot of ln(AvI) against h 2 coloured by FWD value and (d) is a plot of ln(AvI) against FWD coloured by h 2 value. for cryogenic datasets (Leal et al., 2013) whereas for room temperature datasets the scale factor term becomes strongly dose-dependent.Temperature will affect not only the energy required to break bonds (and hence the rate of bond breakage per unit dose), but also the mobility of ions and radicals and their subsequent radiation chemistry, and thus the distribution of the absorbed dose both within unit cells and through the crystal (Weik & Colletier, 2010).Finally, none of the models in Table 1 explicitly account for the possibility of dose-rate effects (see (Garman & Weik, 2023) for further discussion). | Fitting the adapted scaling model To estimate parameter values for use in the DDWD calculation in Section 2.1.3,the Leal et al. 
(2013) model was fitted to the room-temperature, high dose rate data from de la Mora et al. (2020).Merged (but not scaled) reflection files for each sequential 2 ms exposure dataset were reanalysed: note that no B-factor or scale factor correction had been applied to the intensities before this analysis.We stress that the appropriate intensities to use for fitting the model are the result of reflection integration (in this case by CrystFEL (White et al., 2012)) and thus have no contribution from background, and the intensity values tend to zero at high doses as shown in Figure 3. Wilson B-factors and scale factors were calculated for each 2 ms exposure dataset by AIMLESS (Evans & Murshudov, 2013).A maximum resolution limit of 1.71 Å was always specified, because for the data at the longest exposure times higher resolutions than this showed some noise in their Wilson plots, and it was important to ensure the same region of reciprocal space was followed over all exposure times.The Wilson B-factor and the scale factor K (the reciprocal of the Wilson scale factor, s) were plotted against the FWD estimated by RADDOSE-3D.Fitting of the B-factor term of the Leal et al. (2013) model was performed by least-squares regression over the region where the B-factor plot is still linear (FWD ≲ 0.8 MGy), as shown in Figure 4.The scale factor term, K, was fitted by least-squares regression over the whole data range as indicated in Figure 4.The estimated parameter values agree well (within a factor of two) with previously reported values for chicken egg white lysozyme (HEWL) (Leal et al., 2013).The cryo-temperature dataset from the same study was also analysed for the same range of FWD values and similarly gave parameter values broadly consistent with previous studies (Leal et al., 2013). 
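A minimal sketch of the two least-squares fits just described is given below, using synthetic numbers in place of the AIMLESS output; the parameter values are placeholders rather than those obtained from the de la Mora et al. (2020) data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic Wilson B-factors and scale factors versus FWD (MGy); in practice
# these come from the AIMLESS output for each sequential exposure.
fwd = np.linspace(0.05, 3.0, 30)
b_obs = 14.0 + 32.0 * fwd + np.random.normal(0, 0.5, fwd.size)
k_obs = np.exp(-(0.55 * fwd) ** 2) * (1 + np.random.normal(0, 0.02, fwd.size))

def b_model(d, b0, beta):
    # Linear B-factor increase with dose: B(D) = B0 + beta*D
    return b0 + beta * d

def k_model(d, gamma):
    # Dose-dependent scale factor: K(D) = exp(-gamma^2 * D^2)
    return np.exp(-(gamma * d) ** 2)

linear = fwd < 0.8                              # restrict the B-factor fit to the linear region
(b0, beta), _ = curve_fit(b_model, fwd[linear], b_obs[linear])
(gamma,), _ = curve_fit(k_model, fwd, k_obs, p0=[0.5])   # fit K over the whole dose range

print(f"B0 = {b0:.1f} A^2, beta = {beta:.1f} A^2/MGy, gamma = {gamma:.2f} /MGy")
```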
We fit the model to the FWD rather than to another dose metric (e.g., total dose) because FWD accounts for the fact that the dose absorbed by the regions of the crystal that are experiencing more intense regions of the beam has proportionally more impact on the relative intensity and should be given a greater weight.More precisely, fitting an IDM against the FWD is equivalent to making the following approximation: where η D, h ð Þ is the relative diffraction efficiency for each region of the crystal as defined as a function of D and h by Equation (3) (noting that η ¼ 1 at zero dose), and all other terms are as defined in Equation ( 1).The close agreement between the DDWD calculated as described in Section 2.1.2using the model fitted in this way, and the disulphide difference density (Figure 2) suggests that obtaining model parameters through fitting using the FWD is a useful strategy.This close agreement also shows that the dose-dependent scale factor term K is not a correction factor that should be included only on the right-hand side of Equation ( 5) to account for the nonuniform distribution of F x !, t , but instead measures an underlying RD process and is thus a required term within η D,h ð Þ on both sides of Equation ( 5).This conclusion is further supported by datasets where whole crystals are uniformly irradiated, such as reported in Brooks-Bartlett (2016), where the scale factors and B-factors display the same trends as shown in Figure 4.The fact that we are fitting the model to apply to each region of the crystal by using reflection intensity data from the whole crystal plotted against the FWD also justifies why we should fit the model only to the initial linear region of the Wilson B-factor plot (Figure 4a).As we will discuss further in Section 2.2.4,beyond the initial linear region the average B-factor of a whole crystal decreases because it is calculated from only measurable diffraction.This results in an equivalent effect to that explaining the decrease in DDWD at high doses (see comparison between Wilson Bfactor and DDWD in Figure 2). | Interpretations of the adapted scaling model For the purposes of DDWD calculation, the underlying meaning of the terms in the Leal et al. (2013) model is not important, as it is only used to provide an accurate prediction of the diffracted flux exiting each region of the crystal.However, to motivate further work towards an IDM that has predictable parameters, the physical basis of the Leal et al. (2013) model will now be discussed. The Leal et al. 
(2013) model can be interpreted in kinetic terms, inspired by previous kinetic models (Hendrickson, 1976;Sygusch & Allaire, 1988), as explained when the model was originally proposed (Leal et al., 2013).Assume that the crystal contains two fractions of atoms (i.e., scattering sources), first a fraction that contributes to Bragg diffraction, P Bragg , and a fraction that no longer contributes to diffraction, P None .The atoms within the Bragg fraction will accumulate small and numerous displacements in proportion to the absorbed dose.As discussed in Section 2.2.1, under a central limit theorem, we derive a linear increase in the B-factors of these atoms with dose, from an initial value, such that: RI Bragg where RI Bragg is the contribution of P Bragg to the relative intensity.However, it is also necessary to account for the conversion of P Bragg to the fraction of atoms that no longer contribute to Bragg diffraction due to RD (P None ).This may be related to the progression of defects in the crystal lattice on large scales that effectively reduce the number of unit cells exposed to the beam, as proposed by Leal et al. (2013) and discussed further in Section 2.2.4. Whatever the cause, if we assume this "Bragg to None" conversion occurs at a rate with respect to dose that is directly proportional, by a rate constant 2γ 2 , to the size of the Bragg fraction and to the dose, D, then we have: Solving this equation for P Bragg as a function of dose we find that: where P 0, Bragg is the value of P Bragg at zero dose.Substituting Equations ( 8) into (6), and assuming that the only contribution to the total relative intensity is due to the fraction P Bragg (and thus P 0, Bragg = 1), gives the same form as the Leal et al. model: The parameter γ could in principle be predicted from the rate constants of the various physical and chemical processes that bring about the "Bragg to None" conversion. Another hypothesis for the origin of a dose-dependent term besides the B-factor term is that RD-induced changes to unit cell size and mosaicity occur at a greater rate than the increase in overall B-factor with dose (proposed by Warkentin et al. (2017) in the context of lag phases).Depending on the crystal orientation relative to the beam, for a subset of diffraction images, this may cause a few reflections to broaden or migrate into the measured region of reciprocal space and thus their measured intensity will increase.However, the total diffracted intensity should be calculated over a large region of reciprocal space sampling thousands of reflections and so the impact of a few reflections on the intensity statistics should be small. | Physical limits for B-factors and implications for macroscopic crystal stability The Leal et al. 
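The displayed equations in this derivation were lost in extraction; the LaTeX below reconstructs Equations (6) to (9) from the verbal description (a linearly growing B-factor for the Bragg fraction and conversion out of that fraction at rate 2γ²D) and should be checked against the published article.

```latex
% Hedged reconstruction of Eqs. (6)-(9); see lead-in.
\begin{align*}
  \mathrm{RI}_{\mathrm{Bragg}}(D,h) &= \exp\!\left(-\tfrac{1}{2}\,\beta D\, h^{2}\right),
    && \text{(Eq. 6: linear B-factor growth of the Bragg fraction)}\\
  \frac{\mathrm{d}P_{\mathrm{Bragg}}}{\mathrm{d}D} &= -2\gamma^{2} D\, P_{\mathrm{Bragg}},
    && \text{(Eq. 7: Bragg-to-None conversion)}\\
  P_{\mathrm{Bragg}}(D) &= P_{0,\mathrm{Bragg}}\,\exp\!\left(-\gamma^{2} D^{2}\right),
    && \text{(Eq. 8)}\\
  \frac{I(D,h)}{I(0,h)} &= \exp\!\left(-\gamma^{2} D^{2}\right)\exp\!\left(-\tfrac{1}{2}\,\beta D\, h^{2}\right),
    && \text{(Eq. 9, taking } P_{0,\mathrm{Bragg}} = 1\text{)}
\end{align*}
```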
model predicts that the average B-factor increases indefinitely as dose increases, which is physically impossible within the constraints of a crystal lattice.According to this model, with parameters fitted as in Figure 4, within the dose range of the room temperature dataset the average B-factor calculated according to B 0 þ βD increases to as high as 129 Å 2 at 3.5 MGy.We might compare this to the B-factor of a bulk solvent model where no solvent mask is specified: usually B sol ≈ 125-200 Å 2 (Weichenberger et al., 2015) (although this is not formally a B-factor of the solvent atoms it does quantify, for the whole unit cell, the contribution of the solvent in reciprocal space).Eventually, the contribution to diffraction of an atom with high B-factor becomes negligible relative to the sensitivity of the detector.However, probably before it reaches these large values, the average B-factor will be so high that it is physically unreasonable for describing atoms constrained in an ordered crystalline lattice.This is because the integrity of the crystal lattice at the mesoscopic/macroscopic scale emerges from the structural integrity of the macromolecules within the lattice and the contacts between them.It is therefore sensitive to microscopic atomic displacements that result from ionisation events and subsequent radiochemistry; large atomic displacements should disrupt the integrity of the crystal.IDMs should have a term to account for this, and this may be the origin of the scale factor term K ¼ exp Àγ 2 D 2 ð Þin the Leal et al. (2013) model.As described in Section 2.2.1, a linear increase of B-factor as dose increases is expected only if certain conditions are met: that radiation-induced atomic displacements are randomly distributed, small but numerous, and accumulate in proportion to the dose (Borek et al., 2013).Furthermore, the definition of the B-factor assumes a crystal with intact unit cells.Most microscopic phenomena such as bond breakage are likely to satisfy these conditions.By definition, microscopic phenomena involve small perturbations, and since the perturbations are small and localised they are more likely to be randomly distributed from the perspective of a whole crystal.Conversely, mesoscopic/macroscopic structural breakdown within the crystal may involve larger displacements (on the scale of whole unit cells) which may be concerted (i.e., not totally random).Thus, crystal lattice breakdown may be a RD process that is not well modelled by a linear average B-factor increase.Again this is consistent with the scale factor term K in the model of Leal et al. being due to the contribution of crystal structural breakdown, as suggested when the model was originally proposed (Leal et al., 2013).Because breakdown of the macroscopic crystal lattice causes atomic displacements, we might expect it to contribute to changes in the apparent average B-factor.However, crystal breakdown should instead reduce the number of intact unit cells because the apparent B-factor loses its physical meaning if we consider scattering from regions of the sample that are no longer crystalline. More comprehensively than an average B-factor, we might consider the dose-dependent shift to the full distribution of atomic B-factors, p B D ð Þ. 
More comprehensively than an average B-factor, we might consider the dose-dependent shift to the full distribution of atomic B-factors, p_B(D). Global damage is then this whole distribution shifting to higher values, whereas specific damage is represented by specific atoms that have B-factors that shift by relatively more than other atoms as dose increases (Gerstel et al., 2015). To formulate an IDM in the form M(D, h) we need to consider the distribution p_B that would be calculated for a small region of the crystal if we had knowledge of the true atomic positions and motions of all atoms in each unit cell within this region (if we fit the IDM to the FWD, the relevant p_B is for the whole crystal and has the contribution of unit cells weighted by their incident fluence). Importantly, as dose increases these distributions will become significantly different to the diffraction-decay weighted atomic B-factor distribution that would be calculated from processing a diffraction pattern all the way to a refined structure, to which only measurable diffraction contributes. p_B is defined in terms of individual atoms (and assuming a crystal with intact unit cells). By the same logic as applied in the preceding discussion of the average B-factor, shifts in p_B are most easily rationalised in terms of microscopic phenomena; for example, bond breakage is associated with increased B-factors of the atoms involved in the bond. We would like to model how RD at this microscopic level might propagate to mesoscopic/macroscopic defects in the crystal. For the model of Leal et al., the parameters β and γ are correlated (as shown in Figure 5), which is consistent with the scale factor and B-factor terms of this model both ultimately arising from the same or related phenomena at the microscopic level.
Figure 5 shows the simplest possible model for how a dose-dependent shift to p_B could result not only in an increasing average B-factor but also a term like the scale factor K. For the purposes of illustration, p_B is taken to be an inverse gamma distribution with a mean that increases according to the B-model (i.e., B_0 + βD; see Supplementary materials, section 1.2.3 for details). To produce a term with behaviour similar to the scale factor term K, we make two further assumptions. First, the extent of mesoscopic/macroscopic lattice defects is proportional to the fraction of atoms with B-factors above a certain threshold, B_Break, because we assume the crystal lattice is robust to displacements of individual atoms only up to a certain limit. Second, the decay in diffraction intensity which cannot be explained by the linear average B-factor increase is directly proportional to the extent of these defects, so the defects proportionally reduce the effective number of exposed unit cells. Figure 5 shows the fitting of this model to the variation of K with dose and shows how p_B varies with dose according to the fitted model. Inspection of Figure 5c suggests the exact distribution used to model p_B should not have a huge impact on the predictions of this model so long as it is approximately bell-shaped and the whole distribution shifts past B_Break to higher values as dose increases.
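A minimal numerical sketch of this picture, assuming an inverse-gamma form for p_B with mean B_0 + βD and treating the fraction of atoms below a threshold B_Break as a proxy for the scale factor K; all parameter values below are illustrative assumptions, not the fitted values shown in Figure 5.

```python
import numpy as np
from scipy.stats import invgamma

# Illustrative parameters (assumptions of this sketch, not the fitted values in Figure 5).
B0, beta = 15.0, 30.0   # A^2 and A^2/MGy: linear mean B-factor model B(D) = B0 + beta * D
a = 4.0                 # shape parameter of the inverse-gamma B-factor distribution
B_break = 60.0          # A^2: threshold above which atoms are assumed to disrupt the lattice

def scale_factor(D):
    """Fraction of atoms with B < B_break: a proxy for the scale factor K at dose D (MGy)."""
    mean_B = B0 + beta * D
    # invgamma(a, scale=s) has mean s / (a - 1) for a > 1, so pick s to match mean_B.
    return invgamma.cdf(B_break, a, scale=mean_B * (a - 1.0))

for D in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"D = {D:.1f} MGy: K ~ {scale_factor(D):.3f}")
```

As in Figure 5c, the resulting K stays near 1 while the bulk of the distribution sits below B_Break and then falls steeply once the distribution shifts past it.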
The best fit for B_Break approximately matches the value at which the observed linear relationship between Wilson B-factor and dose breaks down. This is expected because in this model the combination of the linear relationship (B = B_0 + βD) and the parameter B_Break determines the doses at which the scale factor term goes through a steep decrease, which has a large effect on the relative intensity and thus the measured value of diffraction-weighted quantities (see the end of this section for further discussion). This value of B_Break corresponds to a mean square atomic displacement of ⟨u²⟩ = B_Break/(8π²). If the many assumptions underlying this simple model hold, we can interpret this as a maximum average square atomic displacement that can be tolerated by the intact crystal lattice for this particular sample.
In this formulation, we have considered the B-factor distribution of all atoms. However, it is necessary to give greater weight to the subset of atoms that are relatively more important for the structural integrity of the lattice (e.g., atoms at crystal contact sites). This has no effect on the model predictions if the atomic B-factor distribution of these atoms mirrors the B-factor distribution of all atoms. However, there will evidently be sample-dependent exceptions to this. A particularly striking example is crystals of oligomeric dodecin, where the decarboxylation of Glu57 probably causes a destabilisation of oligomers (and hence crystalline order) leading to global RD much faster than expected (Bieger et al., 2003; Murray et al., 2005). Further limitations of this treatment include the fact that, by considering only atomic B-factors, it neglects any effect of radiation-induced changes to bulk solvent on the crystal lattice (e.g., an internal pressure due to gas generation) unless these have a comparable effect on the atomic B-factor distribution. Furthermore, it does not directly model phenomena where causation occurs at the macroscopic level of the crystal lattice (e.g., mechanical forces propagated by the lattice, which might drive cooperativity and spreading of defects) and reduces mesoscopic/macroscopic lattice stability to just a single parameter, B_Break. Evidently, our understanding would also be improved by a more accurate model for p_B and how p_B changes with dose, and a way to quantify the contribution of individual atoms to mesoscopic/macroscopic lattice stability. We hope that future models can improve on the simplistic B_Break model.
The linear increase in the apparent Wilson B-factor with dose breaks down at ≈0.8 MGy (see Figure 2). This is likely because the calculated Wilson B-factor is itself "diffraction decay-weighted" given it is calculated from the measurable diffraction. The behaviour of the Wilson B-factor approximately matches the behaviour of DDWD (see Figure 2c), which is calculated assuming the model of Leal et al. (2013) holds over the full dose range analysed.
The scale factor term, K, has a large effect on the diffracted intensity, and the linearity of the Wilson B-factor plot breaks down as K rapidly decreases between 0.5 and 1.0 MGy. If we accept that the decreasing scale factor term represents a structural breakdown of the crystal lattice, beyond FWD ≈0.8 MGy it is unclear whether defining B-factors is even meaningful for the most damaged regions of the "crystal", which receive a local dose well in excess of 1 MGy and have probably essentially become amorphous. The fact that calculating a Wilson B-factor from the whole crystal is at all possible is due to diffraction from the weakly illuminated regions of the crystal that have only received a relatively low dose.

| IDMs and DDWD: Outlook
In this work we have shown how, by incorporating the IDM by Leal et al. (2013) into RADDOSE-3D, the code can be used to calculate the DDWD. This DDWD can be used to understand the extent of RD in an electron density map, exemplified by its correlation with the difference density of a disulphide bond (de la Mora et al., 2020). The parameters of the IDM, as estimated by fitting to global intensity statistics (Wilson B-factor and scale factor), could be used in the DDWD calculation (which applies the IDM to each region of the crystal to determine the diffracted flux scattered from that region).
However, it would be even more advantageous to be able to predict the parameter values of the IDM in advance of the experiment, without the need to fit them to each specific diffraction dataset. This work has examined a range of possible frameworks for describing the physical and chemical changes to the crystal as dose increases, which give rise to the form and parameter values of the IDM. These include conversion between crystal fractions according to kinetic rate equations, and dose-dependent increases to atomic B-factors (the average B-factor or, more exactly, a shift to the entire distribution of atomic B-factors). There is sample-dependent influence on these processes, as evidenced by the large table of parameters in Leal et al. (2013), and also suggested by the range of h exponent values in Atakisi et al.
(2019), which will need to be understood before a generally predictive IDM can be formulated.Sample sensitivity to global damage will vary due to a range of factors relating to the unit cell contents and the structural integrity of the crystal lattice.First, analogous to how different proteins have different rates of specific damage with respect to dose due to their exact structure (e.g., disulphide bond breakage depending on the angle made with a carbonyl oxygen (Bhattacharyya et al., 2020), and staple-group disulphides having greater susceptibility (Gerstel et al., 2015)), different proteins may have slightly different susceptibility to breakage of bonds between their light elements, even if the large number of such bonds in a given protein molecule should average out most variation.Second, the complex chemistry known to occur during RD (including the formation and propagation of radicals (Owen et al., 2012) and of hydrogen gas (Meents et al., 2010)) suggests that slightly different amino acid compositions and environments within the crystal may have some influence on radiation sensitivity.Thirdly, differences in crystal packing may impose different constraints on unit cell expansion and increased mosaicity which may contribute to the intensity decay, in addition to more direct effects such as those observed for the highly radiation-sensitive crystals of oligomeric dodecin discussed in Section 2.2.4 (Bieger et al., 2003;Murray et al., 2005).These sample-dependent influences could be dissected experimentally by using an independent technique to monitor the damage state of the crystal as data collection progresses (e.g., spectroscopy (Fernando et al., 2021) or X-ray topography (Suzuki et al., 2022)).Also useful would be a systematic comparison of intensity decay curves for a broader range of proteins than the standard model proteins (HEWL, insulin, thaumatin).In particular, it may be desirable to study different crystal forms of the same protein (due to alternative functional conformations or induced, for example, by changing the crystal temperature (Weik et al., 2001)), or compare sequence variants that have different melting temperatures or are functional at different temperatures in vivo.We speculate that strategies optimised for extracting predictive power despite a large parameter space will be optimal for solving this problem, as would a more detailed understanding of the dynamics of radiolysis and subsequent chemistry. Improved predictive power of IDMs would have implications for crystallographic data analysis.Scaling programs, such as AIMLESS (Evans, 2006;Evans & Murshudov, 2013), XDS (Kabsch, 2010), HKL3000 (Minor et al., 2022), andDIALS (Beilsten-Edmands et al., 2020), must account for intensity decay due to RD as part of placing reflections on a consistent scale through application of scale, decay and absorption terms, leveraging multiplicity within the dataset.The decay factor is usually an overall B-factor correction analogous to that described in the IDMs in Table 1.The scale factor also contributes to correction for the effects of global RD, where these effects cannot solely be taken into account by the B-factor correction (i.e., the scale factor is dosedependent as in the IDM by Leal et al. 
(2013)) but in the context of scaling it is also important as a more general correction, for example, for changes to the illumination volume.Because scaling must only be consistent within a dataset, these programs use an experimental coordinate such as frame number, rotation angle, or exposure time as a proxy for dose, rather than the absolute dose estimated through consideration of the physics of X-rays interacting with the crystal, for example, by RADDOSE-3D.Some scaling programs provide an option for zerodose extrapolation for individual reflections using linear, polynomial, or exponential functions (Borek et al., 2007;Diederichs et al., 2003;Kabsch, 2010).However, these attempts are complicated by the fact that individual reflections do not all follow the average trend described by IDMs (e.g., some reflection intensities actually increase, as noted by Blake and Phillips in 1962 [Blake & Phillips, 1962]), and by the need to handle negative values of experimentally measured intensities and weak intensities.Attempts to improve zero dose extrapolation have included using the Wilson intensity distribution as a Bayesian prior or encoding IDMs into the process function of a Hidden Markov model describing the evolution of the crystal with increasing dose (Brooks-Bartlett, 2016).Additional corrections may also be applied to account for anisotropy induced by RD. It has generally been assumed that the intensity decay due to global damage is largely separable from the effects on the intensities due to specific damage processes.Specific damage requires that damage events occur at the same atomic position in all unit cells of the crystal.Hence, it is usually detectable in real space only for radiation-sensitive sites that damage at a greater rate (with respect to dose) than the majority of positions within the unit cell.Recent progress in quantifying specific damage includes using singular value decomposition to model it as a component distinct from global damage in reciprocal space (Borek et al., 2013), its separation into individual components in real space through independent component analysis (Borek et al., 2018), and analysis of atomic B-factors in real space through the B Damage metric (Gerstel et al., 2015), and the related metric B net that can be calculated for a whole structure and enable comparison between structures (Shelley & Garman, 2022).Conversely, inherent in the derivation of an isotropic global scaling B-factor (i.e., that encoded in the "B-model") through a central limit theorem is the fact that damage events occur randomly according to a uniform distribution across the atomic positions in the unit cell.Conceptually, the observed distribution of B Damage values (Gerstel et al., 2015) could result from either a homogeneous distribution of dose within an average unit cell but different activation energies for damage at different positions, or an inhomogeneous distribution of absorbed dose within an average unit cell, or probably a combination of the two possibilities.The input energy required for atomic displacements and bond breakage is a function of temperature, as is the distribution of absorbed dose due to molecular motion, secondary damage, and mobile ions/radicals.The DDWD estimated by RADDOSE-3D is proportional to the dose per average unit cell represented by an electron density map, and so would be the appropriate dose metric for comparison to atomic B-factor distributions of refined structures.Therefore, we expect the implementation of DDWD in RADDOSE-3D will be a further 
useful tool for scientists wanting to characterise the extent of RD in their structures.

| RADDOSE-ED
Although X-ray crystallography has enabled us to determine the structures of a wide variety of molecules from small molecules to comparatively large macromolecular complexes (Shi, 2014), the technique is reliant on the successful production of large, well-diffracting crystals. To successfully obtain a structure from a single protein crystal, Holton & Frankel (2010) predicted that a spherical crystal must be at least 1.2 μm in diameter, or potentially 0.34 μm if there is significant photoelectron escape. This theoretical limit has not yet been reached, with the smallest crystals successfully used for structure determination having scattering powers ≈15× larger than a 1.2 μm sized crystal. This size limit can be somewhat reduced by using multiple crystals, as in serial synchrotron crystallography (SSX) (Gati et al., 2014; Stellato et al., 2014), and reduced even further if RD can be outrun, as is the case for SFX at XFELs (Chapman et al., 2014; Nass, 2019). Nonetheless, although possible using SFX (Colletier et al., 2016), sub-micron-sized crystals still present major difficulties for successful structure determination using X-rays.
Electrons are theoretically much more suited than X-rays for imaging thin samples. As calculated by Henderson (1995) by comparing scattering cross sections, electrons offer approximately three orders of magnitude more signal per unit radiation dose for very thin specimens. As a result of this, there is a long history of structure determination using electron crystallography. Traditionally, this has involved using 2D crystals that consist of just a single monolayer of molecules. The first protein structure to be determined by 2D crystallography was of purple membrane (Henderson & Unwin, 1975), using a combination of both diffraction patterns and images. More recently, the methodology has been extended further to 3D crystals, in a technique often called MicroED (Shi et al., 2013). This has proved useful in determining structures from crystals that do not grow large enough for successful structure determination using X-rays (Clabbers et al., 2022), and structures from single crystals more than 2 orders of magnitude smaller than that achieved using MX have been solved (Rodriguez et al., 2015).
On the other hand, the much higher scattering cross sections of electrons compared to X-rays make it exceedingly difficult to determine structures from samples more than just a few hundred nanometres thick using beam energies commonly available in modern transmission electron microscopes. Experimental studies suggest that solving structures from crystals thicker than 2 inelastic scattering mean free path lengths (MFP lengths, ≈600 nm for 300 keV electrons) is extremely difficult (Martynowycz et al., 2021). As a result, crystals too large for MicroED are often thinned using focused ion beam (FIB) milling (Duyvesteyn et al., 2018; Martynowycz et al., 2019).
The fundamental cause of the limitation in crystal size for both electrons and X-rays is RD, since it limits the amount of signal we can achieve before the molecules are destroyed. RD studies in X-ray crystallography have benefited from using dose as a metric against which to monitor its manifestations, enabling us to compare its progression between datasets from a variety of samples and under different data collection conditions. This has accelerated our understanding of RD to biological macromolecules from ionising radiation as well as allowing the optimisation of data collection strategies to improve the chance of successful structure determination. In electron crystallography, "dose" has typically been reported in terms of fluence (e⁻/Å²), but this fails to account for other factors that determine the dose such as primary beam energy and sample composition. Since this makes inter-comparisons and thus finding optimisation strategies challenging, some efforts have been made to convert e⁻/Å² to gray (Baker & Rubinstein, 2010; Egerton, 2021), but this change has so far not been widely adopted.
RADDOSE-3D (Bury et al., 2018; Zeldin, Gerstel, & Garman, 2013) has been used by many as a simple tool to estimate dose for X-ray crystallography experiments, and more recently SAXS (Brooks-Bartlett et al., 2017) and SFX experiments (Dickerson et al., 2020). However, it has so far been limited only to calculations for incident X-rays, not having been written for other projectiles. We have now extended RADDOSE-3D to calculate doses for electron irradiation, specifically electron crystallography experiments, in the subprogram RADDOSE-ED. We demonstrate that RADDOSE-ED can be used to convert fluence to dose for electron crystallography experiments and that, for a given amount of absorbed dose, the extent of RD is comparable to that in X-ray crystallography. Moreover, by calculating an information coefficient (Peet et al., 2019), defined as the signal intensity per MGy of absorbed dose, we demonstrate how RADDOSE-ED can be used to optimise the beam energy for a given specimen thickness.

| Methods
RADDOSE-ED calculates electron stopping powers to estimate the energy absorbed by the sample, and hence estimate the dose. It also calculates elastic and inelastic scattering cross sections, allowing estimation of the information coefficient. RADDOSE-ED can be run using a standard RADDOSE-3D input file, but with an additional flag "Subprogram EMED" in the crystal block. The electron fluence is also given in e⁻/Å² instead of specifying an X-ray flux in photons/s.

| Calculation of dose for incident electrons
To calculate the dose in Gy, we must calculate the mass of the exposed volume, as well as the energy absorbed by the sample. The mass is calculated as the sum of the masses of all atoms in the exposed area. The absorbed energy is calculated using the electronic stopping power for electrons. The total electronic stopping power is the sum of two stopping power components, the collision stopping power, S_col, and the radiative stopping power, S_rad. The collision stopping power is the average energy loss per unit path length as a result of Coulomb collisions with bound atomic electrons, resulting in ionisations and excitations (Brice, 1985). The radiative stopping power is the average energy loss per unit path length due to the emission of Bremsstrahlung in the electric field of the atomic nucleus and atomic electrons.
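As a rough sanity check on this fluence-to-dose conversion, the sketch below estimates the dose per unit electron fluence for a thin water/ice-like sample from the mass collision stopping power alone, ignoring radiative losses, backscatter and secondary-electron escape. The stopping-power value used is an approximate ICRU-37-style figure for water at 300 keV and is an assumption of this sketch rather than a number taken from RADDOSE-ED; the result lands close to the 300 keV value quoted for Figure 6.

```python
MEV_TO_J = 1.602176634e-13   # joules per MeV

def dose_per_fluence_MGy(S_col_MeV_cm2_per_g):
    """Dose (MGy) deposited per incident e-/A^2 in a thin sample, assuming all energy
    lost along the electron path is absorbed locally (collision stopping power only)."""
    # MeV cm^2/g -> J m^2/kg: 1 cm^2/g = 1e-4 m^2 / 1e-3 kg = 0.1 m^2/kg
    S_J_m2_per_kg = S_col_MeV_cm2_per_g * MEV_TO_J * 0.1
    fluence_per_m2 = 1e20        # 1 e-/A^2 = 1e20 e-/m^2
    return S_J_m2_per_kg * fluence_per_m2 / 1e6

# Approximate mass collision stopping power of water at 300 keV (assumed value).
print(f"~{dose_per_fluence_MGy(2.35):.1f} MGy per e-/A^2 at 300 keV")
```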
The collision stopping power for an atom, S_col, is calculated in a similar way as described in RADDOSE-XFEL (Dickerson et al., 2020), using the Bethe formula (Bethe, 1930; Bethe, 1932), in which ρ is the crystal density, N_a is Avogadro's number, r_e is the classical electron radius, m is the electron rest mass, c is the velocity of light, Z is the atomic number, A is the atomic mass, E_e is the incident electron kinetic energy, I is the mean excitation potential, and δ is the density effect correction. I is an experimentally determined parameter that is tabulated in ICRU report 37 (ICRU, 1984), and these values are multiplied by the constant 1.13 to modify them from the gas phase to the liquid/solid phase (ICRU, 1984).
The density effect correction is applied since the passage of electrons through a medium polarises atoms, and this polarisation in turn decreases the electromagnetic field acting on the particle, reducing the stopping power. The size of δ increases with the density of the material and the kinetic energy of the electron. It was calculated according to the fits provided by Sternheimer et al. (Sternheimer, 1952; Sternheimer et al., 1984), in which x_0 is the value of x below which δ = 0, x_1 is the value above which the relation between x and δ can be considered linear, and a, b, and C are constants dependent on the element (Sternheimer, 1952). The collision stopping power for the entire sample is obtained using the Bragg additivity rule, stating that the collision stopping power for a compound is the weighted sum over its atomic constituents. This is equivalent to replacing the Z/A term with the weighted average ⟨Z/A⟩ = Σ_j ω_j (Z_j/A_j), where j denotes the j'th atomic constituent and ω_j is the fraction of the total molecular weight in the unit cell that the j'th atom contributes. The mean excitation energy and density effect correction are also modified accordingly.
For the incident electron energy (E_e) range of interest, the radiative stopping power constitutes a much smaller contribution to the total stopping power than the collision stopping power. For instance, the collision stopping power is 99.8% of the total stopping power of liquid water at 300 keV, and 98.6% at 2000 keV (ICRU, 1984). Since this contribution is so small, we used an estimate of the radiative stopping power, S_rad, following Hussein et al. (2023), expressed in terms of ⟨Z⟩, the mean Z by mass. The total stopping power is then the sum of the radiative and collision stopping powers.

| Scattering cross sections
For energies E_e ≤ 300 keV, both the inelastic and elastic scattering cross sections for each element are calculated in the same way as in RADDOSE-XFEL (Dickerson et al., 2020), which calculates the stopping power of photoelectrons of energies on the order of that of the incident X-rays. The elastic scattering cross sections are taken from tabulated values (Jablonski et al., 2016). The inelastic scattering cross sections are calculated using the generalized oscillator strength model for outer shell collisions (Sempau et al., 2001), and a combination of the plane-wave Born approximation and distorted-wave Born approximation for inner shell collisions (Bote et al., 2009). For energies greater than 300 keV, the inelastic scattering cross sections are also calculated similarly. For elastic scattering cross sections, σ_el, at these energies, a simple formula that matches well with partial wave computations is used (Langmore & Smith, 1992).
To calculate both inelastic and elastic scattering cross sections for the entire sample, the cross sections are summed for each atom present in the exposed volume and then converted to a MFP. Poisson statistics are then used to determine the number of elastic and inelastic scattering events, which is appropriate since scattering events are independent.

| Parameters important for dose calculation
We have investigated which parameters are important for accurately estimating the dose in electron crystallography experiments. Doses were estimated in RADDOSE-ED for a 200 nm cubic crystal of pure low-density amorphous ice, and the incident beam energy was varied between 10 and 2000 keV in 10 keV steps (Figure 6a). Doses initially steeply drop with increasing beam energy due to a decrease in collision stopping power, before beginning to rise very gradually above 1160 keV as a result of the increasing radiative stopping power.
As well as varying the beam energy, the effect of atomic composition on dose was tested. We used a 200 nm crystal containing 500 of the same atom, ranging in atomic number from 1 to 83, and we also made estimations for radon, thorium, uranium, and plutonium samples. The doses generally decrease with increasing atomic number (Figure 6b), which agrees well with those calculated by Egerton (2021).

| Comparison to MX
To determine if RD progresses similarly at cryo-temperatures in MicroED as it does in X-ray crystallography at 100 K, the absorbed dose of a MicroED dataset for proteinase K (Hattne et al., 2018) was calculated with RADDOSE-ED (Table 2). The reduction in the intensity of the diffraction pattern is shown in Table 2 and compares well with values from X-ray crystallography: for lysozyme at a temperature of 100 K, a D_1/2 of 12-14 MGy at 1.6-2.5 Å resolution (Teng & Moffat, 2000) and of 12.5-12.9 MGy at 1.8-35 Å resolution (de la Mora et al., 2011) has been observed. In terms of specific damage in MicroED, the disulphide bonds break before decarboxylation first appears (Hattne et al., 2018), as also observed in X-ray crystallography (de la Mora et al., 2011).

| Information coefficient
As well as estimating the absorbed dose, RADDOSE-ED also estimates the diffracted intensity per unit dose ("the diffraction efficiency" [Dickerson & Garman, 2019]), which is similar to the information coefficient defined by Peet et al. (2019) but applied to MicroED instead of to single particle cryoEM (SPA). The number of incident electrons that contribute useful signal is assumed to be those that elastically scatter once, and only once, in the sample and do not inelastically scatter. This is because inelastically scattered electrons will broaden the diffraction spots, and multiple elastic scattering (dynamical scattering) breaks the relationship between the Bragg intensities and the single-scattering values used for structure determination (Glaeser & Downing, 1993; Subramanian et al., 2015). This number is then divided by the absorbed dose to give the information coefficient. For any given crystal, RADDOSE-ED will also estimate and output the beam energy that maximises the information coefficient.
We used RADDOSE-ED to estimate the information coefficient for crystals of pure low-density amorphous ice, with thicknesses varying from 10 to 1000 nm in 10 nm steps, and beam energies of 100, 200, 300, 500, 1000, and 2000 keV (Figure 7). RADDOSE-ED was also used to estimate the beam energy that maximises the information coefficient for a given crystal thickness (inset of Figure 7).
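A minimal sketch of this single-elastic-scattering criterion, treating elastic and inelastic scattering as independent Poisson processes; the mean free paths and the dose-per-fluence figure below are illustrative assumptions of this sketch, not values computed by RADDOSE-ED.

```python
import math

def useful_fraction(thickness_nm, mfp_el_nm, mfp_inel_nm):
    """Fraction of incident electrons that scatter elastically exactly once and do not
    scatter inelastically, assuming independent Poisson scattering statistics."""
    n_el = thickness_nm / mfp_el_nm
    n_inel = thickness_nm / mfp_inel_nm
    return n_el * math.exp(-n_el) * math.exp(-n_inel)

# Illustrative values for an ice-like sample at 300 keV (assumed, not from RADDOSE-ED).
mfp_el, mfp_inel = 450.0, 300.0     # nm
dose_per_fluence = 3.7              # MGy per e-/A^2, the 300 keV figure quoted for Figure 6

for t in (50, 100, 200, 400, 800):  # crystal thickness in nm
    frac = useful_fraction(t, mfp_el, mfp_inel)
    print(f"t = {t:4d} nm: useful fraction = {frac:.3f}, "
          f"relative information coefficient = {frac / dose_per_fluence:.4f}")
```

With these assumed numbers the useful fraction peaks near 180 nm, qualitatively matching the thickness optimum reported below; higher beam energies enter through longer mean free paths and a lower dose per unit fluence.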
For very thin specimens, such as 2D crystals, lower incident electron energies are more favourable, with an optimal energy of 100 keV. As the specimen gets thicker, the information coefficient increases for all incident energies and peaks between 110 nm (100 keV) and 270 nm (2000 keV). The peak information coefficient increases as the energy increases, with the overall highest being with an 820 keV beam and a 250 nm thick crystal. At energies above this, the peak information coefficient begins to reduce, and there is little or no extra improvement for crystals <1 μm thick. This behaviour is caused by several factors. For thin specimens, lower energies are more beneficial since the ratio of elastic scattering to inelastic scattering is higher (Peet et al., 2019), meaning that there is more signal per unit dose. As specimen thickness increases, the information coefficient increases since the number of scatterers, and hence the diffracted intensity, increases. For thicker specimens, higher incident energies, and thus lower scattering cross sections, are required to minimise losses in signal from inelastic and dynamical scattering. However, since the ratio between elastic and inelastic scattering continues to reduce with increasing energy, there is eventually no improvement for samples thicker than ≈250 nm, leading to an optimum energy of 820 keV.
[Table 2 fragment: "... 100 200 300 ... All carboxylate groups removed: 22". Note: The doses are calculated for the point at which the intensity drops to 50% of that in the first frame (D_1/2), as well as for where the disulphide bond breakage and decarboxylation of aspartate and glutamate side chains were observed at cryo-temperature (Hattne et al., 2018).]

| Discussion: RADDOSE-ED
We have extended RADDOSE-3D to include a new subprogram, RADDOSE-ED, to convert fluences in e⁻/Å² into doses in MGy for electron diffraction experiments. Since account is now taken of primary beam energy and specimen composition, the biggest error in dose is likely to be the fluence measurement itself. Methods to measure this as accurately as possible are discussed by Krause et al. (2021). Unless beam currents are very low, one must be careful when using the counts from a direct electron detector, because of coincidence loss (Li et al., 2013). Measuring the beam current as indicated by the built-in screen ampere meter can also often be inaccurate. Installing a Faraday cup, or using the drift tube of a spectrometer as a Faraday cup, is often the most accurate method (Krause et al., 2021), and can also be used to properly calibrate the screen ampere meter. We compared the symptoms of RD (specifically the rates of intensity decay, disulphide bond breakage, and decarboxylation) at cryo-temperatures in MicroED and X-ray crystallography, and determined that both global and specific damage events happen at similar doses. Considering that the majority of damage in X-ray crystallography is from photoelectrons and subsequent electrons produced (O'Neill et al., 2002), this is not surprising.
The information coefficient output by RADDOSE-ED allows beam energy and crystal thickness to be optimised.For thin crystals, such as 2D crystals, lower energies are predicted to be more optimal.As specimen thickness increases, signal losses due to multiple elastic scattering and inelastic scattering mean that higher energies, where the cross sections are lower, become more favourable.In fact, much higher energies than we are currently using should be used to both maximise the information coefficient and extend the upper limit of crystal thickness that can be productively investigated by MicroED, which has been experimentally measured to be two inelastic MFPs (Martynowycz et al., 2021).However, as the accelerating voltage increases, the size and cost of the microscope also rise, making energies above 300 keV a potentially costly endeavour.Although our results suggest 820 keV is optimum, the improvement above 500 keV becomes relatively modest, making 500 keV perhaps an ideal compromise between cost and maximising the information coefficient. It is important to note that the information coefficient is not the only metric that will determine the quality of electron diffraction data.The information coefficient only considers signal and not noise; it assumes that electrons that have inelastically scattered, or elastically scattered more than once, are removed.Although this is true for inelastically scattered electrons if an energy filter is used, electrons that elastically scatter more than once (also termed dynamical scattering) are only removed if they scatter beyond the objective aperture.Dynamical scattering is particularly problematic since it breaks the kinematic approximation, meaning that kinematically forbidden reflections will now appear.As a result, diffraction from thick specimens is likely to be worse than predicted by a simple information coefficient, and higher energies thus may be more beneficial than suggested by the results shown in Figure 7. On the other hand, the harmful effects of dynamical scattering can potentially be reduced both experimentally (Clabbers & Abrahams, 2018;Subramanian et al., 2015) and computationally (Clabbers et al., 2019;Klar et al., 2023;Palatinus et al., 2015;Spence & Donatelli, 2021). For crystals still too thick to be amenable to MicroED, specimens can be thinned by FIB milling to reduce them to the optimum thickness for a given primary beam energy (Duyvesteyn et al., 2018;Martynowycz et al., 2019).However, this will leave a layer of damage, estimated to be 30-60 nm thick for FIB milling ion energies of 30 keV (Berger et al., 2023;Lucas & Grigorieff, 2023;Parkhurst et al., 2023;Tuijtel et al., 2023;Yang et al., 2023).As a result, any FIB milled specimen will have less signal than expected for its thickness and the information coefficients will thus be reduced. Lastly, although written for electron diffraction, RADDOSE-ED can in principle be used to convert fluences in SPA or cryogenic electron tomography (cryoET) into doses in MGy.The doses are likely to be a slight overestimate for very thin specimens used for SPA since RADDOSE-ED does not consider the escape of secondary electrons from the sample.The information coefficient described here does not apply to SPA or cryoET, since in those regimes the experimenter is mostly interested in a specific molecule of a particular size.As specimen thickness increases beyond this size, the signal will only decrease as a result of extra scattering (Dickerson et al., 2022;Russo et al., 2022). 
| RADDOSE-3D GUI
RADDOSE-3D has been interfaced at several synchrotron beamlines worldwide and is becoming embedded in assisting in data optimisation strategies. For instance, it has been integrated into Blu-Ice, used to control data collections at all the MX beamlines at the Stanford Synchrotron Radiation Laboratory, and into GDA at beamline I04 at Diamond Light Source (Masmaliyeva & Murshudov, 2019; McPhillips et al., 2002). To aid experimenters in planning their experiments, we have written an open-access RADDOSE-3D GUI in C++ which is suitable for running on both Windows and Linux-based operating systems (Figure 8). The GUI allows the user to select which RADDOSE utility they wish to run on a drop-down menu tab at the top of the right hand of the screen which lists: "Standard RADDOSE-3D", "XFEL", "Monte-Carlo", and "RADDOSE-ED". Once selected, the appropriate data entry boxes are displayed with 3 different tabs for the three blocks of "Crystal", "Beam", and "Wedge". For instance, if a standard RADDOSE-3D run for MX, SAXS, or SMX is required, the following information about the sample can be entered in the "Crystal" tab: its shape (cuboid, polyhedron, cylindrical, spherical), its XYZ dimensions (in μm), and the desired Pixels per Micron (default 0.1: this affects the voxelation and hence the resolution of the calculation). The next drop-down menu allows the user to specify the absorption coefficient calculation (ACC) appropriate to their experimental modality (MX, SAXS, or SMX): the ACC in-built in RADDOSE-3D, EXP (uses a PDB file), SEQUENCE, SAXS, SAXSSEQ, SMALLMOL, or CIF as input. Once an option is selected, the data entry boxes change to match the application (e.g., for SAXS, the protein concentration and an "advanced input" tab at the bottom allow entry of the sample container characteristics: type, thickness, density, composition). For an MX run, the crystal unit cell, the number of monomers, the number of residues per monomer, and the heavy elements per monomer are entered as well as the solvent atom composition (mM) and the solvent fraction. Input can then be manually edited using a tab at the bottom of the screen. The "Beam" and "Wedge" blocks can be similarly completed before running the program by pressing "Run". The dose estimation results appear on the screen. Note that a full description of all the options is available in the User Guide (https://github.com/GarmanGroup/RADDOSE-3D/blob/master/doc/user-guide.pdf).
F I G U R E 7 The number of single elastically scattered electrons that have not inelastically scattered per MGy of absorbed dose (information coefficient) versus the thickness of a crystal consisting of pure low-density amorphous ice. This is plotted for 6 different incident electron energies between 100 and 2000 keV. The optimum incident energy for a given thickness is plotted in the inset. In general, incident electron energies above 300 keV and crystals ≈200 nm thick maximise the information coefficient.
F I G U R E 8 The main window for the RADDOSE-3D GUI, which shows the inputs for the crystal block of the standard RADDOSE-3D program.
The GUI can be downloaded from https://github.com/GarmanGroup/RADDOSE-3D. To run it, Java must already be working, and if R (https://www.r-project.org/) is installed, a 3D representation of the dose distribution for the sample can be produced from the RADDOSE-3D output.
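For users who prefer editing the input file directly rather than the GUI, a minimal RADDOSE-3D input might look like the sketch below. The keyword names are quoted from memory of the RADDOSE-3D examples and user guide and should be treated as illustrative; consult the user guide linked above for the exact syntax, and note that the text states RADDOSE-ED is selected by adding "Subprogram EMED" to the crystal block.

```
Crystal
Type Cuboid
Dimensions 100 80 60
PixelsPerMicron 0.5
AbsCoefCalc RD3D
UnitCell 79 79 38
NumMonomers 8
NumResidues 129
SolventFraction 0.45

Beam
Type Gaussian
Flux 2e12
FWHM 20 70
Energy 12.4
Collimation Rectangular 100 100

Wedge 0 90
ExposureTime 50
```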
| CONCLUSIONS
In this article, we have presented three related new developments that augment the toolbox available to structural biologists faced with the challenge of RD to their samples. First, we have now incorporated an option to include an intensity decay model into the calculation of the previous "fluence weighted dose" to give a "diffraction-decay weighted dose", and discussed the interpretation and context of this intensity decay model. Second, we have extended the capabilities of the open source RADDOSE-3D code, which originally gave dose estimates solely for incident X-rays but which now gives the option, in its subprogram RADDOSE-ED, of calculating the dose absorbed for incident electrons as used in MicroED experiments. Since the use of gray should allow a more realistic comparison of RD between different instruments and samples, the dose given by RADDOSE-ED is in gray instead of e⁻/Å², the unit historically used in electron microscopy. Third, we have written a RADDOSE-3D GUI to provide a simple platform that includes options for estimating the absorbed dose for a wide range of structural biology experiments (MX, SMX, SAXS, XFEL, ED). We hope that these developments will augment the experimentalists' capabilities and contribute to a better understanding of RD effects, as well as enable further optimisation of data collection strategies.
AUTHOR CONTRIBUTIONS
Joshua L. Dickerson and Patrick T. N. McCubbin contributed equally to this study.
F I G U R E 2 Comparison of DDWD, computed using the Leal et al. (2013) IDM implemented in RADDOSE-3D, to the FWD computed by RADDOSE-3D, and to measures of specific and global damage for the room temperature high dose rate diffraction data from de la Mora et al. (2020). (a) Fluence-weighted dose (FWD, orange curve) increases linearly over the exposure time whereas diffraction-decay weighted dose (DDWD, blue curve) goes through a maximum. (b) DDWD correlates with the observed specific damage to a disulphide bond and (c) with the Wilson B-factor (see text for further details).
F I G U R E 4 Fitting the IDM proposed by Leal et al. (2013) to (a, c) the room temperature high dose rate and (b, d) cryo-temperature data from de la Mora et al. (2020). Anomalous data points (indicated by red circles) were excluded during fitting but still shown on the graph: at ≈0.45 MGy for the room temperature data, and ≈0.45 and ≈2.38 MGy for the cryo-data.
F I G U R E 5 Simplified model for the scale factor K in terms of dose-dependent change to the atomic B-factor distribution, fitted to the room temperature high-dose rate dataset from de la Mora et al. (2020). (a) Correlation between β and γ values reported by Leal et al. (2013) (Spearman's rho = 0.474, Pearson's r = 0.656), which is consistent with the corresponding B-factor and scale factor terms modelling the same underlying mechanism. (b) Predicted curve and parameter values for the model fitted against the experimental dose-dependence of the scale factor K. The shape, a, is a parameter of the model used for the atomic B-factor distribution (see Supplementary section 1.2.3 for details), and const. is a proportionality constant. (c) For the parameter values from (a), how the modelled atomic B-factor distribution would change with dose. The area under the curve to the left of B_Break is proportional to the scale factor K.
F I G U R E 6 The RADDOSE-ED estimated dose per e⁻/Å² for a 200 nm cubic crystal of pure low-density amorphous ice as (a) a function of incident beam energy and (b) atomic composition. The estimated doses are 6.5 MGy/(e⁻/Å²) at 100 keV, 4.4 MGy/(e⁻/Å²) at 200 keV, and 3.7 MGy/(e⁻/Å²) at 300 keV.
T A B L E 2 RADDOSE-ED calculated dose of a MicroED radiation damage dataset for proteinase K.
2024-06-27T05:08:46.307Z
2024-06-25T00:00:00.000
{ "year": 2024, "sha1": "d7fe9a03580646de5c946e8cf87959389a94015d", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "d7fe9a03580646de5c946e8cf87959389a94015d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
38287602
pes2o/s2orc
v3-fos-license
Influence of body weight and substrate granulometry on the reproduction of Limnodrilus hoffmeisteri (Oligochaeta: Naididae: Tubificinae)
Limnodrilus hoffmeisteri Claparede, 1862 is a cosmopolitan Oligochaeta widely used as an indicator of organic pollution in water bodies. Previous contributions have shown the effects of organic matter and temperature on the life history of the species, although very little is known about the factors that influence its reproduction. This study aimed 1) to test whether the larger weight of individuals results in an increase in the reproduction rate and 2) to test the influence of two granulometric fractions of sand on the reproduction and growth of the species. In the first experiment, specimens of L. hoffmeisteri were separated into two groups with different average weights (small individuals = 6.63 ± 1.28 mg; large individuals = 12.44 ± 3.99 mg) and kept at 15 ± 1°C for 21 days. The results of this experiment showed that the number of cocoons was statistically similar between the groups, but the mean number of eggs per cocoon produced by large individuals (7.45 ± 2.50) was greater than that produced by small individuals (2.78 ± 0.35). In the second experiment, weekly observations were conducted for 25 weeks in two groups of 30 specimens: one kept in fine sand and the other in medium sand, at 25 ± 1°C. The single significant difference was in the number of cocoons per adult per day (0.37 ± 0.22 and 0.23 ± 0.24, for fine and medium sand, respectively). Individuals reared in fine sand produced a greater number of descendants compared to those reared in medium sand in the same period of time.
Species of limnic Oligochaeta are recognized as important food sources for various aquatic insects (LODEN 1974) and fish (KOSIOREK 1974, RIERA et al. 1991, GOPHEN et al. 1998, RAHMAN et al. 2006). YAN & LIANG (2004) found that Oligochaeta are a rich source of food, since 90% of their dry weight consists of protein and fat. Limnodrilus hoffmeisteri CLAPAREDE, 1862 (Naididae) is a common and abundant aquatic oligochaete in many parts of the world (KENNEDY 1965), being widely used as an indicator of organically polluted environments (PAOLETTI & SAMBUGAR 1984, VERDONSCHOT 1989, FINOGENOVA 1996, ALVES & LUCCA 2000, ALVES et al. 2006, DORNFELD et al. 2006, MARTINS et al. 2008). The biology of this species has been widely studied (KENNEDY 1966, ASTON 1973, JUGET et al. 1989, NASCIMENTO & ALVES 2008), but the results of many studies (KENNEDY 1965, ASTON 1973, FISHER & BEETON 1975, REYNOLDSON 1987, PASTERIS et al. 1999, RABURU et al. 2002) differ, especially with respect to the growth rate and the number of cocoons and eggs found. These discrepancies come from the lack of standardized research methods (for example, differences in density, food quality, type of sediment), which makes it difficult to replicate the experiments and to compare their results (FISHER & BEETON 1975, SOBHANA & NAIR 1984).
Despite the elevated number of studies involving L. hoffmeisteri and organic pollution of aquatic environments, there is little data on the factors that influence this species' distribution, behavior, and reproduction, probably because of taxonomic problems (FISHER & BEETON 1975, SOBHANA & NAIR 1984, PASTERIS et al. 1999, RABURU et al. 2002). ASTON (1973) noticed differences in the average number of eggs per cocoon in two different experiments with L. hoffmeisteri
(one experiment on the effect of temperature, and another on the effect of dissolved oxygen, on egg production). He raised the hypothesis that this difference was influenced by the weight of the individuals used in the experiments, because they were different. Since then, there have been no attempts to corroborate Aston's hypothesis.
The substrate is essential to the survival and reproduction of oligochaetes and could influence the distribution of species. For instance, ASTON & MILNER (1982) reported the importance of sediment to the survival, growth and reproduction of Tubificidae, since it facilitates dislocation and feeding, physically supporting the organisms during their respiratory movements. According to SAUTER & GÜDE (1996), the size of the substrate grains influences the distribution of Oligochaeta species. Ecological […] 2000). However, MOORE (1979) highlighted the importance of organic matter to the distribution of these animals, since availability of organic matter increases the number of algae and bacteria, both food sources for Oligochaeta. This study had two main objectives. The first was to test whether large individuals of L. hoffmeisteri produce more eggs and/or cocoons than small individuals. The second was to assess the influence of two granulometric fractions of sand on the reproduction and growth of L. hoffmeisteri under laboratory conditions.
MATERIAL AND METHODS
The specimens of L. hoffmeisteri used in the experiments were obtained from a culture maintained at the Laboratório de Invertebrados Bentônicos, Universidade Federal de Juiz de Fora (Juiz de Fora, MG, Brazil) under room temperature and controlled luminosity conditions. The sand used was collected from the Peixe River (21°54'37"S, 43°33'24"W), located in the city of Juiz de Fora. It was previously inspected under a 40x stereomicroscope to remove invertebrates. The sand was separated into a medium fraction (0.250-1.00 mm) and a fine fraction (0.057-0.250 mm) by sifting.
Relationship between body weight and egg-laying
A total of 50 adults in the reproductive stage (with visible clitellum and eggs in the ovisac) were chosen. They were weighed and separated into two groups: small individuals (25 individuals with 6.63 ± 1.28 mg mean weight) and large individuals (25 individuals with 12.44 ± 3.99 mg mean weight).
The individuals in each group were kept in five 250-mL beakers (five individuals per beaker) containing 100 mL of fine sand and 100 mL of dechlorinated and well-aerated water. At the beginning of the experiment, 0.1 g (dry weight) of fish feed (Alcon BASIC® - MEP200 Complex - Tab. I) was added to each beaker as an organic matter source. The treatments were maintained in Biological Oxygen Demand (B.O.D. - EletroLab® EL 101) incubators at 15 ± 1°C for 21 days, similar conditions to those used by ASTON (1973), with adjustments only of the water level.
At the end of 21 days, the sediment was washed in a 0.25 mm sifter and analyzed under a 40x stereomicroscope to count young, adults and cocoons. To count the eggs, sand grains attached to the cocoons were removed with a Stanley knife. We recorded final adult weight, number of cocoons and eggs, average number of cocoons per adult per day, average number of eggs per cocoon and average daily growth rate (Gw%, according to REYNOLDSON 1987): Gw% = [(ln W2 − ln W1) × 100] × t⁻¹, where W1 = initial weight (mg), W2 = final weight (mg), and t = time in days.
To compare the initial and final average weight of adults between the groups, the Mann-Whitney test at 5% significance was used. The t-test (at 5% significance) was used to compare the other variables (number of cocoons, number of eggs per cocoon, cocoons per adult per day, and Gw%) between the treatments. All tests have an n of 10 (5 of each treatment), so that each beaker was a replicate.
Influence of grain size on the reproduction
We assessed the effect of grain size in three steps: 1) cocoon production; 2) hatching; and 3) growth and reproduction. This assessment was conducted in 250-mL beakers, which contained 100 mL of substrate (fine or medium sand), 100 mL of water (dechlorinated and aerated) and 0.1 g of fish food (Alcon BASIC® - MEP200 Complex) as a source of organic matter.
In the first step, specimens were kept in 12 beakers (six with fine sand and six with medium sand), each containing five mature specimens, to allow them to produce the cocoons. The beakers were kept in incubators at 25 ± 1°C. Every other day, for 20 days, the substrate of each beaker was washed in a 0.25-mm sifter and analyzed under a stereomicroscope to collect and count the cocoons.
In the second step, we used a 3-mL Pasteur pipette to remove cocoons from the sifter and transferred them to 100-mL beakers, containing 25 mL of substrate (fine or medium sand, according to the substrate in which they were collected) and 25 mL of dechlorinated and aerated water. All cocoons collected on the same day for each substrate were put in the same beaker (one with fine sand and another one with medium sand). The beakers containing the cocoons were kept in incubators at 25 ± 1°C and analyzed under a stereomicroscope every other day (during a 20-day period) to observe and count eclosions, allowing the observers to record the time between laying the cocoon and its eclosion.
The third step began with the selection of 30 young individuals for each type of substrate among the new hatchings, to observe growth and sexual maturation.Individuals were selected based on the presence of normal movement and absence of body deformations.For each substrate, six 250-mL beakers, containing 100 mL of sand, 100 mL of water and five individuals each, were analyzed weekly during 25 weeks (175 days).The weights of individuals' and the number of eggs and cocoons were recorded.In order to do this, the substrate was washed in 0.25mm sifters and analyzed under a stereomicroscope.We started weighing the individuals a week after their eclosion, because they were very small at the time of hatching, and the process could hurt them.To avoid stress, before washing the substrate, we removed the organisms and put them in a Petri dish containing only dechlorinated water.After having collected the cocoons we put the organisms back in the beakers filled with new sand and water, and with 0.1 g of fish food. For each treatment, average daily growth rate (G w %), time of sexual maturation, number of cocoons per adult per day and number of eggs per cocoon were determined.The test of proportion (z-test) with an n of 2 was used to compare the proportion of eclosion in each interval of time between the treatments.The t-test was used to compare the average time of sexual maturation, mean individual weight, average number of eggs per cocoon and average number of cocoons per adult weekly between the two types of sand.For all tests, a 5% of significance was adopted, with n = 12.The weights of individuals were transformed in natural logarithm [ln(weight +1)] to normalize their distribution (the Shapiro-Wilks Normality Test was used with 5% of significance). Relationship between body weight and egg-laying A total of 93.3% of the small and 100% of the large individuals survived.The initial weights of small and large individuals were 6.63 ± 1.28 mg and 12.44 ± 3.99 mg, respectively (U = 538.00;n = 10; p < 0.001) and, after 21 days, the final weights were 8.01 ± 1.60 mg for small and 15.89 ± 5.97 mg for large individuals (U = 540.00,n = 10, p < 0.001).There was no significant difference between the average daily growth rates (G w %) of small and large individuals (0.90 ± 0.48% and 1.17 ± 1.19%, respectively; t = 0.598, n = 10, p = 0.574). Influence of grain size on the reproduction A total of 115 cocoons from fine sand and 101 from medium sand were collected at the first step.The time between laying the cocoon and its eclosion, observed at the second step, is shown in figure 2. In fine sand, 84.83% of the young hatched between 8 and 12 d.Virtually the same rate was observed for medium sand, 83.67% hatched in the same period (z = 0.321, n = 2, p = 0.748). The curves of growth (Fig. 3) show almost a constant weight gain during the 25 weeks of observation.Comparing the two curves, it is possible to observe that the individuals maintained in medium sand grew less than individuals maintained in fine sand.This difference became more evident after the 18th week.Despite this, the average daily growth rates (G w %) for the two treatments at the end of the 25 weeks did not differ significantly (Tab.II). 
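As a quick check of the growth-rate formula given in the Methods above, the snippet below recomputes Gw% from the mean weights reported for the small-individual group in the first experiment (6.63 mg initial, 8.01 mg final, 21 days); it reproduces the reported 0.90% per day.

```python
import math

def gw_percent(w1_mg, w2_mg, t_days):
    """Average daily growth rate Gw% = [(ln W2 - ln W1) * 100] / t (REYNOLDSON 1987)."""
    return (math.log(w2_mg) - math.log(w1_mg)) * 100.0 / t_days

# Mean initial and final weights of the small-individual group reported above.
print(f"Gw% = {gw_percent(6.63, 8.01, 21):.2f} % per day")   # prints ~0.90
```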
The time of sexual maturation (laying of the first cocoon) varied between 3 and 10 weeks of life (average 7.17 ± 2.93) for the organisms kept in fine sand, and 6 to 11 weeks (average 9.00 ± 2.00) for those kept in medium sand (t = -1.228, n = 12, p = 0.251). The average number of cocoons per adult per week was slightly higher in the fine sand treatment in almost all weeks (Fig. 4). Therefore, at the end of the 25 weeks, the average number of cocoons per adult in fine sand was greater than in medium sand (Tab. II). By contrast, the number of eggs per cocoon was similar in all weeks (Fig. 5) and at the end of the 25 weeks (Tab. II).
Table II. Average number of eggs per cocoon (± standard deviation [SD]) and of cocoons per adult per day (± SD), and average rate of daily growth (Gw%) (± SD), observed for Limnodrilus hoffmeisteri cultivated in fine sand (0.057-0.250 mm) and medium sand (0.250-1.000 mm) at 25 ± 1°C during 175 days.
DISCUSSION
The cocoons of L. hoffmeisteri are covered with fine sediment particles that decrease their detection in the substrate (ASTON 1973, LAZIM et al. 1989), a fact that was also observed in this study. This characteristic is likely to provide more protection against organisms that can harm embryo development. ASTON (1973), in two different experiments, observed an average of approximately 5 and 1.35 eggs per cocoon when he studied L. hoffmeisteri, with average weights of 10.5 and 3.5 mg, respectively. His results, combined with the results of the present study, show a positive correlation between the weight of individuals and the number of eggs per cocoon. PARIS & PITELKA (1962) found a positive relationship between the size of the female of Armadillidium vulgare Latreille, 1804, a terrestrial isopod, and the number of juveniles produced. VREYS & MICHIELS (1995) and ILANO et al. (2004) observed a positive correlation between the size of the genitor and the number of eggs/cocoons produced by the planaria Dugesia gonocephala (Girard, 1850) and the gastropod Buccinum isaotakii (Kira, 1959), respectively. These studies confirm the positive relationship between body mass and reproduction for some invertebrates.
An interesting observation is that heavier L. hoffmeisteri individuals laid more eggs per cocoon, while the number of cocoons from lighter and heavier individuals was statistically similar (present study). By contrast, an increase in temperature led to an increase in the number of cocoons of this species, while the number of eggs per cocoon was maintained (NASCIMENTO & ALVES 2009). Higher temperatures accelerate the metabolism of organisms and cause an increase in the number of reproductive events (HOWE 1967). In Oligochaeta this can be represented by the number of cocoons produced. Moreover, an increase in body weight leads to greater fecundity, which implies a greater number of eggs per individual (VREYS & MICHIELS 1995). Individuals with larger body mass can invest more energy in reproduction, so they have more reproductive success compared to smaller conspecifics (VREYS & MICHIELS 1995).
In the present study, a positive growth rate was practically constant throughout the experiment, even after the specimens reached sexual maturity and laid their cocoons. MARCHESE & BRINKHURST (1996) observed the same pattern in Branchiura sowerbyi Beddard, 1892 (Naididae: Tubificinae) after they laid their first cocoon. SEBENS (1987) emphasizes that some invertebrates, including Oligochaeta, can exhibit indeterminate growth, with no asymptote in the growth curve. This growth pattern seems to be present in L. hoffmeisteri, because even in the reproduction phase the organisms continue to grow constantly.
hoffmeisteri, because even in the reproduction phase the organisms continued to grow steadily.

According to the results of NASCIMENTO & ALVES (2009), the time of embryonic development up to eclosion for L. hoffmeisteri was less than 21 days at 25°C. This agrees with our findings, which showed that more than 80% of the young specimens hatched within 8 to 12 days after the cocoon was laid. This period is shorter than that observed for B. sowerbyi (14 to 16 days) under similar temperature conditions (NASCIMENTO & ALVES 2008, LOBO & ALVES 2011). ASTON (1973) observed that L. hoffmeisteri is capable of developing from an embryo to a mature individual in less than five weeks. The development period obtained here was longer (seven weeks on average) than that observed by that author. This may have been a result of the stress caused by weekly handling. According to MARCHESE & BRINKHURST (1996), development is slower in individuals of B. sowerbyi handled on a weekly basis than in individuals handled every two weeks. For L. hoffmeisteri, however, weekly observations are necessary, since after two weeks a large number of young specimens would already have hatched, making it difficult to assess the number of eggs per cocoon.

In the present study, no significant difference between the two granulometric fractions tested was observed for most parameters analyzed. The most important difference was the average number of cocoons per adult per day, which was greater in the fine sand treatment. This suggests that individuals living in this kind of substrate are fitter, since they can produce a greater number of descendants over time. This is a likely explanation for the positive correlation between the abundance of L. hoffmeisteri and the fine sediment fraction (< 0.210 mm) in Diogo Lagoon (Luiz Antônio, SP, Brazil) reported by ALVES & STRIXINO (2000). It may also account for the largest number of cocoons reported by ASTON & MILNER (1982) in their experiment mixing fine sand (0.072-0.250 mm) and medium sand (0.250-1.000 mm) in activated sewage (a product of sewage treatment containing 78% organic matter), compared with the other grain size fractions tested (pure activated sewage, coarse sand, clay, and mud).

We conclude that the hypothesis raised by ASTON (1973), that large individuals of L. hoffmeisteri produce a larger number of eggs, is accepted, since body weight was positively correlated with the number of eggs produced. Additionally, we conclude that grain size influences the reproduction of the species, with individuals reared in fine sediment producing more cocoons per adult than those reared in medium sediment.

ACKNOWLEDGMENTS

We would like to thank the Fundação de Amparo à Pesquisa do Estado de Minas Gerais (FAPEMIG) for the research grant given to the first author, and all the reviewers for the suggestions that increased the quality of this paper.

2 Corresponding author. E-mail: haroldo.lsn@gmail.com

ABSTRACT. Limnodrilus hoffmeisteri Claparede, 1862 is a cosmopolitan Oligochaeta widely used as an indicator of organic pollution in water bodies. Previous contributions have shown the effects of organic matter and temperature on the life history of the species, although very little is known about the factors that influence its reproduction. This study aimed 1)

Table I. Composition of the fish food Alcon Basic® used as the organic matter source in the experiments with Limnodrilus hoffmeisteri. Figures provided by the manufacturer (value per kilogram of product).
2017-10-23T22:57:21.836Z
2011-10-01T00:00:00.000
{ "year": 2011, "sha1": "f706335d1c85a96afc2e6c9ceae33c49a69299c0", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/zool/a/wJ87qM76sJkQpgXL7tvXcgq/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f706335d1c85a96afc2e6c9ceae33c49a69299c0", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
15727860
pes2o/s2orc
v3-fos-license
Re-examination of long distance effects in $b\to sl^+l^-$ We re-analyse the long distance contributions to the process $b \to s l^+ l^-$. Full $q^2$-behavior of the vector meson dominance amplitude is used together with the effect of Terasaki suppression, and comparisons with the previous results are given. We show that the interference between short- and long- distance contributions makes it difficult to extract the short distance information from the dominant long distance background, either in the dilepton invariant mass distribution or in the single lepton energy spectrum. Rare decays through the flavor changing b → s transitions provide good test of the standard model (SM), and are expected to give signals of new physics [1]. The branching ratio of the process b → sγ, which has already been measured by the CLEO collaboration [2], is within the SM predictions. Unlike the decay b → sγ, the process b → sl + l − (l = e or µ) is expected to be dominated by long distance contributions through the mechanism of vector meson dominance(VMD) [3]. However, it was usually believed that the long distance (resonance) contributions arise only in some particular region of the invariant mass spectrum of the dilepton pair [4], since the involved resonance ψ(ψ ′ ) peak is very sharp. Detailed calculation [3] shows that there exists significant interference between the short and long distance contributions, which leaves only a small portion of kinematic region at low dilepton invariant mass where the interference effect by the resonances is small. The energy spectrum of single lepton has also been given in [5] where a window of nearly pure short distance information is found. In the present work, we will re-examine both the dilepton invariant mass and the single lepton energy spectrums using an alternative treatment of the VMD amplitude. In the previous analysis of the cascade decays b → sψ(ψ ′ ) and ψ(ψ ′ ) → l + l − , an effective description is made for the later electromagnetic transition ψ(ψ ′ ) → l + l − [3,5]. The dependence of the VMD amplitude on the square of the dilepton invariant mass, q 2 , is approximated by that of the resonance mass m 2 ψ or m 2 ψ ′ in the denominator of the photon propagator. This approximation is only valid near the resonance region, and consequently, the previous analysis are not complete in the whole phase space 1 . Let us start with the short distance contributions to b → sl + l − with l = e or µ. The short distance contributions come from box, Z and photon penguin diagrams. The QCD corrected effective Hamiltonian in SM is [4]: with P L = (1−γ 5 )/2, P R = (1+γ 5 )/2, and q = p l + +p l − is the invariant mass of the dilepton. By normalizing to the semileptonic rate, the strong dependence on the b-quark mass cancels out. The differential decay rate dΓ( where f (m c /m b ) is the phase space factor: If we take the experimental result Br(B → X c eν) = 10.8% [10], the differential decay rate of Fig.1 as the dash-dotted line. In addition, there are also long distance resonance contributions from cc state. There are six known resonances in the cc system that can contribute to this decay mode [11]. The lowest two, ψ and ψ ′ , were considered in the previous analyses [3,5]. Here we also consider the same two resonances. The higher resonances will also contribute, but they are less important in our case of discussing the uncertainties, as will be shown later. 
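The normalization to the semileptonic rate mentioned above involves the phase-space factor f(m_c/m_b). As a small numerical aside, the sketch below evaluates the standard closed form for massless leptons; the quark-mass values are illustrative choices, not values taken from the paper.

```python
# Minimal numerical sketch of the phase-space factor f(m_c/m_b) appearing in
# the normalisation to the semileptonic rate. The closed form below is the
# standard massless-lepton expression; the quark masses are illustrative
# assumptions, not the inputs used in the paper.
import math

def phase_space_factor(x):
    """f(x) = 1 - 8x^2 + 8x^6 - x^8 - 24 x^4 ln x, with x = m_c/m_b."""
    return 1.0 - 8.0 * x**2 + 8.0 * x**6 - x**8 - 24.0 * x**4 * math.log(x)

m_c, m_b = 1.4, 4.8                      # GeV, illustrative quark masses
print(phase_space_factor(m_c / m_b))     # roughly 0.5
```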
Applying the VMD mechanism, the long distance contribution proceeds through b → sψ followed by ψ → γ* → l⁺l⁻, where the resonance can also be ψ′. These give the effective Lagrangian (5), where a₂ = C₁ + C₂/3 is a QCD-corrected coefficient of the four-quark operators. Below we will use the phenomenological value a₂ = 0.24, which comes from fitting the data on B meson decays [12]. Note that expression (5) differs from the previous ones [3,5] by keeping the photon propagator as −ig_μν/q² instead of −ig_μν/m_ψ² or −ig_μν/m_ψ′². Thus it holds in the whole kinematic region. The effective coupling of a vector meson, g_V(q²) (V = ψ, ψ′), is defined by (6), where ε_μ^V is the polarization vector of the vector meson V. On the mass shell of the vector meson, g_V(q²) is replaced by the decay constant g_V(m_V²), which can be obtained from the leptonic width of the vector meson via (7). The structure of eqn. (5) is the same as that of the operator O₉. It is therefore convenient to include the resonance contribution in eqn. (3) by simply making the replacement (8). Assuming a constant coupling g_ψ²(q²) ≡ g_ψ²(m_ψ²), as done in [3,5], the numerical result is given in Fig. 1 as the dashed line. It is easy to see that this spectrum is enhanced in the low-q² region due to the explicit inclusion of the photon propagator. From Fig. 1, we can also expect that higher resonances other than ψ or ψ′ contribute mainly in the region 0.6 < ŝ < 1, in which we are not interested, since near this tail of the spectrum no useful short distance information is expected to emerge. The assumption of a constant coupling g_ψ²(q²) made above can be improved by accounting for the mechanism of Terasaki suppression of the ψ−γ conversion [13]. In the framework of VMD, the data on photoproduction of ψ indicate a large suppression of g_ψ(0) compared to g_ψ(m_ψ²) [14]. This has been confirmed in [15] by constraining the dominant long distance contribution to s → dγ using the present upper bound on the Ω⁻ → Ξ⁻γ decay rate. As a result, it can be concluded that this suppression leads to a much smaller long distance contribution to the b → sγ transition [14]. We now use a momentum-dependent g_V(q²) (V = ψ, ψ′) in L_res, which was used in [16] to obtain a reduced resonance-to-nonresonance interference, where a broader region of the invariant mass spectrum sensitive to short distance physics is claimed. The momentum dependence of g_V(q²) (V = ψ, ψ′) derived using a dispersion relation [13] is given by (9), where c_ψ = 0.54, c_ψ′ = 0.77 and d_ψ = d_ψ′ = 0.043, and h(q²) is defined by (10) with r = q²/m_V² for 0 ≤ q² ≤ m_V². As a result, eqn. (9), which is valid for 0 ≤ q² ≤ m_V², is an interpolation of g_V from the photoproduction data on g_V(0) to g_V(m_V²) from the leptonic width based on the quark-loop diagram. We assume g_V(q²) = g_V(m_V²) for q² > m_V², mainly because the behavior of the ψ−γ conversion strength is not clear in this region and is not important in our case (see below). Applying Terasaki's formula (9) for the q² dependence of g_V(q²), the differential decay rate of b → s l⁺l⁻ is suppressed in the low-q² region. However, there is still significant interference between the resonance and the short distance contributions, due to the factor 1/q² coming from the propagator of the virtual photon. This is also shown in Fig. 1. We now turn to the energy spectrum of the single lepton. The integration over q² is complicated, since many functions here involve ŝ.
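Before quoting the numerical results, it may help to see how a resonance term of this kind is typically handled numerically. The VMD treatment above keeps the full photon propagator and a q²-dependent coupling; a cruder but widely used alternative in the literature absorbs the cc̄ resonances into the Wilson coefficient C₉ through a Breit–Wigner term. The sketch below implements that standard parametrization purely for illustration — it is not Eq. (8) of this paper, and the masses, widths and the fudge factor κ are approximate, illustrative inputs.

```python
# Sketch of the widely used Breit-Wigner parametrisation of the c-cbar
# resonance contribution added to the Wilson coefficient C9. This is NOT the
# VMD expression (8) of the paper; it only illustrates how a resonance term
# interferes with the short-distance amplitude. kappa and all masses/widths
# are approximate, illustrative inputs.
import numpy as np

ALPHA_EM = 1.0 / 137.0
# (mass [GeV], total width [GeV], leptonic width [GeV]) -- approximate values
RESONANCES = [(3.097, 93e-6, 5.5e-6),    # J/psi
              (3.686, 294e-6, 2.3e-6)]   # psi(2S)

def c9_resonance(q2, kappa=1.0):
    """Breit-Wigner resonance term added to C9 at dilepton mass squared q2 [GeV^2]."""
    total = 0j
    for m_v, gamma_v, gamma_ll in RESONANCES:
        total += (m_v * gamma_ll) / (m_v**2 - q2 - 1j * m_v * gamma_v)
    return kappa * (3.0 * np.pi / ALPHA_EM**2) * total

q2_grid = np.linspace(1.0, 14.0, 5)      # GeV^2
print([abs(c9_resonance(q2)) for q2 in q2_grid])
```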
We simply give the numerical results in Fig. 2 for l = µ (see also [5]). One can see that, if the 1/q² behavior is replaced by 1/m_ψ² (or 1/m_ψ′²) everywhere, there is almost no contribution from the ψ, ψ′ resonances when β < 0.2. This is the result arrived at in Ref. [5]. If, however, this 1/q² behavior is retained, there are contributions from the resonances even in the low-β region and, consequently, the resonance background is still serious. Including the effect of Terasaki suppression of g_V(q²) in the low-q² region, the result is also shown in Fig. 2, where the resonance background is reduced by only about half. From both Fig. 1 and Fig. 2, we observe that the long distance VMD contributions to the process b → s l⁺l⁻ are large once this alternative treatment of the electromagnetic subprocess is performed. It can also be seen that the single lepton energy spectrum is almost useless for extracting any short distance information from under the resonance background. The total branching ratio of b → s l⁺l⁻ turns out to be 3.6 × 10⁻⁴ with the effect of Terasaki suppression included, or 4.9 × 10⁻⁴ without it. We have treated the resonance contribution from ψ, ψ′ to b → s l⁺l⁻ alternatively, without using the effective description of the electromagnetic sub-process ψ(ψ′) → γ → l⁺l⁻. The long distance contributions are found to be significant, especially in the single lepton energy spectrum. Considering other higher resonances, and also other uncertainties existing in this decay mode [6], we conclude that it is difficult to extract short distance information, which is sensitive to new physics, from the dominant long distance contributions.
2014-10-01T00:00:00.000Z
1997-02-18T00:00:00.000
{ "year": 1997, "sha1": "44599abd26200f89f37c2606fca703c65a5e19a0", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/9702358", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "44599abd26200f89f37c2606fca703c65a5e19a0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
250676503
pes2o/s2orc
v3-fos-license
Novel dual mode disk-shaped resonator filter with HTS thin film

We propose a novel dual mode disk-shaped resonator filter with high temperature superconductor (HTS) thin films. The 5 GHz-band YBa2Cu3O7-x (YBCO) microstrip dual mode disk resonator filter on an MgO (100) substrate keeps a perfect circle shape, and a waveguide line of about a half-wavelength is capacitively coupled with the disk resonator to generate a dual mode. The fabricated filter had the equivalent frequency response of a two-pole filter with two attenuation poles. The coupling coefficient of the two orthogonal modes can be controlled by the length of the coupled waveguide line and the gap between the feeder and the disk resonator. The fabricated filter showed very low third-order intermodulation distortion (IMD3) of -73 dBc with an output power of 10 W at a temperature of 65 K. In addition, the proposed structure can be fabricated using single-side lithography and etching processes. This is an advantage for multi-stage filter applications. We believe this filter is a promising candidate structure for RF transmit system applications.

Introduction

In future wireless communications, it will be important to use radio waves effectively because the optimum frequency band is limited for each application. One solution is the suppression of spurious signals by employing a high-Q filter made of high temperature superconductor (HTS) thin films on a low loss substrate. This is because the HTS surface resistance at microwave frequencies is lower than that of normal metals such as gold and copper. For transmit-system applications in base stations, filters with high power handling and low intermodulation distortion (IMD) are required [1]. Many kinds of HTS planar-circuit filters have already been studied [2], among which disk-shaped microstrip resonators with HTS films on MgO substrates [3][4][5][6] have been found to offer high power handling and good IMD performance. On the other hand, the diameter of a disk-shaped resonator filter is half a wavelength (λ) or an integral multiple thereof. Because it consumes a large area compared with other patterned filters, a dual-mode disk-shaped filter is desirable from the point of view of saving device space. In addition, an effective RF filter has attenuation poles to obtain high filter performance [2]. A disk resonator with a disk-like pattern and a small notch is a well known dual-mode structure [2,7]. However, the current concentration at the notch is higher than in other parts of the resonator; it is one of the factors limiting the input power in notched HTS disk resonators. Moreover, current concentration in the film leads to nonlinear behaviour of an HTS thin film [8]. To improve IMD performance, it is necessary to reduce this nonlinear behaviour [9]; equalizing the current distribution in the resonator is expected to do so, improving both power handling and IMD performance. Therefore, the shape of the resonator disk should be kept smooth. Various kinds of methods to generate a dual mode with a smooth shape have already been proposed [10,12]. There are reports [10,11] of a dual mode filter which has an upper conducting layer and a dielectric substrate on microstrip disk resonators. Additionally, an elliptic-disk dual mode filter [12] has been studied. These filters have high power handling capability compared with notched filters. In this paper, we propose a dual mode disk-shaped microstrip resonator with a coupled waveguide line.
The disk resonator keeps a perfect circle shape, and a waveguide line of about half a wavelength is capacitively coupled with the disk resonator to generate a dual mode. In addition, it can be fabricated using only a one-side patterning process. This is an advantage for multi-disk filter fabrication. A 5 GHz-band YBCO microstrip dual mode disk resonator filter with a coupled waveguide line was demonstrated, and its power handling and IMD performance are discussed.

Figure 1 shows a schematic circuit pattern view of the proposed dual-mode disk-shaped microstrip resonator with a coupled waveguide line. The input and output feeders are orthogonally and capacitively coupled with the disk resonator. To generate a dual mode, a coupled waveguide line of about λ/2 equivalent electrical length is capacitively coupled. We consider that the two orthogonal modes, horizontal and vertical, are coupled through the coupled waveguide line. In addition, these parts are located on the same layer. The device can therefore be expected to show dual-mode resonance with a two-pole band pass filter (BPF) response.

Analysis of dual mode disk-shaped resonator filter with a coupled waveguide line

This filter structure was analyzed with an electromagnetic (EM) simulator based on the moment method. In the simulation, MgO (100) with εr = 9.7, tan δ = 5×10⁻⁶ and thickness 0.5 mm was used for the substrate. The HTS thin films for the signal and ground layers, and the package material for the shield, were assumed to be perfect conductors. The inner size of the package was 20 mm square. To design the S-parameter frequency responses of the device, we assumed that the coupling coefficient (k) of the two orthogonal modes is related to the delay length of the coupled waveguide. The delay length can be controlled by the length of the line and by the capacitance between the feeder and the disk resonator. The coupling coefficient as a function of the coupled waveguide length was analyzed by simulation. The coupling coefficient is described by [13] as k = (f₂² − f₁²)/(f₂² + f₁²), where f₁ and f₂ are the lower and higher resonant frequencies of the orthogonal modes, respectively. If the coupling coefficient is changed by the delay length, the delay is expected to be changed as well by the gap between the feeder and the disk resonator. The delay length as a function of the gap between the feeder and the disk resonator was therefore analyzed. A notched filter with the same bandwidth was also simulated for comparison of the current density.

Experimental

An MgO (100) substrate with thickness 0.5 mm and with epitaxial YBa2Cu3O7-x (YBCO) thin films deposited on both sides was used. Each YBCO film thickness was 500 nm. One side of the YBCO thin film was patterned using photolithography [14]. The 5 GHz dual-mode disk resonator BPF was designed and fabricated. The bandwidth of the filter was 100 MHz at −3 dB. The diameter of the disk resonator was 11 mm. The filter circuit was packaged in a gold-plated copper box. The frequency response of the S parameters was measured at a temperature of 65 K with a network analyzer. The nonlinear performance was measured using a two-tone method. Two power sources, combined near the lower and higher cut-off frequencies (fundamentals) and offset by 10 kHz, were input into the filter. The output power of the fundamentals and of the third-order intermodulation distortion (IMD3) was measured using a spectrum analyzer with an input power of up to 10 W at 65 K. Figure 2 shows simulated frequency responses of dual mode disk-shaped resonator filters with a coupled waveguide line as a function of the coupled waveguide line length.
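As a side note on the coupling-coefficient definition quoted above, the extraction of k from the two split mode frequencies is a one-line calculation. The sketch below is illustrative only; the frequencies are made-up inputs, not simulated values from the paper.

```python
# Minimal sketch of the coupling-coefficient definition k = (f2^2 - f1^2) /
# (f2^2 + f1^2): given the two split resonant frequencies of the orthogonal
# modes (e.g. read off an EM simulation), it returns k. The frequencies below
# are illustrative, not values from the paper.
def coupling_coefficient(f1_ghz: float, f2_ghz: float) -> float:
    """k for a dual-mode resonator from its lower (f1) and higher (f2) mode frequencies."""
    return (f2_ghz**2 - f1_ghz**2) / (f2_ghz**2 + f1_ghz**2)

# e.g. modes split symmetrically around 5 GHz for a ~100 MHz-wide filter
print(coupling_coefficient(4.96, 5.04))   # ~0.016
```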
This filter has the equivalent frequency response of a two-pole filter with two attenuation poles. Therefore, it is believed that the λ/2 waveguide line acts only as a delay line. The results show that the coupling coefficient of the two orthogonal modes changes with the delay length of the coupled waveguide. Figure 3 shows the coupling coefficient calculated from the simulation results as a function of the coupled waveguide line length. The length of the waveguide line includes the feeder length, and the line length is normalized by the wavelength. This means that the coupling coefficient of the filter can be controlled by the length of the coupled waveguide line. If the coupling coefficient can be changed by the phase delay, the electrical delay at the gap between the feeder and the disk resonator also influences it. The electrical delay was calculated using a disk resonator with an opposed feeder. Figure 4 shows the calculated electrical delay as a function of the gap between the feeder and the disk resonator. Based on the above results, the 5 GHz-band dual mode filter was designed and fabricated. The external Q (Qe) of the filter was also optimized with the feeder width and the gap between feeder and resonator (not shown). Figure 5 shows a photograph of the fabricated filter packaged in a shield case without the shield cover. Figure 6 shows the measured frequency response of the fabricated filter at a temperature of 65 K. The fabricated filter shows two-pole performance with two attenuation poles and reflection loss below 20 dB in the passband.

The current concentration in the filter pattern was analyzed using an EM simulator. The coupled waveguide line is about λ/2 long, but it does not act as a resonator; there is no significant resonance in the coupled waveguide within the passband frequency region. However, the current density at the bend points of the coupled line near the feeder is higher than at other points. Even so, the simulated current density there is about half of that at the notch of a notched HTS disk resonator with the same bandwidth. If it can be reduced further, the filter is expected to show even better IMD performance. Future analysis should consider the reduction of current density in these regions.

Conclusion

A novel dual mode disk-shaped microstrip resonator filter with a coupled waveguide line employing YBCO thin films was demonstrated. The fabricated 5 GHz-band filter with 100 MHz bandwidth has the equivalent frequency response of a two-pole filter with two attenuation poles. The coupling coefficient of the two orthogonal modes can be controlled by the length of the coupled waveguide line and the gap between the feeder and the disk resonator. This filter has a very low IMD3 of -73 dBc with an output power of 10 W, obtained at a temperature of 65 K. We believe the presented resonator with HTS films is a candidate element structure for multi-stage BPFs with sharper cut-off frequency responses in transmit system applications.
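For intuition about the reported IMD3 figure, it can be converted into a third-order intercept point with the conventional two-tone relation OIP3 = P_out + ΔP/2. The sketch below uses that textbook relation, which is not taken from the paper itself, and assumes equal-power tones; whether the quoted 10 W is total or per-tone output is not specified, so the numbers are indicative only.

```python
# Back-of-the-envelope sketch: converting the measured IMD3 level into a
# third-order intercept point with the standard two-tone relation
# OIP3 = P_out + dP/2 (equal-tone assumption). This relation is the
# conventional textbook one, not an expression from the paper.
import math

def dbm(power_watt: float) -> float:
    return 10.0 * math.log10(power_watt / 1e-3)

p_out_dbm = dbm(10.0)        # 10 W output -> 40 dBm (assumed per tone)
imd3_dbc = -73.0             # measured IMD3 relative to the carrier
oip3_dbm = p_out_dbm + abs(imd3_dbc) / 2.0
print(p_out_dbm, oip3_dbm)   # 40 dBm, ~76.5 dBm
```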
2022-06-28T05:35:15.588Z
2008-01-01T00:00:00.000
{ "year": 2008, "sha1": "306d3391ccff6eadec54097535dd4e44f6454559", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/97/1/012149", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "306d3391ccff6eadec54097535dd4e44f6454559", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
118499288
pes2o/s2orc
v3-fos-license
Scattering of evanescent wave by two cylinders near a flat boundary Two-dimensional problem of evanescent wave scattering by dielectric or metallic cylinders near the interface between two dielectric media is solved numerically by boundary integral equations method. A special Green function was proposed to avoid the infinite integration. A pattern with a circular and a prolate elliptic cylinders, respectively, is suggested to simulate the sample and the probe in near-field optical microscopy. The energy flux in the midplane of the probe-cylinder is calculated as a function of its position. The diffraction limit in optics, known to originate from the wave nature of light, gives a striking example of a physical restriction being a target of increasing efforts to overcome. More than a century ago it was realized that the wavelength limits the smallest spot within the electromagnetic energy can be localized, as well as the smallest details one can optically resolve are comparable to the wavelength. However, further study showed that these obstacles, in fact, concern a traveling (homogeneous) electromagnetic wave. Unlike, the inhomogeneous (also referred to as evanescent) waves, which can not propagate far away from their source, open the way to clear up the limitations due to diffraction and go towards the optics of tiny objects. For instance, the nanosized highly-polarizable (i.e. metal) particles well manage to concentrate the light energy within few nanometer range [1]. A near-field scanning optical microscopy (NSOM) was suggested to obtain optical signal from the objects at nanoscale (see [2,3] and references therein) using sharp tips; the latter serve much like an optical antenna [4], which receives the energy of the local field and then transfers it to a detector. Thus, the nanophotonics is, basically, an optics of evanescent waves, and, consequently, the fundamental optical processes (such as diffraction, interference, scattering) are to be reconsidered. In past two decades a substantial progress is achieved in nano-optics [5,6]. However, a significant methodological deficiency persists even for the plain, basic problems, like scattering of the evanescent wave by a body. The trouble is that evanescent wave can not be considered in isolation from its source (for instance, the interface where the total internal reflection takes place), therefore the source is certainly affected by the scatterer as being located within a few wavelengths. In paper [7] the general analytical approach is suggested that makes possible to do very effective calculations of the evanescent wave scattering on a 2D particle (a cylinder) near a flat boundary. In the present work we make the next step and consider the problem of two optically coupled objects placed into the inhomogeneous wave. Our main goal is to get a physical insight into the near-field scanning optical microscopy, which minimally involves two small bodies -the studied object and the probe. We believe our work is a useful starting point for analysis of particular NSOM schemes which will allow for correct extraction of the near field and structural information from NSOM data. Keeping in mind this application to the realistic configurations, we should focus our attention on the first-principles approaches, avoiding restricting assumptions and approximations. We start from the Helmholtz equation where △ is the Laplace operator, k is the wavevector. Consider domain D with permittivity ε in and its boundary Γ = ∂D. 
Denote ε_out the permittivity of the exterior of D. The Green theorem can be written inside and outside domain D, respectively (Eqs. (2) and (3)). Here r ∉ Γ, H′ ≡ H(r′), ∂/∂n′ is the derivative along the internal normal, and g(r, r′) is a fundamental solution of the inhomogeneous Helmholtz equation (4). It should be noted that the function g(r, r′) is not fully arbitrary; namely, it should satisfy the radiation condition. Hence, the implicit integral over an infinitely remote contour in Eq. (3) reduces to the field in the absence of the scatterer D, denoted as H₀. Let us consider a TM wave, for which H is the magnetic field. It has to satisfy the boundary conditions (5) for the field and its normal derivative, where square brackets denote the jump and ε corresponds to either ε_in or ε_out. Conditions (5) mean that the magnetic field is always continuous at Γ, whereas its normal derivative has a jump depending on ε_in, ε_out. In order to find the field with the help of the Green theorem (2), (3) we need to know H and ∂H/∂n on the boundary Γ. These two independent functions satisfy the coupled equations (6) and (7) at r ∈ Γ, which are obtained by approaching Γ from either the inner or the outer domain. Here the fundamental solution inside and outside D is denoted as g_in and g_out, respectively. After solving the coupled integral equations (6), (7) the field at an arbitrary point can be calculated using (2), (3). This approach is the basis of the boundary element method (BEM) [8,9]. Its advantage consists in reducing the dimensionality of the problem. For instance, in two-dimensional geometry the method deals with the one-dimensional contour Γ and thus turns out to be very fast and accurate. The method can be extended to several domains. For this purpose we must carry out the integration in (6), (7) along a disconnected manifold Γ, with the corresponding fundamental solution g_in inside each domain. The external function g_out should satisfy the Sommerfeld radiation condition at infinity, i.e. be a diverging spherical or cylindrical wave.

Figure 1: The geometry of light scattering.

We consider an evanescent wave, Fig. 1. The plane running wave H(r, t) = H_inc exp(−iωt + ik₁·r) goes from the dielectric medium ε₁ to another medium with permittivity ε₂; θ₁ is the angle of incidence between the wavevector k₁ and the normal to the boundary. While it is greater than θ₀ = arcsin√(ε₂/ε₁), the angle of total internal reflection, only the evanescent wave with coordinate dependence exp(−κy + ik₂ₓx) penetrates into medium 2, where k₂ₓ = k₁ sin θ₁, κ = √(k₂ₓ² − k₂²), k₁,₂ = √ε₁,₂ ω/c, and ω, c are the frequency and the speed of light. Two independent polarization states of the incident wave are possible. We consider the TM wave, with the magnetic field vector perpendicular to the plane of incidence. This case is more interesting in view of studying plasmon resonances, since the electric field vector lies in the xy plane, where the cylinder has a finite size. The solution for the TE wave can be treated in the same way. The magnetic field of the wave obeys the Helmholtz equation (1), so the BEM is applicable. However, a problem arises with the infinite integration path along the x-axis, which is hard to handle numerically. To avoid this difficulty we look for a specific Green function G(x, y; x′, y′) satisfying the inhomogeneous equation (4) in media 1 and 2. The function G depends only on the difference x − x′, due to translational symmetry. The boundary conditions at y = 0 are given by (9). After the Fourier transformation (10), Eq. (4) is reduced to an ordinary differential equation having exponential solutions.
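Before continuing with the Green function in the q-domain, here is a minimal numerical sketch of the incident evanescent field just introduced: the critical angle of total internal reflection and the decay constant κ of the transmitted field, using the standard relations k_i = √ε_i ω/c, k₂ₓ = k₁ sin θ₁ and κ = √(k₂ₓ² − k₂²). The permittivities, wavelength and incidence angle are illustrative choices, not the paper's parameters.

```python
# Quick numerical sketch of the incident evanescent field: critical angle of
# total internal reflection and the 1/e decay constant kappa in medium 2.
# Permittivities, wavelength and incidence angle are illustrative assumptions.
import numpy as np

eps1, eps2 = 2.0, 1.0                    # denser medium 1, vacuum-like medium 2
wavelength = 0.63e-6                     # m (illustrative, visible light)
k0 = 2 * np.pi / wavelength              # omega / c
k1, k2 = np.sqrt(eps1) * k0, np.sqrt(eps2) * k0

theta_c = np.arcsin(np.sqrt(eps2 / eps1))        # critical angle (45 deg here)
theta_1 = np.radians(50.0)                       # incidence beyond theta_c
k2x = k1 * np.sin(theta_1)                       # conserved tangential wavevector
kappa = np.sqrt(k2x**2 - k2**2)                  # decay constant of exp(-kappa*y)
print(np.degrees(theta_c), 1.0 / kappa)          # critical angle, decay length [m]
```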
Using conditions (9) we obtain the function at y′ > 0 in the q-domain, Eq. (11). Here r(q) = (ε₁µ₂ − ε₂µ₁)/(ε₁µ₂ + ε₂µ₁) is the Fresnel reflection coefficient of the p-wave at normal incidence [10], and µ₁,₂² = q² − k₁,₂². Carrying out the Fourier transformation (10) of function (11) at y > 0 we obtain two terms (12), (13), so that G = G₁ + G₂, where the sign of the square root is fixed by the rule √(q² − k₂²) → −i√(k₂² − q²) for q² < k₂². The first term (12) can be calculated analytically and reduces to the Green function of homogeneous space, Eq. (14), where H₀⁽¹⁾ denotes the Hankel function of the first kind [11] and ρ±² = (x − x′)² + (y ± y′)². The second term G₂ gives the effect of the reflected image source. The amplitude of the source at each q is equal to the reflection coefficient r(q). Thus, along with the point source at (x′, y′) we have to consider the mirror-image source of strength r(q) at (x′, −y′). The total field in the upper half-plane is the sum of the fields generated by the source and by its image at each q. This Green function automatically takes multiple scattering into account. Green functions of this type have been studied for homogeneous waves: spherical acoustic waves, see [12,13], and cylindrical electromagnetic waves [14]. The asymptotic behavior of the Green function in the far field can be found by the steepest descent method [15]. The stationary point is q₀ = k₂|x − x′|/ρ, where ρ = ρ₋ for G₁ and ρ = ρ₊ for G₂. The result is a sum of cylindrical waves, Eq. (15), in which the reflection coefficient is taken at q = q₀ and ϕ is the polar angle of observation counted from the x-direction. The reflection coefficient turns to zero at ϕ = arctan(ε₁/ε₂), i.e. at the Brewster angle. The Green function (12), (13), exploited as the external fundamental solution in the coupled integral equations (6), (7), allows us to avoid the infinite integration along the x-axis. We can then solve the equations for two contours. The algorithm has been tested in the case of one contour and a homogeneous wave, when ε₁ = ε₂. There are analytical formulas for a circular cylinder [16] and numerical calculations for a cylinder with elliptic cross-section [17]. The comparison demonstrates agreement within a relative accuracy of 10⁻⁴ for N = 360 panels approximating the contour Γ. Without a scattering body the electric field vector E in medium 2 has only a y-component. The scatterer produces a small component E_x, and the evanescent wave is partially converted into a diverging one. The corresponding Poynting vector acquires a nonzero y-component, S_y, so an energy flux outgoing from the plane interface appears. The flux of the scattered wave is calculated at a distance of ∼2λ, i.e. in the wave zone, and normalized by the average flux of the incoming wave in the first medium, S_inc = cH_inc²/(8π√ε₁). The indicatrix of the scattering into the upper half-space is shown in Fig. 2. Hereafter the insets show the configuration of the scattering bodies. The scattering is minimal in the normal and longitudinal directions and maximal at intermediate angles. This reflects the quadrupole-like contribution due to the field of the image source (15) and the decay of the evanescent wave amplitude with y. The angle ϕ of the first maximum increases with ε₁: for ε₁ = 2 the first maximum is at 23° and for ε₁ = 3 it is at 41°. The forward-backward asymmetry of the indicatrix is clear evidence of the breakdown of the dipole approximation owing to the finite size of the cylinder (2kR ≈ 0.8). The method can also be applied to different cylinders.
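Before turning to the two-cylinder configuration, a short illustrative sketch of the Green function construction described above: the homogeneous-space 2D Helmholtz Green function g = (i/4) H₀⁽¹⁾(k ρ₋) plus an image-source term. In the sketch the q-dependent factor r(q) of Eq. (11) is replaced by a constant reflection coefficient passed in by the caller, which is only a crude stand-in for the full Fourier-space construction; all numbers are illustrative.

```python
# Sketch of the two pieces of the Green function discussed above: the 2D
# free-space Helmholtz Green function (Hankel function) and an image-source
# term weighted by a constant reflection coefficient r0 -- a crude stand-in
# for the q-dependent Fresnel factor r(q) used in the paper. Illustrative only.
import numpy as np
from scipy.special import hankel1

def green_free(k, x, y, xs, ys):
    """(i/4) * H0^(1)(k * |r - r'|): 2D free-space Helmholtz Green function."""
    rho = np.hypot(x - xs, y - ys)
    return 0.25j * hankel1(0, k * rho)

def green_with_image(k, x, y, xs, ys, r0):
    """Source plus its mirror image across y = 0, weighted by a constant r0."""
    return green_free(k, x, y, xs, ys) + r0 * green_free(k, x, y, xs, -ys)

k2 = 2 * np.pi / 0.63e-6                 # wavevector in medium 2 (illustrative)
print(green_with_image(k2, 1e-6, 0.5e-6, 0.0, 0.2e-6, r0=0.2))
```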
We choose the first cylinder (ε in = ε 3 ) with round cross section whereas that of the second cylinder (ε in = ε 4 ) is a prolate ellipse with minor semiaxis a = 0.04 µm and axis ratio b/a = 10. The major axis is chosen along y-direction. Fig. 3 shows the field near the tip as a function of coordinate x at y = 0.15 µm, when the ellipse is moving along to x axis. The pattern in the near-field domain is much more complicated than in the wave zone. We see the slight lowfrequency oscillations at the right side and deep high-frequency at the left. The physical nature of the oscillations consists in the interference between falling evanescent wave and the diverging wave scattered by the circle. They are counter-propagated at the left and co-propagated at the right. Their spatial frequencies are different k ± = k x ± k 2 . However, in considered case k x = k 1 sin θ 1 ≈ k 2 , then the interference oscillation at the right side has a nearly zero frequency. The slight interference pattern at the right in this case is caused by the scattering by the tip and these oscillations vanish without the tip. Fig. 4 shows the energy going along the major axis through the ellipse middle cross-section as a function of coordinate x, namely, the component We see interference oscillations in the coordinate dependence, the amplitude decreasing with the distance between the tip and the plane. As follows from Fourier transform (11), the higher spatial harmonics with q 2 > k 2 2 decay exponentially with distance like exp(−µ 2 y). Then the small details are not visible in the far-field pattern. However at ky ≪ 1 the exponent is not negligible, and then the small-scale details become apparent. The near field can be observed if one extract the signal and transfer it to the far zone. In our calculation the stretched ellipse plays a role of such a transmitter. The energy flux through the far central plane is a simplest (two-dimensional in our case) model of NSOM [5,2,3]. The curves in Fig. 4 correspond to instrumental function of the microscope. Although, this statement should not be taken literally. The multiple scattering leads to back influence of the probe to the object then it is not a usual linear function of response. The coupled boundary equations describing the scattering of the evanescent wave are solved for two cylinders. The asymmetry of indicatrix and oscillations in the coordinate dependence are observed. The BEM with proposed Green's function is rather general and applicable for any contour Γ or several contours. It can be extended also for 3D geometry. The Green function could find applications in other calculations, e.g. volume integral equations including the Born series and discrete dipole approximation. Authors are grateful to E. V. Podivilov for helpful discussions. This work is supported by the Government program NSh-4339.2010.2, program # 21 of the Russian Academy of Sciences Presidium, and interdisciplinary grant #42 from the Siberian Branch of RAS.
2011-05-15T07:55:22.000Z
2011-05-15T00:00:00.000
{ "year": 2012, "sha1": "ab919899dc598d48f1f40b6d7d30631cb564ba43", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1105.2930", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ab919899dc598d48f1f40b6d7d30631cb564ba43", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
155540943
pes2o/s2orc
v3-fos-license
Structure-Based Approaches to Antigen-Specific Therapy of Myasthenia Gravis

A majority of Myasthenia Gravis (MG) cases (~85%) are caused by pathological autoimmune antibodies to muscle nicotinic acetylcholine receptors (nAChRs). An attractive approach to treating MG is therefore to block the binding of autoimmune antibodies to nAChR, to specifically remove nAChR antibodies, or to selectively inhibit and eliminate nAChR-specific B cells. This chapter will review high-resolution structural studies of muscle nAChR and its complexes with antibodies derived from experimental autoimmune Myasthenia Gravis (EAMG). Based on these structural analyses, various strategies will be discussed, including using small molecules to block the binding of MG autoimmune antibodies, and engineered chimeric nAChR antigens to specifically target and eliminate B cells that produce nAChR-specific antibodies.

Introduction

Myasthenia Gravis (MG) is an autoimmune disease that afflicts a significant human population. MG patients suffer from a variable degree of skeletal muscle weakness. The symptoms range from mere lack of muscle strength to life-threatening respiratory failure. MG is a chronic disease that can last many years and negatively impact the quality of life and life expectancy of afflicted individuals. The rate of MG is reported to be 7-20 per 100,000 [1], and the number of diagnosed MG cases is increasing, probably due to increased awareness of this debilitating disease, the aging population, and other intrinsic and extrinsic factors that disturb the human immune system [1].

The majority of MG cases (~85%) are caused by pathological autoantibodies to muscle nicotinic acetylcholine receptors (nAChRs), a ligand-gated ion channel that mediates rapid signal communication between spinal motor neurons and muscle cells. Autoantibodies against other neuromuscular junction (NMJ) proteins, including muscle-specific kinase (MuSK) and lipoprotein-related protein 4 (LRP4), can also cause muscle weakness in a small fraction of patients [2,3]. The heterogeneous nature of MG autoantibodies presents a challenge to both diagnosis and treatment of the disease.

Most MG patients respond favorably to currently available treatments, achieving effective symptom relief and, in some cases, even clinical remission. Cholinesterase-inhibiting drugs can temporarily enhance neuromuscular transmission by delaying the breakdown of acetylcholine (ACh) to compensate for the loss of NMJ nAChRs, but this treatment option only works in a fraction of patients and does not alter the autoimmune response. The more broadly used nonspecific immunosuppressive drugs work by inhibiting lymphocyte activation and proliferation, but have little effect on long-lived plasma cells that are terminally differentiated and continue producing pathogenic antibodies [5,6]. This may explain why treatment with nonspecific immunosuppressive drugs takes a long time to show clinical improvement.
There are two major limitations in the current MG treatment.First, up to 10% of MG patients do not tolerate or are resistant to the available treatments [7].Second, all immunosuppressant drugs, which are often used in the long-term control of chronic MG, inevitably carry the serious risks of infection and cancer.As such continued efforts have been put into searching for better MG treatment, as evident by the long list of clinical trials (ClinicalTrials.gov)testing well known immunosuppressive drugs such as methotrexate and azathioprine, as well as new biologics agents such as the anti-CD20 monoclonal antibody rituximab (which depletes B cells) and the anticomplement C5 monoclonal antibody eculizumab. An ideal therapeutic approach to MG would be to inhibit the pathogenic autoimmune response to nAChR specifically without disrupting other functions of the immune system.Because nAChR is a dominant autoantigen in MG, it has served as the primary target for a wide range of studies attempting to develop antigenspecific therapy to induce immune tolerance to nAChR [8][9][10][11][12][13][14].While some of these approaches showed promising results in animal model of experimental autoimmune MG (EAMG), translation to human MG treatment is uncertain.Furthermore, introducing an autoantigen like nAChR or its derivative peptides risks to inadvertently enhance the pathogenic autoimmune response. Here, we will first review structural and molecular features of nAChR and its complexes with autoantibodies.Based on insights derived from structural studies, we will discuss several strategies to specifically inhibit the binding of pathological autoantibodies to nAChR or specifically eliminate nAChR-specific B cells. Structural study of nAChR As the first isolated neurotransmitter receptor and ion channel, nicotinic acetylcholine receptors (nAChRs) have been the focus of extensive studies to understand the basic mechanisms of neuronal signaling.These receptors are also being targeted for drug development against a variety of diseases, including addiction, depression, attention-deficit/hyperactivity disorder (ADHD), schizophrenia, Alzheimer's disease, pain and inflammation [15].nAChRs have been analyzed by a variety of biochemical, biophysical and electrophysiological experiments [16].Tremendous efforts have been put into pursuing the atomic structure of nAChR.Electron microscopic analyses of nAChR from Torpedo marmorata by Unwin and colleagues have led to a 4 Å resolution model of the intact channel [17,18], providing one of the most comprehensive structural model for nAChR.The structural details, however, are limited by the relatively low resolution.In this regard, the high-resolution structure of the acetylcholine binding protein (AChBP) published by Sixma and colleagues in 2001 was a major breakthrough [19].AChBP shares ~24% sequence identity with nAChRs and has the same pentameric assembly.Its structures in different bound states have provided detailed information on the binding of a variety of agonists and antagonists [20].But AChBP does not function as an ion channel and may lack necessary structural features required for transmitting the ligand-binding signal across the protein body [21,22].The crystal structures of several prokaryotic homologues of nAChR have also been determined from different species and in different states [23][24][25].These structures together with detailed biochemical and biophysical characterization have provided a great deal of insight into the fundamental mechanisms of 
ligand-dependent channel gating (reviewed in Corringer et al [26]).More recently, the structure of the anionic glutamate receptor (GluCl) from C. elegans [27], and human α4β2 neuronal nicotinic receptor have also been determined [28].However, direct structural information of mammalian muscle nAChRs at high resolution will be needed for further dissecting the mechanisms of neuromuscular junction signal transmission and for drug development against MG [29]. High-resolution structural analysis enabled by stabilizing nAChR mutants Although large quantities of nAChR were available from Tor pedo electric ray organ, crystallization was not successful, probably due to the heterogeneity of the protein samples prepared from the natural source.Heterologous expression in bacterial results in insoluble protein is due to the lack of proper post translation modifications such as glycosylation.Yeast Pichia pastoris has been a favorable expression system for overexpressing nAChR because of its mammalian-like glycosylation system.However, the expressed nAChR protein or extracellular domain (ECD) is often unstable, leading to aggregation and low yield [30,31].We have employed a number of strategies to overcome this difficulty, including expressing different family members of nAChR or its sub-domain (mostly ECD), constructing AChBP-nAChR chimera, and introducing specific mutations to enhance expression and stability [32].Using the nAChR α1 as an example, we screened a PCR-generated mutant library of mouse nAChR α1 ECD for variants with increased expression and stability which led to the isolation of a triple mutant (V8E/W149R/V155A) that has much improved expression and stability than the wild type protein, and ultimately the determination of the crystal structure of nAChR α1 ECD bound to a-bungarotoxin at 1.94 Å resolution [22].Structure comparison with the 4 Å electron microscopic model of nAChR and AChBP reveals that the isolated ECD is very similar to its counterpart in the intact channel and that the stabilizing mutations do not appear to alter the overall structure of the ECD. All of the three mutations map to the surface of the protein (Figure 1a), with one (V8E) located on the N-terminal helix and the other two (W149R and V155A) located on loop B. The V8E mutation introduces a salt bridge with Lys84 (Figure 1b), whereas the W149R mutation introduces a salt bridge with Asp89 (Figure 1c).These salt bridges apparently contribute to protein stability as evident by the well-defined electron density of these exposed residues with long and charged side chains.Thus, the mutations seem to enhance the protein stability through at least two mechanisms.One is to remove surface exposed hydrophobic residues, including V155A (Figure 1d); the other is to introduce salt bridges on the protein surface.These observations suggest that the ECD of nAChR may be rationally engineered to improve solubility and stability.In principle, one can use homology models to guide the selection of exposed hydrophobic residues and to engineer surface salt bridges, which can increase the stability of recombinant mammalian nAChRs.This insight will be important for the design of stable chimeric nAChR antigen for specific targeting and elimination of nAChR-specific B cells (discussed further below). 
Functionally intrinsic instability of the nAChR ECD

Most proteins have a densely packed hydrophobic core that is important for stable folding in aqueous solution. However, a hydration pocket was found inside the beta-sandwich core of the nAChR α1 ECD [22]. This hydration pocket consists of two buried hydrophilic residues, Thr52 and Ser126, two ordered water molecules, and a few cavities, creating a packing defect near the disulfide that connects the two beta sheets. Both Thr52 and Ser126 are highly conserved in nAChRs but are substituted by large hydrophobic residues (Phe, Leu or Val) in the non-channel homologue AChBPs. This observation suggests that the nAChR ECD has evolved with a non-optimally packed core, and is hence predisposed to undergo conformational change during ligand-induced gating. Replacing Thr52 and Ser126 with their hydrophobic counterparts from AChBP significantly impaired the gating function of nAChR without affecting the folding of the protein structure [22]. This role of the hydration pocket in the conformational flexibility/dynamics of the nAChR ECD is supported by recent molecular dynamics studies [34]. This model also suggests that the specific location of the hydration cavity is important for a particular class of pentameric LGICs [35]. A practical implication of these observations is that one can design stabilizing mutants of LGICs, including the nAChR ECD, by structure-guided modification of such packing defects, which evolved for intrinsic ion channel function but may be detrimental to the recombinant production of proteins as therapeutic antigens.

Structural studies of the complexes between nAChR ECD and EAMG antibodies

Antibodies generated by the immune system may bind various epitopes on nAChR. It is therefore important to know whether MG autoantibodies are randomly distributed over various epitopes and whether they contribute equally or differently to the disease phenotype. This question is also therapeutically relevant if one wishes to use small molecules or monovalent antibodies [36] to block the binding of most pathologically relevant autoantibodies to nAChR. Mammalian muscle nAChR has a pentameric structure composed of two α1, one β1, one δ, and one ε (adult form) or γ (fetal form) subunit(s) [18]. Extensive studies suggest that autoantibodies to α1 play a major role in MG pathology [37][38][39][40]. Furthermore, more than half of all autoantibodies in MG and EAMG bind an overlapping region on the nAChR α1 subunit, known as the main immunogenic region (MIR) [41]. The MIR is defined by the ability of a single rat monoclonal antibody (mAb), mAb35, to inhibit the binding of about 65% of autoantibodies from MG patients or rats with EAMG [42][43][44]. Subsequent studies mapped the MIR to a peptide region that spans residues 67-76 of nAChR α1 [45,46]. Monoclonal antibodies directed to the MIR can passively transfer EAMG and possess all the key pathological functions of serum autoantibodies from MG patients [37]. Moreover, a recent study showed that titer levels of MIR-competing autoantibodies from MG patients, rather than the total amount of nAChR autoantibodies, correlate with disease severity [47]. These observations suggest that autoantibodies directed to the MIR on nAChR α1 play a major role in the pathogenesis of MG [41]. However, autoantibodies classified as MIR-directed by competition assay may not necessarily have the same binding mechanisms to nAChR: two MIR-competing
autoantibodies may share common or overlapping epitopes or may bind different epitopes but compete through steric effect [14]. Given their established myasthenogenic role, extensive efforts have been put into characterizing the interactions between MG autoantibodies and nAChR using biochemical [45,46,[48][49][50][51][52][53], structural [22,[54][55][56], and modeling approaches [57].More recently, the first crystal structures of human (pdb code: 5HBT) and mouse (pdb code: 5HBV) nAChR ECD bound by the Fab fragment of an EAMG autoantibody, Fab35 were determined [58].Both crystal structures are very similar, so the discussion here will focus mainly on the human complex (pdb code: 5HBT).The crystal structure, which also contains α-Btx that binds and stabilizes nAChR ECD to facilitate crystallization, shows that Fab35 binds to nAChR α1 in an upright orientation, away from the α-Btx (Figure 2).The Fab35 binding sites on nAChR α1 include the MIR and the N-terminal helix.Fab35 has the canonical IgG antibody structure where the complementarity determining regions (CDRs) from the heavy chain, CDR-H2 and CDR-H3, and the light chain, CDR-L3, form the binding site for nAChR α1.Contacting residues from Fab35 and nAChR α1 (defined as being closer than 4.5 Å) can be mapped using the crystal structure.Such contacting analysis revealed several "hotspots" on nAChR α1 that make numerous contacts to Fab35, including Asn68 and Asp71 from the MIR loop and Arg6 and Lys10 from the N-terminal helix.As shown in Figure 3, each of these four "hotspots" anchors an extensive network of interactions that display remarkable chemical complementarities.The importance of these hotspots are supported by extensive mutagenesis studies [50,51,53,59], which showed that Asn68 and Asp71 of the MIR are essential for MG autoantibody binding, while the surrounding Pro69 and Tyr72, when mutated, also affect the interaction between the antibody and the receptor.Mutation of N68D and D71K in the intact receptor also suggested ASn68 and Asp71 are of vital importance for the interaction [49].On the N-terminal helix of Tor pedo nAChR α1, two exposed residues, Arg6 and Asn10, which correspond to Arg6 and Lys10 in human nAChR α1, respectively, are found to be critical to MG antibody binding by mutational analyses [53].Many nAChR residues found to be important for antibody binding by mutagenesis studies, including Asn68 and Asp71of the MIR and Arg6 and Lys10 of the N-terminal helix, were indeed found to be interaction "hotspots" at the Fab35/nAChR α1 interface.More recent studies using natively folded nAChR α1α7 chimera proteins [52] or GFP-fused protein fragments [53] showed that the N-terminal helix (residues 1-14) and the nearby loop region (residues 15-32) are also important for high affinity MG antibody binding.These biochemical observations are in excellent agreement with the binding interface structure observed in the crystals (Figure 2). Although biochemical mapping of antibody-binding residues on nAChR α1 were performed with different antibodies (e.g., mAb210 and mAb132A) [45,46,[48][49][50][51][52][53], it is remarkable that these biochemical data agree so well with the crystal structure.The fact that many MIR residues at the center of the antibody-receptor interface are important for the high affinity binding of a variety of MG antibodies suggests that many MIR-directed autoantibodies share similar binding mechanisms to the Detailed interactions between Fab35 and nAChR α1 ECD at the binding interface. 
(a) Binding interactions at the vicinity of Asp71 of α1 (located at the MIR). (b) Interactions at the vicinity of Asn68 of α1 (located at the MIR). (c) Interactions involving Arg6 and Lys10 of α1 (located at the N-terminus of α1). (d) Interactions mediated by His3 of α1 (located at the N-terminus of α1) (Adapted from Noridomi et al. [58]). core MIR/N-helix region.This is a rather surprising finding given the potential heterogeneity of nAChR antibodies mentioned above.An important implication of this finding is that it may be possible to find small molecule inhibitors to block the binding of a large fraction of pathological MG autoantibodies to nAChR. Structural comparison of Fab35 with other MG autoantibodies To see how various MG/EAMG mAbs may bind nAChR through similar or different mechanisms, we compared the structure of Fab35 with that of two other MG mAbs (Fab198: pdb code 1FN4 and Fab192: pdb code, 1C5D) that have been determined previously [55,56].Superposition of the structure of Fab198 and Fab35 from the ternary complex shows that these two Fabs share a similar antigen-binding site (Figure 4a).As such, the MIR loop fits snugly into the pocket formed by the CDR-H2, CDR-H3 and CDR-L3 loops of Fab198, as predicated by previous modeling studies [57].The CDR-H2 loop of Fab198 is also in a position to interact with the N-terminal α-helix adjacent to the MIR (Figure 4b).Even more remarkably, many key α1-binding residues in Fab35 are also conserved in Fab198 and they appear to have similar contacts to nAChR α1 in the modeled Fab198/nAChR α1 binding interface (Figure 4b).These residues include Trp47 from CDR-H2, Arg50 from CDR-H2, and Tyr95 from CDR-L3 at the center of the MIR-binding pocket, and Trp52 and Asp54 (both from CDR-H2) which interact with the N-terminal α-helix.In contrast to the structural similarities shown above, the CDR-H3 loops between Fab198 and Fab35 differ significantly in length and sequence.The CDR-H3 loop of Fab198 is too short to interact with the surface pocket of nAChR α1, which is occupied by the corresponding CDR-H3 loop of Fab35 in the complex crystal structure (Figure 4b).These structural analyses suggest that mAb35 and mAb198 share a high degree of similarity in binding mechanism to the core MIR/N-terminal helix region but differ in the periphery of the binding interface.On the other hand, superposition of the structure of Fab192 onto that of Fab35 in the ternary complex reveals substantial differences (not shown here).The variable domains (V H and V L ) have a significant rotational twist, such that the MIR loop does not fit into the antigen-binding site of Fab192.What is more, the key α1-binding residues of Fab35, like Arg50 and Trp52 of CDR-H2, are not conserved in Fab192.These structural differences suggest that Fab192 may differ significantly from Fab35 in terms of binding mechanisms to nAChR α1, confirming and extending the differences previously recognized between the two [52]. 
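The kind of interface contact mapping described above (residue pairs closer than 4.5 Å across the Fab/receptor interface, e.g. in the Fab35/nAChR α1 complex, PDB 5HBT) can be reproduced approximately with a short script. The sketch below uses Biopython; the chain identifiers are assumptions and must be checked against the chains actually present in the deposited entry.

```python
# Minimal sketch of interface "contact" mapping as described above: list residue
# pairs with any interatomic distance < 4.5 A between two chains of a structure
# such as PDB 5HBT. Uses Biopython; the chain IDs ('A' for the receptor ECD,
# 'H' for the Fab heavy chain) are assumptions, not taken from the entry itself.
from Bio.PDB import PDBParser, NeighborSearch

def interface_contacts(pdb_file, chain_a, chain_b, cutoff=4.5):
    structure = PDBParser(QUIET=True).get_structure("cplx", pdb_file)[0]
    atoms_b = list(structure[chain_b].get_atoms())
    search = NeighborSearch(atoms_b)
    contacts = set()
    for atom in structure[chain_a].get_atoms():
        for near in search.search(atom.coord, cutoff):
            contacts.add((atom.get_parent().get_id()[1],     # residue number, chain A
                          near.get_parent().get_id()[1]))    # residue number, chain B
    return sorted(contacts)

# e.g. receptor alpha1 ECD vs Fab heavy chain (chain IDs are hypothetical)
print(interface_contacts("5hbt.pdb", chain_a="A", chain_b="H"))
```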
MG autoantibody repertoire and MIR-directed autoantibodies
A number of studies showed that the total amount of nAChR antibodies in the serum of MG patients does not seem to correlate with disease severity, suggesting that various nAChR antibodies that bind different regions on nAChR may contribute differently to this disease [41,60-62]. As discussed above, the total amount of autoantibody from MG patients directed to the MIR of the nAChR α1 subunit did show a significant correlation with disease severity [47]. These observations suggest that autoantibodies directed to the nAChR α1 MIR play a major role in the pathogenesis of MG [41]. It is now clear that many MIR-directed autoantibodies bind a composite epitope consisting of the original MIR (α1, 67-76), the N-terminal helix (α1, 2-14) (N-helix), and surrounding regions (α1, 15-32). The structural analyses above and published biochemical data suggest that some MIR-directed autoantibodies (e.g., mAb35 and mAb198) bind epitopes centered around the MIR/N-helix core region, while others (e.g., mAb192) seem to require epitopes outside the MIR/N-helix core. Nevertheless, based on crystallography studies and structure-guided analyses of existing biochemical data, it can be concluded that, despite the heterogeneity of the MG autoantibody repertoire, a large fraction of MG autoantibodies share a highly conserved binding mechanism to a core region on the nAChR, suggesting that it is possible to use a single or a limited set of small molecules to block the binding of a large fraction of MG autoantibodies. Because MG autoantibodies directed to the MIR region on nAChR are the most relevant to the disease, the MIR and its surrounding region are an attractive target site for developing small molecules to block the binding of MG autoantibodies. Blocking the binding of MG autoantibodies to nAChR will likely have a direct impact on the antibody-mediated pathologies and may even alter the long-term immune response to nAChR in MG patients.
Small molecules blocking the binding of MIR-directed autoantibodies to nAChR
Targeting a protein-protein interface for drug development is generally more challenging than targeting an enzyme active site [63]. This is especially true for flat protein interfaces lacking features for small molecule binding. However, successes have been achieved with a number of well-known targets, including the p53/MDM2 complex [64], the Bcl-xL/Bak complex [65], and the IL2/IL2R complex [66,67]. A common feature of these complexes is that the protein-protein binding interfaces contain concave pockets lined with hydrophobic residues, which may provide favorable anchoring points for small molecules to bind and compete with the protein-protein interactions. The crystal structure of the Fab35/nAChR α1 complex revealed that their binding interface is characterized by mutual insertions of loops into pockets of the binding partners. On the receptor side (Figure 5), the MIR loop inserts deeply into a surface pocket between VH and VL, and the N-terminal α-helix sits in a groove on the surface of VH.
On the antibody side (Figure 6), the CDR-H3 protrudes into a surface pocket formed by the N-terminal α-helix, the loop following the N-terminal α-helix, the MIR, and the loop preceding the MIR (referred to as the CDRH3 pocket hereafter). Based on these structural features, two MG inhibitor design strategies can be envisioned. One is to find small molecules that bind the surface pockets on Fab35 (Figure 5). But this approach faces the potential issue of antibody heterogeneity in the sera of human MG patients, because small molecule inhibitors may bind some but not other pathological autoantibodies; antibodies binding to the same epitope may well have subtle differences in their antigen-binding site structures. Another approach is to find small molecules that bind the CDRH3 pocket on nAChR (Figure 6). Small molecules bound to this site will directly interfere with the binding of mAb35 by competing with its CDR-H3. Even for other mAbs with short CDR-H3 loops, such as mAb198, the compounds may also block binding through steric hindrance. Moreover, since the CDRH3 pocket is immediately adjacent (about 6-8 Å) (Figure 6) to the MIR/N-helix core region critical for the binding of a large group of MG autoantibodies, compounds bound to the CDRH3 pocket could sterically and/or allosterically inhibit the binding of most pathological MG autoantibodies efficiently. Because of its concave structure, the CDRH3 pocket could serve as an anchoring point to design and/or screen small molecules that bind nAChR α1 and compete with MG autoantibodies directed to the MIR and its nearby regions.
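The "about 6-8 Å" adjacency between the CDRH3 pocket region and the MIR/N-helix hotspots is the kind of claim that is easy to sanity-check against the coordinates. The short sketch below measures minimum atom-atom distances from an assumed CDR-H3 span of the Fab35 heavy chain to the four hotspot residues; the chain IDs ("A" for α1, "H" for the heavy chain) and the CDR-H3 residue range (95-102, a generic Kabat-style span) are placeholders, not values read from the deposited 5HBT entry.

```python
from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)
model = parser.get_structure("5HBT", "5hbt.pdb")[0]   # assumes a local copy of the PDB file
receptor, heavy = model["A"], model["H"]              # assumed chain IDs

hotspots = [receptor[i] for i in (6, 10, 68, 71)]     # Arg6, Lys10 (N-helix); Asn68, Asp71 (MIR)

cdr_h3 = []
for resnum in range(95, 103):                         # assumed CDR-H3 span on the heavy chain
    try:
        cdr_h3.append(heavy[resnum])
    except KeyError:
        continue

def min_distance(res_a, res_b):
    # Bio.PDB overloads '-' on Atom objects to return the inter-atomic distance in Angstroms
    return min(a - b for a in res_a for b in res_b)

for res in hotspots:
    d = min(min_distance(res, loop_res) for loop_res in cdr_h3)
    print(f"{res.get_resname()}{res.id[1]} -> CDR-H3: {d:.1f} A")
```

A result in the single-digit Angstrom range for the MIR residues would be consistent with the adjacency argument made above; larger values would simply mean the assumed residue numbering needs to be corrected against the structure.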
nAChR-specific B cell inhibition and depletion with engineered antigen chimera
The fact that pathogenic B cell clones can persist for a long time in a patient's body may explain why MG is usually a chronic disease. Ectopic germinal centers are found in the thymus of many MG patients who are diagnosed with thymoma or thymus hyperplasia, where nAChR-specific B lymphocytes are constantly activated, selected, and matured to produce the antibody, leading to the disease [68]. This disease model underlies the rationale for thymectomy as a widely used treatment for MG, but the result varies depending on the subtype of the disease, with a complete remission rate of 25-53% [69]. These results suggest there are possibly other, unknown sites where nAChR-specific B cells are activated, selected, and matured [13].
Using the B cell surface marker CD20 [70-72] or possibly CD19 [73] as the target, disease-causing B cells can be depleted, at the cost of killing normal B cells. For example, an ongoing clinical trial, NCT02110706, is testing whether rituximab, which targets CD20 on B cells, can be a safe and beneficial therapeutic for MG. In general, treatment with a B cell depletion agent often requires a long recovery time before B cells return to normal levels again [71]. Moreover, the treatment has been reported to have a short effective duration for MuSK-positive MG [74]. Long-term usage of such agents may compromise immunological function, with an increased risk of infections such as progressive multifocal leukoencephalopathy (PML) and of malignancy [72]. As such, strategies targeting nAChR-specific B cells seem attractive. Since each B cell expresses B cell receptors (BCRs) of the same idiotype as its secreted antibody on its surface, one can use this property to specifically target autoreactive B cells as long as the antigenicity of the autoimmune disease is clear. The idea was borrowed from immunotoxins [75], in which an antigen-toxin chimera is constructed. The antigen moiety is used to target the B cells that express the BCR of the same idiotype as the antibody, and the toxin moiety is responsible for conveying a death signal to the target B cells. In a pioneering study in 1983, the authors fused thyroglobulin with ricin to treat an autoimmune disorder, Hashimoto's thyroiditis [76]. Another attempt was made a decade later in a different autoimmune disease, pemphigus vulgaris, in which the authors constructed an antigen-toxin fusion protein that can specifically target Dsg3-specific hybridoma cells [77]. Similar strategies have also been attempted in the treatment of MG. In a 2006 study, the authors fused the nAChR α1 ECD to a plant toxin and showed its effectiveness in specifically killing α1-specific B cells [78]. More recently, researchers have developed a variant of this strategy in which the nAChR α1 ECD was fused with the Fc domain of an antibody, which is used to convey the negative signal, since B cells express one and only one kind of Fc receptor, namely FcγRIIB, which transduces a negative signal for B cell activation. Consequently, such a chimeric protein will specifically target nAChR α1-specific B cells via binding to the BCR and deliver a negative signal to inhibit them [79,80]. The idea of an antigen chimera in the treatment of MG seems attractive but will not be practical unless the chimeric protein is stable enough to be used as a therapeutic agent. As mentioned above, nAChR α1 is just one subunit of the nAChR pentamer and is intrinsically unstable, making the expression of wild-type nAChR α1 ECD in a stable, soluble form very challenging. However, as discussed earlier in this chapter, crystallography studies of nAChR α1 ECD in recent years have accumulated extensive experience and knowledge in designing strategic mutations that improve the stability and expression level of the nAChR α1 ECD protein while preserving the binding of MIR-directed MG autoantibodies [22,31,58]. This progress will greatly facilitate the approach of using engineered antigen chimeras to specifically inhibit and eliminate nAChR-specific B cells for MG treatment.
Outlook
Insights from structural studies and molecular biology/biochemical analyses may ultimately lead to precision medicine and personalized treatment of MG, through antigen profiling of patients and the use of corresponding molecular missiles to eliminate antigen-specific antibodies or B cells, induce antigen-specific tolerance, or block nAChR-autoantibody binding with small molecules. These approaches, once established in the treatment of MG, could be expanded to other autoimmune diseases with well-defined antigen targets.
Figure 1. Mutations that stabilize nAChR α1 ECD. (a) The three mutations (boxed and indicated by arrows) are mapped on the surface of nAChR α1 ECD (dark green), away from the binding site of α-bungarotoxin (orange) and the glycan (magenta); (b) the mutation Val8Glu establishes a salt bridge with Lys84; the surrounding structure is well ordered, showing well-defined electron density; (c) the mutation Trp149Arg establishes a salt bridge with Asp89; the side chains of both residues show well-defined electron density; (d) the mutation Val155Ala removes an exposed hydrophobic residue; the surrounding structure is well ordered (Adapted from Chen [33]).
Figure 2. Crystal structure of the ternary complex of nAChR α1 ECD bound by Fab35 and α-Btx. (a) Ribbon representation of nAChR α1 ECD (α1: cyan) in complex with α-Btx (green) and Fab35 (heavy chain (H, yellow) and light chain (L, magenta)). The variable domains (VH and VL) and the constant domains (CH and CL) of the antibody are indicated accordingly. (b) Surface representation of the ternary complex. (c) Zoomed-in view of the binding interface. The complementarity determining regions of the heavy chain and light chain are indicated as H1, H2, H3, L1, L2, and L3, respectively (Adapted from Noridomi et al. [58]).
Figure 4. Structural comparisons among MG mAbs. (a) Superposition of Fab198 [55] (heavy chain: purple; light chain: dark green) onto Fab35 in the Fab35/nAChR α1/α-Btx ternary complex using the Cα backbone. (b) Detailed comparison of the binding interface. The residues are colored according to their protein subunits.
Figure 5. Surface pockets on Fab35 bound by the nAChR MIR loop (white dashed circle) and the N-terminal helix (black dashed circle).
Figure 6. The surface pocket (green dashed circle) on nAChR α1 bound by the CDR-H3 loop from Fab35 (indicated as H3 in the figure).
2019-05-17T13:33:47.932Z
2019-05-06T00:00:00.000
{ "year": 2019, "sha1": "0da0ffbd684e492b8160c2d087a57df8aa806603", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/65911", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "134c9fdd2a9f8d9262da8d084c0ca521bb5a80bd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Chemistry" ] }
264141335
pes2o/s2orc
v3-fos-license
Journal of Strategic Security
Abstract
This article examines the current state of professionalism in national security intelligence analysis in the U.S. Government. Since the introduction of major intelligence reforms directed by the Intelligence Reform and Terrorism Prevention Act (IRTPA) in December 2004, we have seen notable strides in many aspects of intelligence professionalization, including in analysis. But progress is halting, uneven, and by no means permanent. To consolidate its gains, and if it is to continue improving, the U.S. intelligence community (IC) should commit itself to accomplishing a new program of further professionalization of analysis to ensure that it will develop an analytic cadre that is fully prepared to deal with the complexities of an emerging multipolar and highly dynamic world that the IC itself is forecasting. Some recent reforms in intelligence analysis can be assessed against established standards of more fully developed professions; these may well fall short of moving the IC closer to the more fully professionalized analytical capability required for producing the kind of analysis needed now by the United States.
Introduction
Since the introduction of major intelligence reforms directed by the Intelligence Reform and Terrorism Prevention Act (IRTPA) in December 2004, we have seen notable strides in many aspects of intelligence professionalization, including analysis. But progress is halting, uneven, and by no means permanent. To consolidate its gains, and if it is to continue improving, the U.S. intelligence community (IC) should commit itself to accomplishing a new program of further professionalization of analysis. While the progress made in the decade since the passage of IRTPA is notably encouraging, we believe it will fall well short of developing the kind of analytic cadre that will be needed to deal with the complexities of an emerging multipolar and highly dynamic world that the IC anticipates it will be facing. When recent reforms in intelligence analysis are assessed against established standards of more fully developed professions, it is clear that a fully professionalized analysis capability remains a distant goal. This article assesses U.S. intelligence analysis as a nascent profession against other more fully developed professions. It argues for an intensified and sustained effort to emulate key criteria and rigorous standards that have proven effective in the professionalization of other disciplines. While the focus here is on intelligence analysis for national security, some aspects are also relevant to analysis in law enforcement, to competitive intelligence for the private sector, and possibly to other nations whose intelligence services operate similarly to those in the United States. Professionalization of analysis, toward which many practitioners have spent the past decade working, has become a major contributor to both the quality and utility of analysis. Signs of progress can be seen in nearly all the major characteristics of what constitutes a true discipline. There have been impressive strides in analytic tradecraft (the methodology of intelligence analysis), intelligence training and education, community-wide knowledge management, and analytic standards. Indeed, professionalization is continuing and perhaps even accelerating in some areas. Although this progress remains uneven across the U.S. intelligence community, recent milestones are real pace-setters:
- The National Intelligence University (NIU), once only a virtual one, is now a bricks-and-mortar institution operated by the Defense Intelligence Agency. Shortly moving to Bethesda, Maryland, with plans for program expansion, NIU has incorporated a variety of accredited degree programs previously offered by the National Defense Intelligence College.
- Until recently, the Office of the Director of National Intelligence (ODNI) has offered sound introductory intelligence analysis training to analysts across the community. This has been particularly important for standardizing analytic tradecraft and standards across the IC, and for smaller and more resource-limited agencies not able to provide such training for themselves. (IC-sponsored analyst training has recently suffered cutbacks due to budget pressures.)
- The creation of the I-Space has facilitated collaboration, and the Library of National Intelligence (LNI) has begun the cataloguing, sharing, and retrieval of intelligence-based information.
- Some agencies have begun advanced intelligence tradecraft training and specialization, which in some cases suggests a step toward certifying analysts as eligible to enter a more selective group of senior analysts whose skills have been demonstrated as fully proficient.
- The development of specific standards for analyst competencies in core, tradecraft, and subject matter expertise is recently underway within the ODNI, DIA, and the law enforcement community, an important prerequisite to anticipated analyst certification.
As the intelligence community has moved forward with such reforms, the debate over the meaning and significance of "professionalization" has progressed as well. Scholars such as Stephen Marrin have argued that practitioners of intelligence analysis have not moved quickly enough to adopt needed characteristics of the legal or medical professions. Moreover, he rightly laments the gap between these practitioners and intelligence studies scholars, which prevents practicing analysts from learning from the hard-won lessons gleaned from serious historical study of past intelligence operations and assessments.
Is Intelligence Analysis a Discipline?
Certainly the growth of intelligence studies has been remarkable. One measure is the annual International Studies Association meeting, which in 2014, for example, featured nearly 20 panels focused on all aspects of intelligence, with representation from U.S. and foreign intelligence services as well as many university scholars. Other practitioner-scholars have also remarked on the need to move further along the path of professionalization if analytic performance is to improve. For example, some practitioners have argued that intelligence analysis, in comparison with medicine and law, is a nascent profession that will require time to develop key attributes such as a distinct literature, certification, governing boards, and knowledge management. However, professionalization of intelligence analysis entails more than subject matter expertise; it also involves a good understanding of the operation and practice of intelligence itself, including the collection requirements and exploitation process, the epistemology and tradecraft required for accurate and reliable analysis, and the national security decisionmaking process which intelligence analysis can ably support, or entirely miss the mark.
A key premise is that professionalization will improve the quality and relevance of intelligence.Marrin rightly argues that the lack of professionalization has resulted in wide variation in analytic competence and an overall diminution in the role that analysis could play in decisionmaking. 10Studies of intelligence failures also highlight impairments caused by collection gaps, foreign denial and deception, misinterpretation of information, and faulty analytic assumptions.Inadequate warning and feeble or off-target analysis provided to decisionmakers is the result. 11These sources of intelligence failure often lie at the heart of why policymakers can feel justified when they disregard or dispute analytic judgments.They also imply major professional deficiencies in the conduct of analysis.The reported release of a recent National Intelligence Estimate on Afghanistan -described as markedly gloomy in the press -was greeted by some White House officials as simply "a view," and not necessarily the determining one. 12This suggests less than full confidence in the professionalism of intelligence among the most important users of its products. providing informed judgments and reliable forecasts, they become more indispensible in directing the smart use of U.S national power.Senior commanders have come to rely on intelligence analysis as being an integral part of their understanding the physical as well as virtual battlefields. Likewise, national level leaders need analysis to comprehend not only the "facts" as we know them, but also to assess uncertainties of complex international developments so they can carefully weigh the risks of taking or rejecting specific actions.Increasingly, as the United States has to make resource choices on what military strategies and programs to develop, which diplomatic crises to engage in, or what contingency plans to prepare, intelligence can help to assess the urgency, signficance, and consequences or risks those decisions might entail. Analysis for the 21st Century Policymakers are likely to become even more reliant on intelligence as their decisions become more complex, with more second-and third-order consequences that are harder to foresee.But good analysis will be challenged by declining resources and growing complexity of the problems that policymakers will have to face: Fiscal Constraints Winston Churchill once said: "We have run out of money, so now we have to think."As is evident in recent American fiscal and budgetary crises, we are in an era when resources will be more constrained than the previous decade of rapid budget growth.Plans are underway to reduce spending for the coming years that may jeopardize analysis.The total intelligence budget has decreased two years in a row, falling four percent overall.14Additional cuts will surely continue. Traditionally, training and outreach efforts are routinely treated as expendable, rather than reducing other "mission essential" operations.However, we believe that improved analysis based on more professional training and education as well as interaction with outside scholars and experts can be a key force multiplier for reduced U.S. military and foreign affairs budgets. 
Shifting Global Power
Another major challenge facing the United States is the dynamic international environment. Chairman of the Joint Chiefs of Staff General Martin Dempsey has described the future as an "increasingly competitive environment" marked by persistent conflict. DNI James Clapper's 2013 worldwide brief to the congressional oversight committees likewise stressed the unpredictability of the current environment, and the DNI's 2014 National Intelligence Strategy described the security environment as complex and evolving, with "extremely dangerous, pervasive, and elusive threats." Reinforcing this, the National Intelligence Council's (NIC) Global Trends 2030 describes our future world this way: "The diffusion of power among countries will have a dramatic impact by 2030. Asia will have surpassed North America and Europe combined in terms of global power, based upon GDP, population size, military spending, and technological investment. China alone will probably have the largest economy, surpassing that of the United States a few years before 2030…. The shift in national power may be overshadowed by an even more fundamental shift in the nature of power. Enabled by communications technologies, power will shift toward multifaceted and amorphous networks that will form to influence state and global actions. Those countries with some of the strongest fundamentals (GDP, population size, etc.) will not be able to punch their weight unless they also learn to operate in networks and coalitions in a multipolar world." A nation's learning curve, aided by intelligence, will help establish its place in the international pecking order, and do much to shape its relative security amid turbulence. Both the topics and the types of analysis will have to shift. Additionally, so-called "wicked problems" such as global climate change, crisis-driven mass migrations, healthcare, pandemics, nuclear weapons, human and drug trafficking, and social injustice will become routine analytical tasks. But their dimensions are poorly defined, they are nearly impossible to solve readily without a change in attitudes by affected populations, and they have interdependencies with other critical issues. Such daunting problems will demand higher-order intelligence analysis. Satisfying increasing intelligence demands cannot be accomplished without greater professionalization and expertise building over the coming decade.
Attributes of Established Professions
Established or more mature professions such as law and medicine, as well as others such as engineering, accounting, airline piloting, and career military service (the "profession of arms"), demonstrate certain attributes that mark their practice, the work of their practitioners, as "professional." Six of the most important attributes are summarized below. (Fisher, Johnston, and Clement identify most of these attributes, which together constitute a "discipline," in law, medicine, and library services, citing extensive literature documenting the development of these disciplines, pp. 57-66. Marrin's half-dozen attributes of professionalization mostly correspond with ours, but he also includes human capital management and ethics.) They are important for their heavy, integral presence in mature professions, but relative underdevelopment in intelligence analysis:
1. Governing bodies that set quality standards for professional performance of their members, for example, the American Bar Association and the American Medical Association, whose members cannot practice without association membership, or perform at substandard levels and still retain membership.
2. Rigorous education and continuous training for practitioners throughout the duration of their professional practice, to acquire, sustain, and refine their knowledge and skills.
3. Certification requirements that limit admission (that is, prevent employment) to only those who qualify, and that also levy professional growth requirements on career practitioners in order to continue their practice.
4. Knowledge management systems to organize information in their domains, such as West's Key Number system for lawyers, the National Library of Medicine and the Medical Subject Heading index for MDs, and Dewey's Decimal System for librarians, and to facilitate information retrieval and expansion.
5. Systematic, rigorous, and reliable research methods to build and advance durable knowledge.
6. Institutionalized lessons-learned or best-practices studies conducted to support continuous organizational learning.
Assessing Analysis
How well does intelligence analysis stack up when assessed by these attributes? In general, initial steps are promising, but preliminary and unsteady. Specifically, while we can see notable progress in the direction of professionalization as identified in the attributes of the more mature professions cited above, it is also clear that intelligence analysis remains some distance from the professional maturity seen in such professions as law and medicine. What follows are some notable highlights and shortfalls on the path to professionalizing analysis:
1. Governing bodies: The ODNI has begun to establish IC-wide standards in Intelligence Community Directives ICD 203 (analytic standards) and ICD 610 (competencies for professionals), and in Intelligence Community Standards ICS 610-7 (needed competency standards for analysts). However, the DNI has no real authority to set or enforce IC-wide standards. In practice, analysts' governing bodies are their agency or component management chains. In general, most agency leadership and management chains seem discernibly more interested in short-term analytic production than in longer-term development of analytic professionalization. A committed leadership would have to make professionalization goals specific, and implement metrics or other measures of effectiveness to assess and monitor progress toward that goal. Promotion boards would have to include senior trainers or managers more focused on technique and insight than on production files. An IC-wide issue, professionalization will require a substantial commitment not only within the ODNI, but also from intelligence managers in the agencies, and at all levels, from first-line managers through the senior ranks.
2. Training and education: The new National Intelligence University (NIU) represents a promising start, but little available evidence suggests any connection between curricular development and analytic professionalization.
To our knowledge, there is little in the way of a specific "analyst" track of courses with established standards designed to achieve a specific level of analytic sophistication. Individual agency-developed training programs vary enormously in scope, depth, duration, and quality; some agencies support new-analyst training for several months and some shorter mid-career courses in advanced analysis that qualify analysts for more senior positions, while other agencies offer almost none, or very tailored training that does not directly support a well-rounded, "complete" analyst. Such professional development seems at best implicit and ad hoc.
3. Certification: The IC has barely begun in this area. The ODNI could take the lead in both certification and in developing an analytic governing body at the IC level, but centralization may be controversial. Some agencies are entertaining the concept of analyst certification, but rather than having it done independently, there should be some overall, IC-wide direction given to agencies to set and meet common standards. Entrance to the analytic cadre, like any other intelligence occupational specialty, requires security certification. But competency or standards in the performance of analysis are not yet tested in the IC, and no real certification process beyond routine and agency-specific periodic performance appraisals affects entrance to, or the ability to stay in, the analytic ranks.
4. Knowledge Management: The Library of National Intelligence represents a tentative but promising start, but security classification levels and need-to-know criteria impose daunting limits on information access and retrieval by analysts. Comprehensive knowledge management in intelligence can never be implemented as fully as in unclassified disciplines such as law or medicine, and the Snowden and Manning disclosures highlight the risks of internal repositories to the insider threat and make advances more difficult. Better use of unclassified work by intelligence scholars, as well as additional leeway in reaching out to non-government experts, would assist in having more readily available resources for analysts. The fledgling Lessons Learned center at CIA, for example, has focused far more on operational studies than on analytic ones. In general, the Community is hard pressed to identify proven "best practices" learned from past analyses as a guide to improving future analysis. Additionally, periodic "analytic line reviews," which some agencies have tried in a limited way, also have lessons-learned value for both substantive and methodological evaluation of a body of analytic reporting on particular topics or issues.
Next Steps in Professionalization
Given the present state of intelligence analysis as briefly characterized here, and guided by both the attributes of established professions and the notable gaps they highlight in the emerging profession of analysis, we suggest the following five recommendations as measures that can appreciably reduce those gaps and advance the goal of professionalizing intelligence analysis.
Recommendation 1: A Joint Professional Analysis Education (JPAE) Program Maximizing the contribution of intelligence analysis to informed national security policies will demand that a much higher priority be placed on professionalization than presently exists across the intelligence community.Not only must current training and education programs be protected from ongoing budget cuts, but new and better integrated programs will be needed.Something akin to the Joint Professional Military Education (JPME) system of training and certification should be considered as a model for fully professionalizing the cadre of intelligence analysts. 22 Joint Professional Military Education: A Possible Model? The elaborate system of Joint Professional Military Education (JPME) is built around the "profession of arms," which began as in the early 1800s, with the establishment of the Military Academy at West Point (1802), the Naval Academy (1845) and later the Naval War College (1884) and the Army War College (1901).In the twentieth century it blossomed to include other senior service colleges, along with specialized command and staff colleges. As a result of studying the lessons from World Wars I and II, and after considerable inter-service consideration, the concept of joint education rather than single-service education took hold.After the Second World War, General Eisenhower and other wartime flag officers determined that there was a need for advancing senior officers from all the services to be educated together and develop more interagency cooperation, and thus, under the auspices of the Joint Staff, the National War College was founded in Washington D.C. in 1946.Since then, the JPME programs have expanded well beyond military officers to include senior civilians in the national security enterprise as well as senior officers from foreign militaries.Many have become fully accredited degree-granting institutions. The military leadership has recognized the need to develop professional military skills throughout an officer's career, from basic training courses to specialized disciplines (infantry, artillery, air, naval, amphibious, and other operational specialties) and ultimately to senior-level education that prepares officers for national-level responsibilities.At the earlier stages of an officer's career, "skills" training is emphasized; however, as the officer is promoted, the JPME objectives shift to "educating" the officer into the art of national security strategy development, interagency cooperation, and multinational operations.These steps in the JPME ladder are considered prerequisites for promotion to higher commands and ultimately to national-level decision-making.Indeed, the Goldwater Nichols Military Reform Act of 1986 makes joint professional military education a statutory requirement for promotion to flag-officer rank. 
In the course of a 20-year career, an officer can minimally assume two-to-three years' full-time equivalent of training and education, often more.At particular ranks, they undergo specified types of training and education, typically required for further advancement.To be considered for promotion to General or Admiral, officers must move out of the field to gain an understanding of the broader national security context in which their missions have to be performed, as well as to comprehend the roles and missions of other civilian departments and agencies with which they will have to work.The stress on "jointness" -especially since the Goldwater-Nichols reforms -has become accepted practice, with other civilian agencies also recognizing the importance of their senior officers gaining joint duty experiences on the way to executive-level positions of responsibility. Source: Cynthia Watson, Military Education: A Reference Handbook, 2007 In "jointness," we advocate a common understanding of the analytic profession, its attributes, and its standards across the entire IC, analogous to the earlier impact of Goldwater-Nichols on the military services and the specific intent of IRTPA-not force a homogenization of all analysts that removes the unique skills and work practices required for different agencies.Such a JPAE system need not slavishly copy all aspects of the joint professional military education system, but it should strive to integrate the varous training programs directed by individual agencies and establish some common standards for the training each agency gives its analysts.Accordingly, as analysts progress through their careers, different training and education goals could be set; at various points in their careers they would be assigned to complete those programs in order to advance further in their chosen analytic track. For example, an analyst entering on duty might be expected to take a basic analysis course, offered by an individual agency or, if not available there, then by the ODNI.Having completed this entry-level basic training, the analyst might then work on an account for a period of time, before next being expected to take additional full-time training.We believe there are several areas, where additional training might be considered, which we touch on briefly: Basic Understanding of Epistemology.Knowledge-building requires that analysts understand the basis for what constitutes reliable knowledge or information. 23Postmortems of intelligence failures-highlighted most recently by the 2002 Iraq WMD NIE-demonstrate that analysts often rely too heavily on unsubstantiated information, merely because it came from what had been thought to be either authoritative sources or because it fit a current mind-set.Likewise analysts' judgments can be swayed by the authority of their more senior managers or the organization's current assessment of a problem (the "analytical line"), without considering whether such judgments are based on something more empirically or scientifically based.Too few analysts have been schooled in the nature of knowledge or think about the basis on which they are reaching conclusions.Hence, concerted attention to basic epistemology that underpins the analytic profession should be a foundational element of every analyst's training. 
Expertise-building. Another step in an analyst's career-long training should be expertise-building, clearly an important theme in DCIA Brennan's proposed reorganization emphasizing Mission Centers. Fewer analysts today are hired at the Ph.D. level, though most have had some courses on their regional, country, or functional accounts as part of undergraduate or master's level education. Some agencies currently offer time off or tuition reimbursement for master's level graduate studies. This approach is haphazard and does not build expertise in a systematic or planned way. A more regulated educational program of subject matter expertise would expose analysts to new analytic methods as well as to leading experts in their fields outside the intelligence community.
Senior Service College Experience. A final step in the JPAE might then be participation in a year-long CAPSTONE-style course at a senior service college, or at an NIU-equivalent program for rising senior analysts. These programs are "joint" by their very nature, as they bring together mid-career military and civilian officers from the services, the national security agencies, and the intelligence community, whom their parent agencies expect might become future leaders of their institutions. This year-long exposure to the "whole-of-government" system would give intelligence analysts an entirely different perspective on how they can best serve warriors, diplomats, and law enforcement officials as well as the NSC and other very senior customers. An NIU-equivalent program bringing together officers from across the IC would have the benefit of creating a more common culture and networks of senior officers more prepared to work collaboratively.
Recommendation 2: Standardize and Test Analytic Methods
Were a JPAE to be established, it would also need to establish a more uniform and recognized set of training objectives for all analysts. One of the key attributes of the analytic profession is "how we do our work," with what success or failure, and why. Building up a body of case studies on the use of structured analytic techniques (SATs) would not only be a good training tool, but it would also permit more evaluation of the techniques themselves. Indeed, one of the current weaknesses of using SATs is that there is almost no research on whether these techniques result in more accurate judgments and forecasts, or even in more insightful or useful analysis. It should, therefore, be the goal of the ODNI to support more research into effective analytic methods, more documentation of their utility and limitations, and consideration of how to further expand the set of analytic methods used by analysts.
Recommendation 3: A More Robust Lessons-Learned Capability
The currently modest Lessons Learned capability that CIA and the DNI have developed at CIA's Center for the Study of Intelligence, along with DIA's similar Knowledge Laboratory, has not been widely emulated elsewhere in the IC. And none has the stature of the lessons-learned organizations in the military. It is our distinct impression that this emerging capability has been hugely underutilized for learning about and improving analysis. Thus, there would seem to be ample opportunities for a "Lessons Learned" library of analytic cases.
Case study writers could be assigned to an analytic team focused on a particular analytic challenge. The case writers would observe the analytic process from beginning to end, noting how the analysts collaborated, what analytic methods they employed, how they reached judgments, and finally how they delivered their findings to policymakers. They could also follow up and record the analytic effort's accuracy and impact, and collect whatever feedback policymakers might be willing to provide. This would be far superior to past attempts to "evaluate" the quality of a product's analytic tradecraft after the fact, or to solicit policymakers' general satisfaction levels with analytic support anecdotally and typically long after the policymaker has forgotten a specific analytical product. (A notable exception to this after-the-fact feedback is the way PDB briefers daily present intelligence analysis and get new taskings as well as comments on those PDB items. This instant feedback is of course valuable, but it seldom can put the contribution of such analysis into the broader context of ongoing support on a particular issue, which is what a case study might do more systematically.)
Recommendation 4: A New Journal for Intelligence Analysis
Few true professions exist in the absence of true professional journals, in which new findings, new research techniques, or controversial issues can be aired within a community of practice. Such a journal could exist for the intelligence community as well. The ODNI has made good strides in developing more community-wide databases of analysis and enabling greater collaboration among analysts across the community. The technology available today makes it much easier both to share and to retrieve analytic products remotely across both time and distance. The I-Space and the Library of National Intelligence are two such examples of what is now possible. No doubt there can be additional such initiatives that further exploit technology to improve these databases and make them more user-friendly to a larger number of analysts. Where the intelligence community might devote more attention, however, is in the development of a true "peer review" journal of analytic practices. Sherman Kent spoke of this more than 50 years ago. The closest that the CIA and the intelligence community have come to this is the Center for the Study of Intelligence's Studies in Intelligence. This quarterly journal, long published in both classified and unclassified issues, has been the principal journal of record of what the CIA and other agencies have learned from their operations and analysis. Owing to its largely military interests and readership, the American Intelligence Journal, somewhat like the no-longer-published Defense Intelligence Journal, is likely to remain, at least for a while, a less well-known or cited publication. Outside the IC, there are two relevant academic journals that publish articles on a full range of intelligence topics, including historical cases of operations, analytic issues, historical topics, and intelligence-policy challenges, namely the refereed Intelligence and National Security and the International Journal of Intelligence and Counterintelligence.
While these publications are important to the general field of intelligence, none is fully devoted to the study of analytic methods and practices.Such a journal can become a vehicle for exchanging views on the utility of different forms and methods of analysis, on new analytic challenges, or on important analytic findings and their implications for the intelligence community.This "Journal of Intelligence Analysis" could fill a gap that presently exists, becoming the discussion board for analysts who might take different positions on the utility of certain SATs, or have minority views regarding analytic judgments reached by most intelligence analysts or agencies.The periodic complaint that not enough research has been conducted on the effectiveness of SATs might be better addressed if such a journal were established to encourage analysts to share their own experiences using these methods. Most logically, such a journal could become part of the newly expanding National Intelligence University.Like the National Defense University which produces a variety of publications, including the Joint Forces Quarterly, NIU might direct its own academic press to support journals dedicated to analysis and possibly other fields of specialization.It could be a refereed journal published in hard copy and available on-line, and include blog-like discussions of analytic issues.Additionally, any classified studies that may address how analytic failures can be averted and successes achieved might be declassified to facilitate a wider circulation among uncleared researchers not in the IC whose "outsider" perspectives could bring value to the discussions. Like the current Studies in Intelligence, there would be value in producing unclassified issues in order to expose analysts' views to outside examination and commentary and, fostering outreach, to invite non-official participants into discussions of analysis.One continuing problem for analysis is its insularity owing principally to classification.Having more contact-another form of analytic outreach-with outside experts in both methodology as well as substantive expertise would be a desirable objective of such a journal.It would also support a number of university programs in intelligence studies which are eager to improve their curricula and make their courses more relevant to students aspiring to become intelligence analysts. Recomnmendation 5: Establish Analyst Entry and Certification Processes Intelligence analysis is an odd profession as it has historically not been one of those "callings" for which students in college take preprofessional (such as pre-law or pre-med) training.Across the country a wide range of courses is offered at both undergraduate and graduate levels on intelligence and analysis.While such offerings fall short of established professional degree programs, IC analysts can still augment their internal training and professional growth through select university curricular opportunities, especially at the graduate level. 
This "accidental" profession, as one colleague has described it, could benefit if it became more purposeful earlier in an analyst's career development, including in the entry-level requirements as well as in the standards one must maintain during one's career. Given the broad scope of occupational disciplines within professional analysis (military, political, economic, S&T, leadership, and now targeting, to speak of the broader categories), the notion of a single set of preprofessional educational requirements for an incoming analyst is perhaps too narrow. A successful WMD analyst, for example, might have entered with a degree in chemistry, biology, or even political science, depending on which aspects of WMD he or she might be following. However, any analyst expecting to focus on the foreign policy aspects of even a functional issue like WMD should be able to demonstrate an interest, if not a specialization, in national security affairs, foreign countries, and languages. So, developing a profile of an applicant who might mature into a successful analyst could include not only their proficiency in their own academic discipline, but also their general knowledge of the world and their analytic skills. Individual agencies now require online applications, possibly writing samples, and documentation of applicants' experience or skills appropriate to the analytic profession, which could support professionalization objectives. (The International Association for Intelligence Education (IAFIE) was formed in 2004, bringing together several hundred scholars, practitioners, and teachers of intelligence analysis. They represent colleges and universities whose offerings range from a single course on intelligence to a "minor" or "certificate" in intelligence studies. Analysis is often addressed in these programs. See Lowenthal, "The Education and Training of Intelligence Analysts," in George and Bruce, Analyzing Intelligence, 2nd ed., 2014, p. 304. To get the right blend of general world affairs knowledge on top of an area specialization, agencies might consider a general "entrance exam" along the lines of the type currently used by the U.S. Foreign Service.)
Testing analysts, once hired, has never really been part of the analytic culture.On-the-job training through "doing analysis" and being observed and evaluated by peers and supervisors has been the sole measure of whether an analyst is progressing in his or her development.This "trial" or "probationary" period of time is used to determine if an analyst has what it takes, but is often fairly subjective.Likewise, many training courses offered by intelligence agencies are still non-graded.That is to say, the analysts typically pass by merely showing up and signing in.There is little effort to determine whether they have learned anything.A more empirical basis for evaluating analysts' proficiency in conducting analysis is now in order. A first step is to adopt, as military service colleges and military intelligence curricula do, training programs that include evaluation standards.Some have letter grades, while others adopt the philosophy that a student has "met" the standards expected or was "above" or "below" them.Constructing course evaluation standards, which would be included in an analyst's annual fitness report, would incentivize more engagement in training and education opportunities as well as give supervisors a stronger basis for promoting or not promoting analysts.In skills-based training, there should be a way to measure whether an analyst can actually employ an analytic technique or not; similarly, in more seminar-style courses or simulations, instructors should be able to evalute how well or poorly an analyst contributes, collaborates, and leads in a group setting. Whatever system of standards is adopted, it should be tied directly to the kinds of tasks analysts are likely to face, and those standards should then drive the development of curricula.Some intelligence analysis schools believe they achieve this by sending "seasoned" analysts to become instructors in their basic analyst courses.However, such analysts may not necessarily be the best teachers, even if they have come from the analytic front lines.Instead, intelligence schools and the NIU should be looking for instructors who have had practical analytical experience but also who are both interested and talented in instructing. Once a set of standards in both training as well as in analytic performance is well established, a certification program will become more achievable and acceptable.Without micro-managing every agency, the ODNI should be able to articulate basic, journeyman, and senior analyst skill levels, which are also tied to the completion of a comparable set of training and education courses as well as to a production history that reflects progressively more sophisticated understanding of intelligence analysis in the analyst's occupational discipline.To this we add policy and operational impact when the analyst reaches that level. 
Conclusions: Analysis and Policy The foregoing discussion has suggested that professionalizing analysis will advance proficiency, expertise, and ultimately the quality of the analysis we provide to policymakers.Good analysts will have a "prepared mind" to deal with their own cognitive biases, and also pierce the shroud of secrecy and deception which adversaries use to obscure or distort their intentions and capabilities.Preparing both analysts and their organizations to overcome these hurdles to good analysis is the best way to avert new strategic surprises and intelligence failures, and better serve intelligence customers.prepares itself today, they will most certainly hold analysts and agencies accountable for tomorrow's surprises. Among the obstacles that face the recommendations urged here, two are prominent: Scarce resources, and organizational cultures which do not fully embrace the more rigorous training and education vital to professionalization.In the first case, budget cuts historically fall hardest on those elements deemed less critical or immediate.Perhaps inevitably, training and education throughout the U.S government-and most assuredly in intelligence-is usually an early victim of downturns in agency budgets, and any monies for new training programs are also slashed in favor of higher priority projects deemed to satisfy immediate needs or have greater visibility with policymakers.Unlike the military services that steadily assign a sizable proportion of their forces to training and education no matter the spikes in manpower demands, intelligence agencies typically view training as a nuisance or distraction rather than an investment in professionalization.In the IC, analysts often cannot be spared for training when they are in short supply relative to perceived insatiable consumer demands for greater production.This subordination of training and education to putative higher priorities is partly explained by organizational cultures which have not traditionally valued education. Since most analysts come to their jobs with some subject matter expertise, managers often presume they will learn whatever else they need "on the job" just as they did.On-the-job training throughout the IC historically trumps formal training and education both inside and outside the IC.This cultural bias reinforces a sense that training and education is properly a secondary priority.Moreover, agencies' perennial insularity from academe fosters poor understanding about educational opportunities to improve such professional skills as critical thinking and even subject matter expertise.Both of these hurdles, resource competition and cultural resistance, will need to be overcome if the professionalization of analysis is to advance. In the end, the measure of the analytic profession's performance is assessed by how its results are received and used.We hasten to suggest that without further professionalization the intelligence community is more at the mercy of partisan and bureaucratic politics, which can increase the misuse and misrepresentation of intelligence analysis.The intelligence controversies swirling around the 9/11 attacks and the 2002 Iraq WMD estimate painfully remind us how blame for policy failures can be left at the doorstep of intelligence analysts when their professional skills have been found wanting. In concert with demonstrated competence levels in the more mature professions such as medicine is the adoption of a code of ethics. 
Intelligence analysis needs such a code, not only to ensure the integrity and cognitive neutrality of analysis, but also to help shield analysts from later accusations when they have done their professional level best to deliver accurate, reliable, and objective results, no matter the policy stakes involved. While intelligence failures are sure to happen, the development of more professional skills, and the standards of conduct that go with them, will mitigate the chances that poor analytic tradecraft or lapses in integrity will be at the center of those future controversies. As one scholar has put it, politicization of intelligence is most likely to occur when intelligence is important to national security policies. 36 It is a safe bet that U.S. intelligence analysis on the current pressing issues apart from terrorism (e.g., Iran's nuclear program, along with that of North Korea, the mess in Syria and the Middle East, and indeed political unrest in any number of key countries) will also remain important and sometimes controversial, as those judgments will be based on limited information and shrouded in the secrecy and deception used by such states. Often assessments must rest on important assumptions that analysts are required to make about those foreign actors and their activities. The more transparent, rigorous, and open-minded analysts can be with policymakers about the limitations of their knowledge and insight, the better informed will be U.S. decisions and associated risks regarding those programs.

Similarly, the rise of China, potentially America's next peer economic, if not military, rival, will bedevil U.S. strategists, making them frustrated at times with the limits of what we can know about Beijing's intentions and capabilities. Most likely, the debate over China is going to heat up, placing intelligence at the center of those debates over the proper U.S. response (containment, engagement, or something in between). Thus, adopting the highest professional standards for analysis, maintaining analytic integrity, and being as candidly self-critical of our performance as our critics can sometimes be will help safeguard the intelligence community's credibility with the American public and future administrations. The future is too uncertain and too important to expect anything less from our intelligence community.

Major strides in the tradecraft, i.e., methodology, of intelligence analysis have been made since 2000, especially in the development and training of structured analytic techniques, and in growing acceptance of their use in finished intelligence products. The use of tradecraft groups to support line analysis is also now gaining acceptance in some agencies, and others have expressed growing interest in this form of methodological advancement. Still, current training in analytic methods reflects a largely cookbook approach to practical application (how to do it). Courses in intelligence successes and failures have been offered over the years, but any "lessons" are still largely implicit and applied superficially to analysis, if at all, and are not yet institutionalized in a way that supports learning organizations. Despite the classified publication of several relevant studies of analytic failures and successes, few practicing analysts seem aware of lessons learned from such studies of the successes and failures of their predecessors in their own agencies, much less in others. More importantly, there is little research conducted routinely into what "best practices" were employed or should have been.
The IC should move toward training programs that develop a deeper understanding of the epistemological rationale for such tradecraft. It should also bring into play the power of social and behavioral theories, now largely absent in intelligence analysis. Such theories can highlight hidden relationships, generate untested hypotheses, and help connect intelligence studies with other fields of social and political inquiry in building knowledge and understanding. 21

6. Learning organizations. Organizations must learn just as individuals do. Lessons-learned work to identify best practices (and prevent bad ones) is only recent to the IC (CIA's formal effort began only 10 years ago), but this effort appears to have not yet reached critical mass. It has not yet been systematically adopted throughout the IC, nor has its potential value even begun to be realized by agencies. Many of the current programs, and indeed the expansion of the National Intelligence University campus (and its relocation to Bethesda, proximate to Washington, D.C.), would lend themselves to such a long-term objective. Unlike the profession of arms, the profession of analysis has no progressive set of training requirements through which all future senior analysts must move. It would be worth considering how the ODNI could develop such a career-long program of training and education that would both develop individual analysts' skills and expertise and create more of a joint analytic culture. 22 (See Anne Daugherty Miles, "Thinking Holistically: PIE in the Sky?" a 2012 IAFIE award-winning paper, available at: http://www.iafie.org/resource/resmgr/2012_essay/miles_2012_iafie.pdf.)

Science-based analysis. As part of this training, analysts should be exposed to the power of a more science-based production of knowledge. The only proven method of correcting errors in judgment is one which relies on hypothesis testing, validation of information, transparency, and peer review. (See James B. Bruce, "Making Analysis More Reliable: Why Epistemology Matters to Intelligence," in George and Bruce, Analyzing Intelligence, 2nd ed., 2014, pp. 135-155.) 24

Analysis and Collection Disciplines. A vital area for analyst training will provide a deeper understanding of the collection sources on which intelligence judgments rest. As suggested above, too few analysts truly understand how they know what they know. Most are limited by an inadequate understanding of the methods underlying HUMINT, SIGINT, and GEOINT. Too few analysts invest the time needed to grasp the complexities of these disciplines or appreciate the strengths, weaknesses, or biases that such information sources bring to the analytic process. More analysts should spend time working with the major collection agencies. Additionally, training is needed on how collection systems work, how analysts can best use them, and how much confidence to place in the raw intelligence reporting that each produces.
Analytic methods, techniques, and skills are often what set analysts apart from subject matter experts outside the intelligence community. Many structured analytic techniques already exist 26 and they should become more widely utilized across the intelligence community. Structured analytic techniques have also been developed for tactical-level military applications. 27 This is happening, slowly and unevenly, but it could be further encouraged if the ODNI were to go beyond the community-wide standards now in ICD 203 by further developing structured analytic tradecraft curricular materials and courses for those agencies not able to support their own analysis training. Workshops in using specific techniques should be ongoing, with the development of case studies on specific examples of how a Structured Analytic Technique (SAT) was used.
2018-12-12T23:39:35.148Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "4341d102db51962c1fdc117bf0f505c39d0a8f8d", "oa_license": "CCBYNC", "oa_url": "https://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=1454&context=jss", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "cd645dd64c0fea34217800aaaa0ea563ecf5c71e", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
119395137
pes2o/s2orc
v3-fos-license
Electronic properties of the Dirac and Weyl systems with first- and higher-order dispersion in the non-Fermi-liquid picture

We investigate the non-Fermi-liquid behaviors of 2D and 3D Dirac/Weyl systems with low-order and higher-order dispersion. The self-energy correction, symmetry, free energy, optical conductivity, density of states, and spectral function are studied. We find that, for Dirac/Weyl systems with higher-order dispersion, the non-Fermi-liquid features remain even at finite chemical potential, and they are distinct from the ones in the Fermi-liquid picture and the conventional non-Fermi-liquid picture. The Landau damping of the longitudinal excitations within the random-phase approximation (RPA) for the non-Fermi-liquid case is also discussed.

Introduction

Unlike metals, where Fermi-liquid theory is valid, for topological insulators or near quantum criticality in modern condensed matter physics a non-Fermi-liquid theory is required. In Fermi-liquid theory, the excitations near the Fermi surface (usually within the order-parameter fluctuation gap) are Fermionic, which results in the uniform spin susceptibility [1], in contrast to the one in a topological insulator [2], and also leads to a linear temperature dependence of the electronic specific heat rather than the logarithmically divergent one found in heavy-fermion systems as well as in superconductors. In this letter, we investigate the non-Fermi-liquid behaviors of the 2D Dirac system with first-order dispersion in a continuum model. The exchange-induced Fermionic and Bosonic self-energy corrections as well as other observable quantities are calculated, and likewise for Dirac/Weyl systems with higher-order dispersion. Considering the disorder effect originating from impurities, the polaron as an excited quasiparticle is important when considering the many-body (many-electron) effect, and the disorder-induced self-energy [3] describes the impurity-Fermion (for the Fermionic polaron) or impurity-Boson (for the Bosonic polaron) interaction, with the impurity dressed by the corresponding particle-hole excitations. Besides, since spin rotation is missing in the Dirac δ-type impurity field, the spin structure is fixed, and the spin of the impurity and that of the majority particles are usually opposite, which provides the opportunity to form Cooper pairs and strongly bound dimers.

Self-energy correction in 2D Dirac system

The isotropic (2+1)D Dirac Fermions coupled to the long-range Coulomb interaction can be described by an effective action where ψ is the Fermion field and φ is the Bosonic field, which describes the long-range instantaneous Coulomb interaction and is related to the order parameter; the Fermions couple to the Bosons through the coupling constant g² = 2πe²/ǫ, where ǫ is the background dielectric constant. H₀(k) is the non-interacting Hamiltonian for the 2D linear Dirac system, whose eigenvalues can be obtained by solving det(H − E) = 0, with θ = arctan(k_y/k_x). By defining the scattered momentum as k′ = k + q, the exchange-induced Fermion self-energy follows; it is independent of the Boson Matsubara frequency ω = 2mπT due to the instantaneous approximation of the Coulomb interaction at one-loop order.
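To make the dispersion concrete, here is a minimal numerical sketch (not taken from the paper; it assumes the standard representation H₀(k) = v_F(k_x σ_x + k_y σ_y) with an illustrative v_F = 1, since the paper's explicit matrix is not reproduced here) confirming that diagonalizing the 2D linear Dirac Hamiltonian gives E_±(k) = ±v_F|k| with θ = arctan(k_y/k_x):

```python
# Minimal sketch: eigenvalues of a 2D linear Dirac Hamiltonian.
# Assumption: H0(k) = vF*(kx*sigma_x + ky*sigma_y), vF = 1 (illustrative units).
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def h0(kx, ky, vF=1.0):
    return vF * (kx * sigma_x + ky * sigma_y)

kx, ky = 0.3, -0.4
print(np.linalg.eigvalsh(h0(kx, ky)))   # ~ [-0.5, 0.5], i.e. +/- vF*|k|
print(np.hypot(kx, ky))                 # |k| = 0.5
print(np.arctan2(ky, kx))               # theta (arctan2 resolves the quadrant)
```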
The instantaneous Coulomb interaction is given by the scalar potential, which has the propagator ⟨Tφ(t, r)φ(t′, r′)⟩ = −iδ(t − t′)/(2|r − r′|); this leads to nonrelativistic features with broken Lorentz invariance, as widely seen in non-perturbative RG analyses [4] (while in perturbative RG analyses the instantaneous approximation is sometimes not adopted due to the effect of the vector potential [5]). G₀(Ω, k) = (Ω + i0 − H₀)⁻¹ is the bare Green's function (Fermion propagator). The infinitesimal quantity i0 (which corresponds to the scattering rate or the Fermionic damping rate) is important for the convergence of the integral, and its sign is the same as that of the frequency (here we assume positive frequency). The Pauli exclusion principle also enforces i0 → 0 in the static limit. D₀(k) = 1/k² is the bare Boson propagator. Thus, for the static case, we have the result for a single Fermion species. Unlike the momentum-shell integration, we only apply an ultraviolet cutoff here to treat the non-Fermi-liquid case with nearly zero gap (and chemical potential); the ultraviolet cutoff during the calculation is important to prevent the divergence of the integral. At higher temperature, the repulsive Coulomb interaction competes with the attractive electron-phonon coupling. The unscreened on-site Coulomb repulsion averts double occupation of the lattice sites and thus closes the gap, while the electron-phonon coupling acts oppositely. The lowest-order contribution to the exchange-induced self-energy is written in terms of the Coulomb repulsion potential U and the fluctuation-exchange potential. Here we approximate the irreducible vertex function by the on-site Hubbard interaction, and the resulting exchange self-energy obviously goes beyond the instantaneous approximation, while for the attractive phonon-mediated interaction the self-energy reads similarly, with the lowest-order phonon propagator P(q, Ω) [6], where U_{e−ph} is the electron-phonon coupling parameter and Ω_ph is the phonon frequency. For strong enough on-site attractive Hubbard interaction, the charge-density-wave (CDW) phase or the gapless semimetal phase becomes unstable to the s-wave superconducting phase, and thus the symmetry described by ⟨ψ₊|σ_{x/y}|ψ₋⟩ = 0 is broken (± refers to up and down spin, respectively); the orbital degree of freedom is not considered here. For the 2D Dirac semimetal, due to the absence of impurity scattering at the Dirac point with zero density of states, the short-range interaction is weak and insufficient to destabilize the Dirac Fermions. For the superconducting phase without the Coulomb repulsion (instantaneous Coulomb interaction) and disorder, Lorentz invariance is possible with isotropic Fermion and Boson velocities (i.e., in the case of supersymmetry, which interchanges bosons and fermions [7]), which can be realized at low energy by a metallic (polarizable) superstrate. For the minimal model in Eq. (2), the time-reversal symmetry can be expressed as ΘH(k)Θ⁻¹ = H(−k), where Θ = iτ_y K is the time-reversal operator. For the 2D lattice model, where d_z(k) is the momentum-dependent gap function, the particle-hole symmetry at half-filling can be revealed by ΞH(k)Ξ⁻¹ = −H(−k), where Ξ = τ_y K is the particle-hole operator. Topologically, Θ² = 0, ±1 corresponds to the time-reversal symmetry and Ξ² = 0, ±1 corresponds to the particle-hole symmetry.
Although the time-reversal symmetry and the inversion symmetry are broken in the presence of a gap function or by the charge-density-wave (CDW) order formed by polarized electrons, the symmetry described by the product of Θ and the in-plane mirror reflection operator M_x could be preserved [8], i.e., ΘM_x, which protects the semimetal nature against the weak short-range interaction. The weak short-range interaction cannot be taken into account in the RG analysis, while the frequency-dependent self-energy in a non-Fermi-liquid system is proportional to the anomalous dimension and the RG parameter (the logarithmic term). The anomalous dimension also implies the missing pole structure of the Green's function, which corresponds to the electron addition and removal energies in the noninteracting case [9]. At one-loop order, the bare Boson propagator (phonon or photon) is modified as D(Ω, k) = (k² − Σ_b(Ω, k))⁻¹, where Σ_b(ω, k) is the Boson self-energy in density-density correlation form; here we use discrete values of the frequency, since otherwise the above integral becomes zero, and the corresponding summation formula is used. Here Ω′ = Ω + ω and g_s g_v = 2 denotes the Fermion species (spin and valley degrees of freedom). N_F(x) = (1 + e^{x/T})⁻¹ is the Fermi-distribution function. The above expression also implies that the Boson self-energy is related to the equation of motion for the bare Green's function. Through the Ward identity γ = ∂_ωΣ, which is independent of both the external frequency and the scattering wavevector, the vertex function can be obtained. To simplify the calculation, we restrict ourselves to the gapless case; then the Boson self-energy takes a form in which ss′ = −1 due to the dominating interband transition, and the vertex function at half-filling follows.

3 Disorder effect and free energy in the grand-canonical ensemble

For the disordered system, the Fermion self-energy in the lowest-order approximation reads Σ_LO = n_i V_{k,k′}, where n_i is the impurity concentration and V_{k,k′} is the scattering potential. For a localized potential, the disorder-induced self-energy can be momentum-independent due to rotational invariance. In low-order perturbation theory, the self-energy matrix contains a gap function in the diagonal elements, while the off-diagonal elements are missing. The random-phase-approximation (RPA) result is valid only in the long-wavelength limit as well as the low-energy limit of the large-flavor-number analysis, in which case the Eliashberg theory as well as Migdal's theorem are valid. In this case the Boson propagator (as well as the Boson-frequency-related spin susceptibility) is overdamped due to the small Boson velocity and small external Boson momentum (compared to the Fermionic ones), i.e., Landau damping. The above results are correct for the low-energy Fermionic excitations (within the band gap) in RPA, with chemical potential much larger than k_B T. Conversely, the non-Fermi-liquid feature emerges for the case µ < k_B T. The strong screening of the disorder by polarized Fermions also provides the possibility of recovering the Fermi liquid within the spectral gap of the order-parameter fluctuations (of the order of D²/W, where W is the Fermion bandwidth), with coherent Bosonic excitations, except when the disorder-induced linewidth [10] is larger than the excitation energy. Meanwhile, the excitation gap gives rise to a dissipation effect which is related to the free energy and the conductivity.
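As a small numerical aside (illustrative units, with a made-up constant self-energy; the actual Σ_b is the polarization integral described above and is not computed here), the two objects just introduced, the Fermi distribution N_F and the dressed boson propagator D = 1/(k² − Σ_b), can be sketched as follows:

```python
# Sketch in illustrative units: Fermi occupation and the one-loop dressed boson
# propagator D = 1/(k^2 - Sigma_b).  Sigma_b is a placeholder constant here,
# NOT the paper's polarization integral.
import numpy as np

def n_fermi(x, T):
    """N_F(x) = 1 / (1 + exp(x/T))."""
    return 1.0 / (1.0 + np.exp(x / T))

def d_dressed(k, sigma_b):
    """Dressed boson propagator D(Omega, k) = 1/(k^2 - Sigma_b(Omega, k))."""
    return 1.0 / (k**2 - sigma_b)

print(n_fermi(np.array([-1.0, 0.0, 1.0]), T=0.5))   # ~ [0.88, 0.50, 0.12]
print(d_dressed(1.0, sigma_b=-0.3 + 0.05j))          # a complex Sigma_b encodes damping
```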
Further, the response function is nonzero even at q = 0 for Bosonic frequencies in the range ω < v_F q < 2D < 2µ. For Bosonic frequencies larger than v_F q, the transverse spin excitation (still within the band gap) is the Goldstone spin wave and is thus gapless in the long-wavelength limit, in contrast to the longitudinal excitations, which are gapped even at q = 0. Unlike the weak short-range interaction, the electron-electron interaction mediated by the gapless Bosonic mode is long-ranged at the quantum critical point, with the gapless critical fluctuation of the order parameter (i.e., of the Bosonic excitations), which can be described by the Ginzburg-Landau functional. The Ginzburg-Landau functional (the free energy) describing the order-parameter fluctuation here does not contain the term φ*∂_τφ, due to the particle-hole symmetry stated above, where φ is the two-component complex amplitude. For a bipartite system, the particle-hole symmetry suggests the existence of zero-energy modes which satisfy φ_A = ±iφ_B. In the presence of particle-hole symmetry the ac Hall conductivity vanishes while the dc conductivity is preserved [11]. At finite temperature, the free energy can be obtained from the partition function based on the Fermion propagator, and the free energy density then follows. The above integral can be solved analytically, and for the semimetal at half-filling it reduces to an expression in which lnΓ denotes the logarithm of the Gamma function and ψ^(n) is the nth derivative of the digamma function. By using the approximate relation of Ref. [12], the free energy density can be rewritten in terms of the polylogarithm function Li_n(x). To treat the many-body effect in the 2D Dirac system, the perturbations can be taken into account in the grand-canonical ensemble, where we rewrite the tight-binding model Hamiltonian with ⟨ij⟩ denoting nearest-neighbor sites, t the nearest-neighbor hopping, and n = c†_i c_j. U_ij is the Coulomb interaction strength in the second term, which is the Coulomb-exchange-related (bilinear) term, and V₀ is the impurity scattering potential (magnetic or nonmagnetic) in the third term, which is the disorder-related term. The creation and annihilation operators here are all particle operators. Then the free energy density (grand potential) is still F = −T lnZ, but with the partition function in the interacting case written as a path integral over Grassmann variables, where N is the particle number operator and the action S contains a first term that is always positive when summed over the Fermionic frequencies, ψ(iΩ) being a real Grassmann variable. The perturbed Green's function, which satisfies the Dyson relation, can be obtained as the ratio of the partition functions. The case G(iΩ, U_ij, V₀) = 1 clearly indicates breaking of the supersymmetry [13]. We consider the δ-type impurity potential in the above disorder term, which indicates the Born approximation. In such a case the spin structure is fixed, as also observed in the surface states of 3D topological insulators; thus the spin rotation is missing, as can be observed in nodal-line semimetals, and the spin current operator becomes zero in the helicity basis.
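The grand-canonical bookkeeping above can be mimicked numerically. The sketch below uses a toy band rather than the Dirac dispersion or the polylogarithm expression of the text; it evaluates a free-energy density of the form F(T) = −T ⟨ln(1 + e^{−(ε−µ)/T})⟩ and extracts the specific heat C_V = −T ∂²F/∂T² by finite differences, the relation quoted in the next section:

```python
# Toy sketch of a grand-canonical free-energy density and specific heat.
# Assumptions: a flat toy band eps in [-1, 1] and mu = 0; illustrative units only.
import numpy as np

eps = np.linspace(-1.0, 1.0, 2001)   # placeholder band energies
mu = 0.0

def free_energy(T):
    # F = -T * <ln(1 + exp(-(eps - mu)/T))>, averaged over the toy band
    return -T * np.mean(np.logaddexp(0.0, -(eps - mu) / T))

def specific_heat(T, dT=1e-3):
    # C_V = -T d^2F/dT^2, via a central finite difference
    return -T * (free_energy(T + dT) - 2.0 * free_energy(T) + free_energy(T - dT)) / dT**2

print(specific_heat(0.1), specific_heat(0.2))
```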
The Born approximation guarantees the sign-invariance of the momentum before and after the scattering, and the reversed scattering amplitude has the same value as the original one [14]. In the grand-canonical ensemble, the spin current vanishes in the thermodynamic limit (and thus with infinite µ) due to the vanishing spin density, even beyond the Born approximation. Beyond the δ-type impurity field, an extrinsic spin current emerges, and the scattering of both the impurity and the majority particles (with opposite spins) creates the Fermionic polaron; the optical Hall conductivity of the polaron (in fact this is the only case where the transverse conductivity equals the Hall conductivity) determines the current in the direction orthogonal to the external force [15], and is related to the current operator by σ_xy = J_x/(−∇_y V) in linear response theory, where V is the external potential. The current operator here is much smaller than the one in QED, just as the group velocity operator is much smaller than the speed of light. For the Bosonic polaron, with the system immersed in a Bose-Einstein condensate, the interaction is stronger than the Fermionic one due to the higher compressibility of the BEC compared to the Fermionic medium. The Bosonic polaron is formed by a Fermionic impurity dressed by the majority Bosonic excitations.

Optical conductivity

The destroyed Fermi-liquid behavior can be observed through the singular Bosonic susceptibility at the nesting wavevector, and it cannot be found even in the low-energy limit (far away from the quantum critical point) when the ultraviolet cutoff applied during the calculation is infinite, as can be found in many-electron systems, for example, when the Fermions couple to a 1D Ising variable [16], to the fluctuating transverse gauge field [17], or to the longitudinal Bosonic excitation [1]. Non-Fermi-liquid phenomena are widely observed in heavy-fermion systems and the cuprate materials [18,19], including the logarithmically divergent specific heat, which is related to the free energy by C_V = −T ∂²F/∂T². In the presence of monochromatic light, the nondiagonal part of the optical conductivity can be obtained by summing over the eigenvalues, where only the retarded Green's function is used, in contrast to the Streda formula [20]; the corresponding identity is used here. The velocity matrix elements are such that the electron/hole indices have ss′ = −1 during the optical transition due to the Pauli exclusion principle, while the spin indices σ_z before and after the transition are invariant when the (both intrinsic and extrinsic) Rashba couplings are negligible; otherwise the spin index changes, since it is no longer a good quantum number. In the case where the Fermi level lies within the band gap, the diagonal elements of the conductivity are zero (which implies the C₄ symmetry of the system, since σ_xx = σ_yy), while the non-diagonal elements become independent of the Dirac mass due to the vanishing classical term [20]. Note that the results σ_xx = σ_yy and σ_xy = −σ_yx also appear in the optical limit q → 0 (also called the local limit). Taking into account the effect of the chemical potential, the Hall conductivity is composed of two parts, where the first part corresponds to the case where the chemical potential is smaller than the Dirac mass and the second part to the opposite case, while the longitudinal optical conductivity has an intraband part that vanishes except at finite temperature and in the infrared limit (nearly zero photon energy).
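For the gapped-Dirac Hall response mentioned above, a standard and easily checked benchmark is the half-quantized anomalous Hall conductivity of a single gapped Dirac cone. The sketch below is not the paper's formula; it assumes H = v(k_x σ_x + k_y σ_y) + m σ_z, for which the occupied-band Berry curvature Ω(k) = m v²/[2(v²k² + m²)^{3/2}] integrates to sign(m)/2 (in units of e²/h, with the overall sign fixed by conventions):

```python
# Hedged numerical check: Berry-curvature integral for one gapped Dirac cone.
# Assumption: H = v*(kx*sx + ky*sy) + m*sz with illustrative v and m.
import numpy as np
from scipy.integrate import quad

v, m = 1.0, 0.2

def berry_curvature(k):
    return m * v**2 / (2.0 * (v**2 * k**2 + m**2) ** 1.5)

# C = (1/2pi) * Integral d^2k Omega(k); the angular 2*pi cancels the 1/2pi,
# leaving the radial integral below, which equals sign(m)/2 = 0.5 here.
c, _ = quad(lambda k: k * berry_curvature(k), 0.0, np.inf)
print(c)   # ~0.5  ->  sigma_xy = (e^2/h) * sign(m)/2 for a single cone
```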
Unlike the hopping-current-related conductivity, the dissipation-current-related conductivity remains finite in the static limit and is proportional to i/ω + πδ(ω) [11], where δ here is the Dirac δ function; this part of the conductivity is negligible under a magnetic field or at low temperature. For the case of a large band gap, the frequency of the optical transition during the intraband process is much larger than the interband one, as shown for WSe₂ [21].

Observable quantities in Dirac/Weyl systems with higher-order dispersion

Next we discuss the 2D topological insulator (TI) with higher-order dispersion (similar to the multi-Weyl semimetal) and a small (but momentum-dependent) gap, whose Hamiltonian contains a Dirac mass that we assume to be momentum-dependent and controlled by the material-related constants c₁ and c₂; k₀ is another material-related constant in units of momentum [22] and a is the lattice spacing [23], e.g., it equals √3/2 times the lattice constant in a graphene-like hexagonal lattice system. The term (k · σ) here only appears in chiral systems with spin-momentum locking, while for non-chiral systems it is usually replaced by the spin operator σ_z, and the interband transition also vanishes in that case. The momentum-dependent mass term (c₁ + c₂k^m) here is similar to the effect of next-nearest-neighbor (intrinsic) Rashba coupling. Here we note that we present a discussion of the 2D Dirac system extended to generic order m, which is also related to the Chern number in the gapless case; it is applicable not only to the 2D TI but also to the multilayer TI with a single 2D Dirac node per surface Brillouin zone [24,25]. The order m controls the in-plane band dispersion; for example, m = 1 for the (topologically protected) linear Dirac dispersion, which currently does not exist in 3D real space [26], m = 2 for the quadratic dispersion, and m = 3, 4 for the trigonal-warping systems found in monolayer MoS₂ [27]. In such a case the velocity operators can be obtained by using the relation v_α = ∂H/∂k_α, where the relation ∂k/∂k_α = k_α/k (α = x, y, z) is used. Here we write the third Pauli matrix as σ₃ to distinguish it from the spin operator in the z direction. For the above multi-node dispersion, the eigenvectors follow, where we pick the momentum component k_x = (k^m)^{1/m} cos θ as a good quantum number and Φ_ε(y) is the harmonic oscillator wave function. The velocity matrix elements can then be obtained based on the above velocity operators. In the noninteracting case, the Bosonic propagator is overdamped, with gapless longitudinal excitations (or order-parameter fluctuations) subject to Landau damping; it is also found that the Landau damping of the multi-Weyl semimetal is weaker than that of the marginal Fermi liquid [5], which is distinct from the normal non-Fermi-liquid states. The dispersion of the multi-node Dirac system can be obtained by solving the above Hamiltonian, where we rewrite the term v_F as the scale-dependent parameter ξ. At half-filling, the Fermi level crosses the multi-band touching point and the Coulomb interaction remains long-ranged, as described by the Bosonic field, due to the poor screening of the electron-electron Coulomb interaction. The imaginary part of the exchange-induced self-energy is related to the quasiparticle relaxation time (lifetime), while the real part is related to the interaction strength and the quasiparticle weight.
Beyond the instantaneous approximation induced by the scalar potential, the exchange-induced self-energy containing the Bosonic frequency involves the dressed Coulomb potential as its last term. Σ^m_b(Ω, q) is the Bosonic self-energy (i.e., the dynamical polarization here), and the corresponding multi-Dirac-node bare Green's function follows. Based on this Green's function, the dynamical polarization is available from the above expression, but it is too lengthy to write out explicitly, since it contains a hypergeometric function whose parameters are all related to the order m. We therefore turn to a more concise expression in the zero-temperature limit, where b is the angle between k and k′ and a is the angle between k and q. The dynamical polarization can be divided into intraband and interband parts, Σ^{m,intra}_b(Ω, q) and Σ^{m,inter}_b(Ω, q), where the first term of the intraband part can be obtained after some algebra; the other three terms can be obtained in the same way, and the exchange-induced self-energy then follows. Note that for non-chiral Fermions, like those in the 2D electron gas, Σ^{m,inter}_b(Ω, q) vanishes since cos b = 1. For 3D Dirac semimetals [28,29,30] like Na₃Bi, Cd₃As₂, or PtTe₂, the chiral anomaly emerges, since in odd space dimensions the anticommutation relation for the γ₅ matrix is allowed, and each Dirac node resolves into two Weyl nodes [29] arranged along the z direction of momentum space with opposite chirality. The Hamiltonians of the simplest 3D Dirac and Weyl semimetals are H = Σ_i v_i k_i σ_i (i = x, y, z) and H = Σ_i v_i k_i σ_i + χ v_z(k_z − χδk_z) (i = x, y), respectively. The chiral effect gives the sign ± to ξ. In 3D Dirac/Weyl semimetals, perturbations can remove the nodal line and leave the nodes [31], while the nodes cannot be removed but can only be shifted [24]. As mentioned above, since the spin rotation is missing due to the Dirac δ-type impurity field, rotational invariance is preserved, which is also partly due to disorder averaging [32,33]; thus the disorder-induced self-energy is independent of the external momentum, where Γ₀ is the irreducible vertex function, which does not contain the Levi-Civita symbol here, unlike the one in Ref. [34]. The vertex correction vanishes when it contains only the exchange-induced self-energy correction in the instantaneous approximation, which can be obtained from the Ward identity ∂Σ(Ω, k)/∂ω = Γ(Ω, k); besides, the vertex correction also vanishes in the large-species case (large g) or when the integration momentum shell vanishes (the RG flow parameter ℓ = 1). Here n_i is the impurity concentration and V₀ is the impurity scattering potential (a scalar potential when only nonmagnetic impurities are present, without magnetic impurities). For the 3D Dirac system, the Dirac nodes can be divided into Weyl nodes along the z direction by using the projection operator [34], where µ_χ is the chemical potential with chirality χ = ±1 and µ₊ = µ₋ = µ in the undoped case. δk_z is the distance in momentum space by which the Weyl node is removed from the previous Dirac node, and it explicitly breaks the time-reversal symmetry. v_z = a t⊥ sin(δk_z a) is the z-direction velocity. Here we define k_x = k cos θ, k_y = k sin θ, k = k sin ϕ, k_z = k cos ϕ, and still use the same definition as above. The first term of the above Hamiltonian contains no out-of-plane components, which indicates the untilted type-I Weyl semimetal when the mass term is missing.
The mass term here is dominated by the momentum k_z, rather than by the in-plane momentum as in the previous model, and it explicitly breaks the inversion symmetry. The eigenvalues can then be obtained as in Eq. (47). The imaginary and real parts of the bare Green's function (Fermion propagator) G_{mT} follow, where the Sokhotski-Plemelj theorem and the Kramers-Kronig relation are used; θ(x) here is the step function. The spectral function including the disorder-induced self-energy reads [5,36]

A_{mT}(Ω) = |Im Σ_D(Ω)| / [(Ω + Re Σ_D(Ω) − ε_{mT})² + (Im Σ_D(Ω))²],

which contains information not only about the dispersion but also about the quasiparticle residue and the Fermion relaxation. In the presence of the long-range Coulomb interaction screened by the collective excitations in the non-Fermi-liquid state (but with finite chemical potential), the above perturbed spectral function is also related to the excitation damping, like the plasmon mode that is damped into particle-hole excitations due to the non-zero imaginary part of the polarization function (Bosonic self-energy), as we have studied [37,38,39,40].

Conclusion

In conclusion, we investigate the self-energy correction, symmetry, free energy, and transverse optical conductivity of the 2D Dirac system in the non-Fermi-liquid state. The non-Fermi-liquid behaviors of the 2D and 3D Dirac/Weyl systems with higher-order dispersion are also studied, and we find that the non-Fermi-liquid features remain even at finite chemical potential and are distinct from the Fermi-liquid picture and the conventional non-Fermi-liquid picture. In the presence of impurity scattering, the Fermionic/Bosonic polaron is formed by dressing with the Fermion/Boson majority particles, as widely found in ultracold Fermi gases [41,42] and BECs [43], respectively, and these polarons are also important in studying the perturbation effect, as done in this paper within the contact-potential (Dirac δ-type impurity field) context. In the presence of the gapless order-parameter fluctuation, the Landau damping of the longitudinal excitations within RPA for the non-Fermi-liquid case is also discussed.
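As a quick illustration of the quoted spectral function, the sketch below evaluates A_{mT}(Ω) with an assumed constant self-energy; the real Σ_D(Ω) is frequency-dependent and comes from the disorder average in the text, so the numbers here are illustrative only:

```python
# Sketch of the Lorentzian-like spectral function quoted above,
# A(Omega) = |Im Sigma_D| / ((Omega + Re Sigma_D - eps)^2 + (Im Sigma_D)^2),
# with an assumed constant Sigma_D = -0.05 - 0.10i and eps = 0.2 (illustrative).
import numpy as np

def spectral(omega, eps, re_sigma=-0.05, im_sigma=-0.10):
    return np.abs(im_sigma) / ((omega + re_sigma - eps) ** 2 + im_sigma ** 2)

omega = np.linspace(-1.0, 1.0, 9)
print(spectral(omega, eps=0.2))   # peaks near Omega = eps - Re Sigma_D, width |Im Sigma_D|
```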
2018-11-21T16:21:10.000Z
2018-11-21T00:00:00.000
{ "year": 2019, "sha1": "da2758ca2a3cbe3fb7a00822e5c9871ed16401a0", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1811.08809", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "dc6c35281ed3b106d31a33e40a618e6bc3f7bc38", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
271674856
pes2o/s2orc
v3-fos-license
Hepatitis B serology testing and vaccination for Gambian healthcare workers: A pilot study

Background: Hepatitis B infection is a significant global health threat contributing to healthcare worker (HCW) harm, threatening already precarious health systems. Aim: To document self-reported hepatitis B vaccination history and serology results. Setting: A select group of high-risk HCWs in a tertiary care hospital in Banjul, the Gambia. Methods: This was a cross-sectional pilot study conducted from 12 June 2023 to 16 June 2023. Participants were HCWs at high risk for blood exposure who completed a health history interview prior to serology testing for hepatitis B surface antigen (HBsAg) and hepatitis B surface antibody (anti-HBs) and vaccination. Results: The pilot study enrolled 70 HCWs who were primarily female (n = 44; 62.9%). The majority of the participants, 43 (61.4%), reported having received at least one dose of the hepatitis B vaccine in the past. The overall prevalence of HBsAg positivity in this study was 4.3% (95% confidence interval [CI]: 1.5–11.9), all in older participants. Importantly, 60.0% (95% CI: 48.3–70.7) of participants had no anti-HBs detected. Conclusion: This pilot study documents a higher prevalence of hepatitis B infection among older workers and the lack of anti-HBs across the majority of participants. This suggests a serious vulnerability for the individual health worker and indicates the need for a wider screening and vaccination campaign to assess the risk across the Gambian health workforce. Contribution: This pilot study provides the first evidence to support a wider assessment of the hepatitis B serology status of Gambian health workers to gauge the need for a broader vaccine campaign.

Introduction

Background

Hepatitis B infection is a significant global health threat that contributes to the loss of healthcare workers (HCWs) and puts the health workforce at considerable risk. 1 According to the World Health Organization (WHO), viral hepatitis is responsible for approximately 1.34 million deaths annually. 2 Healthcare workers have a four-fold increased risk relative to the general population for exposure to hepatitis B virus (HBV) from infected patients. 1,3 As a result of this significant public health challenge, the WHO Assembly adopted the first global health sector strategy on viral hepatitis in 2016 to protect the global health workforce. In Africa, HBV is estimated to affect 15%-20% of the population. 4 In a low- and middle-income country (LMIC) like the Gambia, HBV prevalence varies, ranging from 13% among blood donors, 5 to 9% among pregnant women, 6 and between 8% and 17% among human immunodeficiency virus (HIV)-infected individuals. 5 Although the prevalence of HBV varies in the Gambian population and is relatively high, HCWs are not systematically vaccinated against HBV, due to a history of infant vaccine campaigns in the last 30 years and several cultural, political and socioeconomic factors. The loss of HCWs during both the Ebola crisis in West Africa and the coronavirus disease 2019 (COVID-19) pandemic 7,8,9 has demonstrated how indispensable HCWs are to a functioning and resilient health system. Healthcare worker protection must therefore become more strongly prioritised in countries where health systems are already fragile. 10
Health worker protections in the Gambia

The Gambian Ministry of Health (MoH), with support from the University of Maryland, Baltimore's (UMB) WHO Collaborating Centre for Occupational Health and the University of the Gambia School of Medicine and Allied Health Sciences, has collaborated since 2014 through a series of multi-day trainings, hospital and clinic site visits and key informant consultations to build capacity in basic occupational health services for health workers. These activities formed the basis of a National Occupational Health and Safety Policy for Healthcare Workers completed in 2018 and validated in 2020. One of its priorities was prevention of blood-borne hazards (e.g., hepatitis B, C and HIV) in the health workforce. Following the validation, an implementation plan was drafted. Due to the high prevalence of HBV in the general population in the Gambia, there was a concern for health worker risk, given that systematic vaccination of health workers had not been standardised in the Gambia. However, the younger population had likely received part or all of the vaccine series as infants. To clarify the need for vaccination, the decision was taken to assess hepatitis B serology markers in a pilot study of health workers.

History of childhood vaccines in the Gambia

Maintaining high immunisation coverage is a key component in reducing morbidity and mortality from vaccine-preventable diseases. In 1974, the WHO launched the Expanded Programme on Immunization (EPI) to make vaccines available to all children. 11 Five years later, the EPI was established in the Gambia to target childhood diseases, including hepatitis B. From 1986 to 1990, the Gambia launched the nationwide Gambia Hepatitis Intervention Study (GHIS), which targeted infant HBV vaccination as part of the EPI. 12 The objective of the GHIS study was to evaluate the protective effectiveness of infant HBV vaccination on the incidence of hepatocellular carcinoma (HCC) in adulthood. 13 While infant vaccination was included in the GHIS, coverage may have been variable. In addition, the three-dose series means that a sizable number of eligible children may not be fully immunised. Final results from the study were estimated to take 30-35 years, and complete elimination of infection was expected to take 20-30 years. 13 When the trial finished in 1990, the national infant hepatitis B vaccination programme replaced GHIS. Given initial positive results in LMICs globally, the WHO recommended that all member states include the hepatitis B vaccine in their national childhood immunisation services by 1997. 14

Waning immunity

While the Gambia has worked to expand childhood vaccination coverage, and studies have shown that the full three-dose primary hepatitis B vaccination series provides long-term immunity, it may not provide lifelong protection, as immunity has been shown to wane over time. 15,16 A previous study analysing HBV immunity 15 years post-immunisation concluded that one or more boosters are needed to protect individuals from breakthrough infections. 17 Another study looked at serologic hepatitis B immunity in HCWs and found that 29% of workers who were vaccinated against hepatitis B showed no serologic evidence of hepatitis B immunity. 18 The lack of response in a percentage of HCWs means that many are still at risk for infection.
Barriers to hepatitis B vaccine uptake

In addition to waning immunity, the three-dose vaccine schedule puts a strain on families that experience travel-related barriers during infant vaccine campaigns, meaning that some children may not be fully covered. Barriers to vaccination in adult HCWs include financial costs of vaccine distribution, lack of hospital policy, low risk perception, fear of side effects, lack of time, insufficient cold-chain storage and lack of trained community health workers. 15,16,19,20,21 Furthermore, a lack of awareness of the vaccine's effectiveness contributes to inadequate vaccine uptake. 22 The combination of each of these barriers causes vaccination coverage to plateau.

Interest in high-risk adults

Although historically the WHO has supported vaccine campaigns for many vaccine-preventable diseases (VPDs), its focus has been almost exclusively on the paediatric population. 23 Recently, however, the WHO has begun to expand its focus to the immunisation of vulnerable adult populations. Because HCWs are frequently exposed to infectious patients, they are considered an especially vulnerable adult population. In 2022, the WHO released an implementation guide for the vaccination of HCWs that outlined the latest recommendations and programmatic considerations for the vaccination of HCWs. 24 Specific vaccination recommendations include hepatitis B, as well as influenza, measles, mumps, rubella, pertussis and varicella. The guide highlights the need to integrate HCW vaccination into existing occupational health and safety policies and suggests that, as part of a national comprehensive viral hepatitis response, countries may consider establishing a hepatitis B testing and vaccination approach for health workers at no cost to the employee. This report describes initial efforts undertaken to include hepatitis B serology testing and vaccination in the occupational health programme for the health workforce in the Gambia, West Africa.

Study objective

The objective of this pilot study was to document self-reported hepatitis B vaccination history and hepatitis B serology results in a select group of high-risk HCWs in a tertiary care hospital in the Gambia.

Research methods and design

This descriptive, cross-sectional pilot study was conducted at Edward Francis Small Teaching Hospital (EFSTH) in Banjul from 12 June 2023 to 16 June 2023. The EFSTH is the only tertiary hospital in the Gambia. The hospital serves as the primary referral centre for the nation and sees patients from across the country. All HCWs from the main laboratory, the dialysis unit, and the labour and delivery unit were invited to participate in this pilot study, as they were likely to be at increased risk for blood and body fluid exposure because of the nature of their job duties. To publicise the effort and promote participation, two of the senior investigators met with the hospital administration prior to the planned start date to explain the project background and protocol. The hospital, through the human resource department, sent a memo to each of the hospital department heads. The memo was then shared in the WhatsApp hospital communication groups of the different departments to inform them about the study and the scheduled date, time and place.
The screening team was composed of principal investigators, clinical investigators, counsellors, data collectors, laboratory staff, vaccinators and nurses. Before the screening, the team met and determined roles and responsibilities of staff, which were reiterated each morning before the start of the screening day. Once the team was set up each morning, the principal investigator and the clinical investigator visited the respective work unit locations or offices to briefly refresh potential participants about the pilot and invite them for screening. This was done daily to serve as a gentle reminder for those yet to be screened. The study participants were first seen at the counselling unit for pre-test counselling and to obtain written informed consent (see Figure 1). After a participant consented, a study number was allocated, which became their study identifier to maintain confidentiality. The participants were then sent to the data collection room. The data collectors then administered the study questionnaire, which included questions about demographics, vaccination history and clinical department. Country of origin was also asked because the Gambia was an early adopter of the infant hepatitis B vaccine, permitting participant age to be a likely proxy for previous vaccination. Participants were then sent to the laboratory with a form, where 2 mL of blood was collected in an ethylenediaminetetraacetic acid (EDTA) tube. The sample was spun for 5 min at 5000 revolutions per minute (rpm) and the plasma was used for rapid hepatitis B surface antigen (HBsAg) and hepatitis B surface antibody (anti-HBs) testing. Results were recorded on the form, sealed and handed over to the counsellor in the counselling room for post-test counselling. Those with negative HBsAg results were then referred to the vaccination room for the first adult hepatitis B vaccination dose. They were also scheduled for the subsequent second and third doses. Those with positive HBsAg results were referred to the EFSTH liver clinic for liver assessment (Figure 1). Due to delays in receiving anti-HBs test results and concerns about waning immunity even if an individual had been vaccinated previously, all participants with a negative HBsAg result were offered the initial dose of the hepatitis B vaccine.

Linkage to care

Among HCWs who tested positive for HBsAg, linkage to care involved visiting the EFSTH liver clinic at least once after screening for a liver disease assessment. As part of the assessment, those HBsAg-positive participants had an ultrasound scan performed by the clinical investigator. Blood samples were also collected for haematology, biochemistry and hepatitis B virus deoxyribonucleic acid (HBV DNA).

Data analysis

Data were entered using a tablet and then imported and analysed using Kobo Collect. Simple proportions were calculated for discrete demographic characteristics and outcome variables. For the main outcome variables (proportions of positive and negative serology test results), 95% confidence intervals (CIs) were also determined. In analysing the serology results, the participants were also divided into two groups based on age. The first group includes younger HCWs (≤ 33 years) who were born after the introduction of the nationwide hepatitis B vaccine into the expanded programme of immunisation in 1990. The second group, the older cohort (> 33 years), were born before the introduction of the nationwide hepatitis B vaccination into the expanded programme of immunisation.
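For reference, the reported interval estimates are consistent with Wilson score intervals for the implied counts (about 3 of 70 HBsAg-positive and 42 of 70 without detectable anti-HBs; these counts are inferred from the reported percentages, not stated explicitly). A minimal sketch, assuming that method:

```python
# Wilson score 95% CI for a binomial proportion (a sketch; the paper does not name
# the exact CI method, but this reproduces the reported 1.5-11.9% and 48.3-70.7%).
import math

def wilson_ci(successes, n, z=1.959964):
    p = successes / n
    denom = 1.0 + z**2 / n
    centre = p + z**2 / (2.0 * n)
    half = z * math.sqrt(p * (1.0 - p) / n + z**2 / (4.0 * n**2))
    return (centre - half) / denom, (centre + half) / denom

print(wilson_ci(3, 70))    # ~ (0.015, 0.119): HBsAg positivity 4.3% (1.5-11.9%)
print(wilson_ci(42, 70))   # ~ (0.483, 0.707): no detectable anti-HBs 60.0% (48.3-70.7%)
```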
Vaccination after screening

Of the total, 65 (86.7%) participants, who all tested negative for HBsAg, were vaccinated, and 5 (3.3%) were not vaccinated. Two of those who were not vaccinated tested negative for HBsAg and three tested positive for HBsAg. Of the three testing positive for HBsAg, two reported never being vaccinated and one reported being fully vaccinated.

Discussion

Vaccinating health workers is a cost-effective investment and a prerequisite for building a robust health workforce. As such, WHO recommends the development and implementation of national policies on vaccination of health workers. 24 For hepatitis B, such a plan would include systematic assessment of serology markers for new workers, the provision of vaccine at no cost to the worker if needed, a confidential health record system, and linkage to care for those who are already infected, also at no cost, with ongoing follow-up and treatment as needed. These actions bolster the stability and resilience of emerging health systems and directly impact the achievement of the United Nations Sustainable Development Goals (SDGs) targets. 25 The Gambian Health Ministry has been working towards these goals for more than 10 years. Although the Gambia has been a location of infant hepatitis B vaccination trials since the late 1980s, with the lack of documentation, barriers to vaccine coverage, and the likely waning of immunity over the 30 ensuing years, the immune status of the current health workforce is unknown.

Examining first those likely at highest risk for exposure to blood and body fluids, this pilot found a prevalence of hepatitis B infection of 4.3%. There were no positive HBsAg findings among the younger cohort. The prevalence in the older cohort was 7.1%. Although our population was not intended to be a representative sample, this result is similar to other studies done in the Gambia among the adult population, which showed an 8.2% prevalence of hepatitis B. 26 All of the hepatitis B-positive cases in this pilot were in the older cohort. These HCWs were born before 1990, when nationwide HBV vaccination was introduced into the Gambia's EPI. Identifying these cases prior to presentation of clinical symptoms permits early treatment and may result in better health outcomes, allowing them to continue to work. 27 This study also showed that 60.0% of the HCWs tested were negative for anti-HBs. This finding demonstrates that a majority of HCWs in this pilot study were not protected against HBV infection, even as they are at high risk of exposure to potentially infectious patients. While these results may not be representative of the Gambian health workforce, it is the only estimate available and suggests the need to extend this serology testing nationally and vaccinate where needed.

Limitations

Limitations of this pilot include the use of self-reported data for the history of vaccination and the use of age as a proxy for the likely vaccinated and likely unvaccinated participant sub-groups. We also used rapid tests for antigen and antibody determinations, which may not be as sensitive and reliable as laboratory-based enzyme-linked immunosorbent assays (ELISA) and immunoassays. However, the products that we used had very high sensitivity and specificity, and the benefit of the rapid tests at the point of care permitted valuable onsite clinical decision-making.
The significant prevalence of HBsAg-positive participants and the low prevalence of anti-HBs protection in this small pilot population may not be representative of the risk in the larger Gambian health workforce. The participants in this pilot were deliberately selected due to their high risk of exposure to blood and other potentially infectious body fluids, and thus may represent a worst case. However, the duty station in Banjul may have afforded more opportunities for episodic vaccination than may have occurred in more remote areas.

Conclusion

This pilot study documents the lack of hepatitis B antibody protection in a large proportion of the HCW participants. These results emphasise the need to assess the larger HCW population in the already challenged Gambian health system to ensure protection against this vaccine-preventable disease. Thus, there is an urgent need for implementing robust policies for systematic HBV screening and vaccination among HCWs throughout the Gambia. This will provide benefits at both the individual and health systems strengthening levels.

Disclaimer

The views and opinions expressed in this article are those of the authors and are the product of professional research. They do not necessarily reflect the official policy or position of any affiliated institution, funder, agency, or that of the publisher. The authors are responsible for this article's results, findings and content.

TABLE 2: Prevalence of hepatitis B surface antigen.
TABLE 3: Hepatitis B antibody test results.
2024-08-04T15:03:48.458Z
2024-07-24T00:00:00.000
{ "year": 2024, "sha1": "285accd15b0bd8b91dd804a6c83dc0f426434551", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "09114adb0f577dfc7e683482c01a55e32cfe1ac9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
43626541
pes2o/s2orc
v3-fos-license
Implicit-OR tiling of deoxyribozymes: Construction of molecular-scale OR, NAND, and four-input logic gates*

We recently reported the first complete set of molecular-scale logic gates based on deoxyribozymes. Here we report how we tile these logic gates and construct new logic elements: OR, NAND, and the first element with four inputs, (i1 ∧ i5) ∨ (i2 ∧ i6). Tiling of logic gates was achieved through a common substrate used by the core deoxyribozymes; degradation of this substrate defines the output. This kind of connection between logic gates is an implicit-OR tiling, because it suffices that one component of the network is active for the whole network to give an output of 1.

INTRODUCTION

We recently reported a complete set of molecular-scale logic gates 1 based on nucleic acid catalysts. 2 These gates have oligonucleotides as both inputs and outputs, and they were constructed by modular design, 3 combining stem-loop controlling elements of molecular beacons 4 and deoxyribozymes (DNA-based nucleic acid catalysts). 5 The concordance of inputs and outputs of these gates allows tiling of gates in solution 1 and potentially performing calculation of arbitrary Boolean formulae. In our initial report we presented an example of tiling two ANDNOT (also known as NOTIF or ONLY) gates into an XOR system. We now present the following results: (1) tiling of two detector (YES) gates into an OR system; (2) tiling of two NOT gates into a NAND system; (3) tiling of two AND gates into the first ever reported four-input system, (i1 ∧ i2) ∨ (i3 ∧ i4). The tiling is accomplished around a common substrate, i.e., the gates operate in parallel, preserving single-layer disjunctive normal form. This type of tiling is called implicit-OR tiling, because activation of either constituent gate is sufficient to render the system active. Furthermore, with regard to the activity of individual gates as inputs, the truth table describing the relationship between inputs and output corresponds to the truth table of an OR gate.

A YES gate (Fig. 1, YESi1 and YESi2), initially reported as a catalytic molecular beacon, 4 consists of a single stem-loop which inhibits the catalytic cleavage of substrates. Binding of the complementary input oligonucleotide to the loop region opens the stem, releases the substrate recognition region for binding with substrate, and initiates the cleavage reaction. Thus, YES gates behave as sensors for the presence or the absence of oligonucleotides, with cleavage products as outputs. For convenient read-out of the output we introduced a fluorogenic substrate S to follow deoxyribozyme reactions. 4 In this substrate a fluorophore (fluorescein, F, λem = 520 nm, λex = 480 nm) is efficiently quenched by a "dark" quencher without any emission of its own (Black Hole 1, BH1). Cleavage of the substrate separates the fluorophore from the quencher, resulting in a several-fold 6 increase in fluorescence.
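Before the individual constructs are described, the Boolean content of implicit-OR tiling can be summarized in a few lines of code. This sketch is ours and purely illustrative: it models only the logic of the three tiled systems discussed below, not the enzyme kinetics or the fluorescence readout.

```python
# Implicit-OR tiling as Boolean logic: two gates share one substrate, so the
# system outputs 1 whenever at least one constituent gate is active.
from itertools import product

def or_system(i1, i2):                  # two YES gates tiled on a common substrate
    return int(i1 or i2)

def nand_system(i3, i4):                # two NOT gates tiled: (not i3) or (not i4) == not (i3 and i4)
    return int((not i3) or (not i4))

def four_input_system(i1, i5, i2, i6):  # two AND gates tiled: (i1 and i5) or (i2 and i6)
    return int((i1 and i5) or (i2 and i6))

for a, b in product((0, 1), repeat=2):
    print(a, b, "OR:", or_system(a, b), "NAND:", nand_system(a, b))
print(four_input_system(1, 1, 0, 0), four_input_system(1, 0, 0, 1))  # 1, 0
```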
In this work, we tiled two YES gates with different inputs around a common substrate. This arrangement produces a system that yields an output (cleavage product) if either of the two YES gates is activated. Such a system represents a quintessential implicit-OR tiling, and it performs an OR function, with either of the two inputs producing an active-form deoxyribozyme, triggering the cleavage of the substrate and formation of the cleavage product. Figure 1 shows the structures of the two gates tiled around the common substrate, and a schematic representation of the OR truth table. The first gate, YESi1, is activated by the input oligonucleotide i1 and does not sense the presence of the second input oligonucleotide i2. The second gate, YESi2, has exactly the opposite behavior, i.e., it is inert in the presence of i1 and reports the presence of i2. Combined in solution, these two gates show real-time fluorescence changes (Fig. 2) consistent with performing a molecular-scale a ∨ b calculation: fluorescence increases rapidly in the presence of one or both inputs, while fluorescence is unchanged without inputs. Finally, the increase in fluorescence is fastest in the presence of both inputs, as the concentration of active deoxyribozyme species is then the highest.

NOT gates (e.g., NOTi3 in Fig. 3) have a stem-loop attached to the catalytic core. Recognition of an oligonucleotide input complementary to the stem opens up the loop, distorting the catalytic core and rendering the deoxyribozyme inactive. 1 Two NOT gates (NOTi3 and NOTi4 in Fig. 3) can be tiled to share a common substrate, analogously to two YES gates. This implicit-OR tiling leads to an active gate unless both inhibitory oligonucleotides are present. The presence of only one inhibitory oligonucleotide inhibits only one of the constituent gates, leaving the other one active. The two gates acting in unison perform a molecular-scale ¬(a ∧ b) Boolean calculation, and the whole system behaves as a NAND gate, with a truth table given in Fig. 3. Interestingly, our ability to tile two NOT gates in implicit-OR fashion (i.e., ¬a ∨ ¬b) into a NAND gate (i.e., ¬(a ∧ b)) is a remarkable demonstration of the validity of De Morgan's laws on the molecular scale. In Figure 4b we present the actual experiment, in which changes in the presence of all combinations of inputs support NAND behavior.

AND gates require the presence of two input oligonucleotides to be fully active. They are constructed by adding one inhibitory stem-loop at the 5' end and a second inhibitory stem-loop at the 3' end of the deoxyribozyme. The length of the stem-loops can be adjusted to reduce the background cleavage reaction that leads to imperfect digital behavior, i.e., cleavage in the presence of only one input oligonucleotide. Two AND gates could be tiled together around a common substrate to achieve the first-ever reported molecular element with four inputs. We provide here an example of structure-optimized gates i1ANDi5 and i2ANDi6 (Fig. 5), which we tile into the (i1 ∧ i5) ∨ (i2 ∧ i6) system that is active if either of the constituent AND gates is active, i.e., matched inputs (i1, i5) or (i2, i6) must be present pairwise (Fig. 6).

Fig. 5. Two AND gates (i1ANDi5 and i2ANDi6) tiled around a common substrate (S) to yield the (i1 ∧ i5) ∨ (i2 ∧ i6) system. Input oligonucleotides are complementary to loops (bold font). This system is active only when one or both of the constituent gates senses both of its input oligonucleotides.
Using the same principles, we could now construct alternative systems in which any combination of two inputs would activate fluorogenic cleavage (not shown). For example, a system that would be active if any two or more out of four oligonucleotides are present could be as easily defined through the implicit-OR connection of six AND gates, one for each of the six possible pairs of inputs.

In conclusion, we demonstrated that implicit-OR tiling of individual gates around a common substrate is a valuable tool in constructing systems that perform Boolean calculations in solution. Some of our Boolean formulae are of unprecedented complexity in molecular-scale computations. We are now addressing the remaining issues in our approach to performing Boolean calculations of arbitrary complexity with molecular-scale logic gates in solution, including intergate communication.

EXPERIMENTAL All oligonucleotides were synthesized and PAGE purified by IDT DNA (Iowa, USA) and were used as received. Fluorescence measurements were performed on a Perkin-Elmer Victor 2 plate reader; each well contained a solution of gates at total concentrations of 250 nM, fluorogenic substrate at 2.5 mM concentration, and 20 mM Mg2+ ions in HEPES buffer (pH 7.4, 1 M NaCl). The corresponding inputs (or buffer for blanks) were added at a concentration of 2.5 mM to each well and the measurement was started immediately.

Fig. 6. Fluorescence changes of the (i1∧i5)∨(i2∧i6) system in the presence of inputs (from top to bottom): (i1,i5), (i2,i6), (i1,i6), (i1,i2), i1, i5, i2, i6. An increase is seen only when two matched inputs are present that activate any one of the constituent gates.

Our group recently constructed the first complete set of logic gates made of deoxynucleotide enzymes (deoxyribozymes). In this work we combine (tile) these logic gates and construct new elements: OR, NAND, and the first element with four inputs, (i1∧i5)∨(i2∧i6). We achieved the combination of logic gates by having the individual enzymes share a substrate, whose degradation defines the output of the gate. We call this arrangement of enzymes in solution implicit-OR tiling, because it suffices that at least one constituent enzyme is active for the whole circuit to give an output of 1.

Fig. 1. Two YES gates (YESi1 and YESi2) tiled around a common substrate (S) to yield an OR system. Input oligonucleotides are complementary to loops (bold fonts). Presence of the input oligonucleotides opens up the stem-loop and allows the substrate recognition process to complete. Consequent cleavage of the substrate results in an increase of fluorescence (larger bold F).

Fig. 2. Fluorescence changes (in relative units, FU) over time for the i1 OR i2 gate (250 nM total concentration of both components) in the presence of (from bottom to top): no inputs, i1, i2, both inputs.

Fig. 3. Two NOT gates (NOTi3 and NOTi4) tiled around a common substrate (S) to yield the i3 OR i4 system. Input oligonucleotides are complementary to loops (bold fonts). Presence of an input oligonucleotide opens up the stem-loop and destroys the catalytic core. Consequent cleavage of the substrate results in an increase of fluorescence (larger F).

Fig. 4. Fluorescence changes (in relative units) over time for the i3 NAND i4 gate (250 nM total concentration of both components) in the presence of (from top to bottom): no inputs, i3, i4, both inputs.

TILING OF DEOXYRIBOZYMES: CONSTRUCTION OF MOLECULAR LOGIC GATES OR, NAND AND A FOUR-INPUT ELEMENT

MILAN N. STOJANOVIĆ a, DRAGAN B.
NIKIĆ a and DARKO STEFANOVIĆ b

a Division of Experimental Therapeutics and Clinical Pharmacology, Department of Medicine, Columbia University, Box 84, 630 W 168th Street, New York, NY 10032, USA and b Department of Computer Science, University of New Mexico, Albuquerque, New Mexico 87131, USA
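Viewed purely at the level of truth tables, the gate networks described above compute ordinary Boolean functions whose outer connective is an OR over the constituent gates. The short Python sketch below is an illustration of that logic only; it is a plain truth-table model with function names of our own choosing, and it does not attempt to represent substrate concentrations, kinetics, or fluorescence read-out.

```python
# Illustrative truth-table model of implicit-OR tiling (logic only, no kinetics).
# Gate and input names follow the paper's notation; the code itself is hypothetical.

def yes_gate(i):          # YES gate: active when its input is present
    return i

def not_gate(i):          # NOT gate: inactivated by its input
    return not i

def and_gate(a, b):       # AND gate: needs both of its inputs
    return a and b

def implicit_or(*gate_outputs):
    # Implicit-OR tiling: the shared substrate is cleaved (output = 1)
    # if at least one constituent gate is active.
    return any(gate_outputs)

# OR system from two YES gates
def or_system(i1, i2):
    return implicit_or(yes_gate(i1), yes_gate(i2))

# NAND system from two NOT gates: (not i3) or (not i4) == not (i3 and i4)
def nand_system(i3, i4):
    return implicit_or(not_gate(i3), not_gate(i4))

# Four-input system from two AND gates: (i1 and i5) or (i2 and i6)
def four_input_system(i1, i2, i5, i6):
    return implicit_or(and_gate(i1, i5), and_gate(i2, i6))

if __name__ == "__main__":
    from itertools import product
    # Check De Morgan's law for the NAND construction over all input combinations.
    assert all(nand_system(a, b) == (not (a and b))
               for a, b in product([False, True], repeat=2))
    # The four-input system responds only to matched input pairs.
    print(four_input_system(True, False, True, False))   # (i1, i5) present -> True
    print(four_input_system(True, False, False, True))   # (i1, i6) present -> False
```

In the same model, the "any two or more out of four" variant mentioned in the text would correspond to an implicit_or over the six pairwise and_gate combinations of the four inputs.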
2018-03-19T16:31:18.769Z
2003-01-01T00:00:00.000
{ "year": 2003, "sha1": "1760de7ec4de44d7ed2c2aa1b11b5eea7663c8ce", "oa_license": null, "oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0352-51390305321S", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "1760de7ec4de44d7ed2c2aa1b11b5eea7663c8ce", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
154242781
pes2o/s2orc
v3-fos-license
Right to Place: A Political Theory of Animal Rights in Harmony with Environmental and Ecological Principles The focus of this paper is on the "right to place" as a political theory of wild animal rights. Out of the debate between terrestrial cosmopolitans inspired by Kant and Arendt and rooted cosmopolitan animal right theorists, the right to place emerges from the fold of rooted cosmopolitanism in tandem with environmental and ecological principles. Contrary to terrestrial cosmopolitans—who favour extending citizenship rights to wild animals and advocate at the same time large-scale humanitarian interventions and unrestricted geographical mobility—I argue that the well-being of wild animals is best served by the right to place theory on account of its sovereignty model. The right to place theory advocates human non-interference in wildlife communities, opposing even humanitarian interventions, which carry the risk of unintended consequences. The right to place theory, with its emphasis on territorial sovereignty, bases its opposition to unrestricted geographical mobility on two considerations: (a) the non-generalist nature of many species and (b) the potential for abuse via human encroachment. In a broader context, the advantage of the right to place theory lies in its implicit environmental demands: human population control and sustainable lifestyles.

INTRODUCTION Is it desirable to have different relational principles with animals based on their wild or domesticated status? Sue Donaldson and Will Kymlicka answer in the affirmative in Zoopolis: A Political Theory of Animal Rights (2011) and in "A Defense of Animal Citizens and Sovereigns" (2013). Their argument for animal rights is based on relational obligations involving a "group-differentiated citizenship" whereby animals are classified into three categories: citizens (domesticated animals), denizens (animals living on the outskirts of our communities, such as raccoons) and sovereign animals (wild animals). According to this model wild animals are to have territorial sovereignty and to live free from human interference, whereas domesticated animals are to have access to various rights such as shelter, food provision, and health care. A number of authors object to the above classification from a cosmopolitan perspective. Their central argument is that denying citizenship status (and hence the benefits that come with it) to wild animals is morally arbitrary. According to these authors, the interests of all animals should matter equally. Their supporting argument is that wild animals, on account of their "sovereign" status, risk acquiring the label of outsiders, which, from the viewpoint of an Arendtian conception of citizenship rights, carries risks of stigmatization and disregard. In place of group-differentiated citizenship, these authors propose common status and rights, including norms of Kantian-like universal hospitality (free mobility) and universal benevolence (including duties of humanitarian intervention). In the context of the debate on group-differentiated citizenship this paper will argue that what cosmopolitans perceive as unfair disadvantages facing wild animals under the sovereignty model are in fact benefits. In making this argument, this paper will draw upon recent work in environmental and ecological studies. Prior to delving into this discussion, I would like to point out that Donaldson and Kymlicka are not the first authors to explore animal theories from the perspective of political theory.
1 They are, however, the first to explore a theory of animal rights based on a group-differentiated citizenship approach. Zoopolis is receiving a substantial amount of interest. 2 From the ensuing debates emerge two visions of a political theory of animal rights: terrestrial cosmopolitanism and the right to place theory. Terrestrial cosmopolitanism is based on the notion of universal mobility and hospitality. The right to place is based on the notion of an "equal right, individual or collective, to possess a particular place." 3 An inquisitive reader might be tempted to ask: "Given that the term 'rooted cosmopolitanism' 4 already exists, why do we need a new term, 'right to place'?" First of all, I would like to point out that it is Donaldson and Kymlicka (2013) who actually embraced this term within the context of animal rights. Given the fact that Kymlicka (2012) is thoroughly familiar with the term "rooted cosmopolitanism," 5 it is highly unlikely that he would have chosen to adopt a new term when an established one would have sufficed. This implies that the term "rooted cosmopolitanism" did not suffice for the purposes of developing a political theory of animal rights. Perhaps, when faced with the demands of terrestrial cosmopolitanism, rooted cosmopolitanism (being more of a moral view) had somewhat less "political bite" than the right to place theory. This is not the same as discrediting rooted cosmopolitanism. On the contrary. One could defend rooted cosmopolitanism by asserting that one of its features is that it paves the way for establishing something like a "right to place," which, in turn, has enough force to deter otherwise plausible cosmopolitan claims to unrestricted freedom of movement in the case of animal rights theories. 6 The disagreement between the two camps divides into two key practical disagreements: (a) the right to exclude (mobility rights vs. territorial sovereignty) and (b) the duty to intervene (humanitarian interventions). For right to place theorists, unlimited mobility carries serious risks, especially for indigenous populations. To avoid misunderstandings it should be pointed out that, with some exceptions (migratory birds, etc.), the territorial mobility of animals is the result of anthropic interference and not the result of their own efforts or capacities. Human-enabled animal migrations-be they intentional or accidental (e.g., stowaways)-have been catastrophic for indigenous animal populations. The negative impacts of alien and invasive species are well-documented in the ecological literature and as such there is no need for further elaboration here. 7 In the case of human mobility into wildlife territory, however, more could be said. Human encroachment takes place as a result of human overpopulation and technological innovations (e.g., roads, bridges) and it leads to human-wildlife conflicts. Human-wildlife conflicts reveal the Achilles heel of the Kantian notion of hospitality as espoused by terrestrial cosmopolitans. They not only contradict but also preclude Kant's principle of hospitality in the realm of human-wild animal interactions. This arises from the fact that terrestrial cosmopolitans invoke Kant's principle of universal hospitality when in fact Kant only endorsed peaceful visitation rights and not settlement rights.
Universal-mobility-rights-becoming-settlement-rights are especially inapplicable in the case of wild animals because they settle and become established in new territories (the case of hippopotamuses in Colombia) 8 and eradicate native species (the case of Burmese pythons in Florida's Everglades). 9

TERRESTRIAL COSMOPOLITANISM AND WILD ANIMAL RIGHTS What follows is a brief literature review for those not familiar with the debate between terrestrial cosmopolitans and right to place theorists. In his article "Perpetual Strangers: Animals and the Cosmopolitan Right" (2013), Stephen Cooke objects to Donaldson and Kymlicka's (2011) group-differentiated citizenship proposal by arguing that if wild animals (sovereigns) are not given the same rights as domesticated animals (citizens) they (wild animals) will be left in an inherently vulnerable position. Cooke's argument is heavily influenced by Hannah Arendt's Origins of Totalitarianism, where one encounters the argument that people without citizenship run the risk of marginalization because political rights are usually enforced via the mechanism of nation-states. Those without the protection of a state run the risk of being pushed outside the "sphere of moral concern" which is usually to be found within the boundaries of nation-states. According to Cooke a better alternative would be the adoption of a cosmopolitan approach based on Kant's "right of universal hospitality." In particular, Cooke suggests a ius cosmopoliticum (cosmopolitan right) whereby animals that conduct themselves peaceably should not face hostile treatment from humans. According to this "non-speciesist" hospitality duty, we should not harm animals straying into our livable spaces in search of either food or shelter-with the exception of dangerous predatory animals. While this is fine for harmless animals such as chipmunks and deer, it would be highly problematic in the case of predatory animals such as wolves and bears, who would not hesitate to attack if threatened. Apart from that, and given our evolutionary fears, few humans would be willing to tolerate predatory animals within their "livable spaces" even if such animals did not engage in hostile behaviour. Under such circumstances the "defence principle" would risk being misinterpreted or abused, leading to a carte blanche to kill any and all wild predatory animals found wandering in human settlements. To be sure, this is the de facto policy of many human societies. A historical perspective reveals that Kant's ethics of hospitality, as articulated in Perpetual Peace, is central to cosmopolitan animal right theorists and as such it is worth quoting at length. It reads as follows: We are speaking here, as in the previous articles, not of philanthropy, but of right; and in this sphere hospitality signifies the claim of a stranger entering foreign territory to be treated by its owner without hostility. The latter may send him away again, if this can be done without causing his death; but, so long as he conducts himself peaceably, he must not be treated as an enemy. It is not a right to be treated as a guest to which the stranger can claim… but he has a right of visitation.
This right to present themselves to society belongs to all mankind in virtue of our common right of possession on the surface of the earth on which, as it is a globe, we cannot be infinitely scattered, and must in the end reconcile ourselves to existence side by side: at the same time, originally no one individual had more right than another to live in any one particular spot. 10 In the above passage one discerns two distinct assertions: (1) hospitality ethics 11 and (2) universal mobility. The assertion of universal mobility comes across in Kant's statement that "originally no one individual had more right than another to live in any one particular spot." Alasdair Cochrane in "Cosmozoopolis: The Case Against Group-Differentiated Animal Rights" (2013), 12 takes as a given the notion that the right of domesticated animals to live within the space of human society is derived "from their interest in a safe and secure environment conducive to their well-being." 13 This, in turn, leads him to the conclusion that "since all animals have a basic interest in a safe and secure environment" there is a prima facie case for recognizing that all animals have a right to the type of residency that ensures their safety. 14 Needless to say, Cochrane is critical of a group-differentiated theory of animal rights. His objection, similar to Cooke's, stems from the belief that it denies to 'outsiders' (wild and liminal animals) their just entitlements while unfairly privileging the rights of 'insiders' (domesticated animals). A more ideal theory of animal rights, according to this author, would be one centred around a cosmopolitan model where the rights of all animals would be "better determined" because it would mean the attachment of rights to individual animals "according to their capacities and interests, as opposed to their membership in different groups." 15 In addition Cochrane advocates humanitarian interventions for a wide variety of natural disasters ranging in scope from predatory behaviour to territorial rivalry. Oscar Horta, in "Zoopolis, Intervention, and the State of Nature" (2013), is likewise in favour of humanitarian interventions in wild animal communities. Horta's advocacy for humanitarian interventions arises out of his concern for wild animals at the individual level. 16 For example, given the fact that the majority of animals are r-strategists, 17 an accurate portrait of life for animals living in the wild, according to Horta, would be a "humanitarian catastrophe" resembling that of failed states. 18 Consequently, this Hobbesian-like state of nature lies behind his objections to Donaldson and Kymlicka's defence of limited intervention. He argues that if autonomy and flourishing form the basis of Donaldson and Kymlicka's decision to assign sovereignty to communities of wild animals, then that is counterintuitive given the fact that one of the prerequisites to autonomy and flourishing is survival: an impossibility for countless wild animals without the benefit of humanitarian interventions. On the question of whether or not "excessive risk avoidance" is bound to impoverish the quality of life for wild animals, Horta invokes the concept of benevolent paternalistic intervention. Hence, and in reply to Donaldson and Kymlicka's analogy that sheltering children from risky activities impoverishes their quality of life, Horta responds by using his own analogy of children playing in waters filled with crocodiles.
While the children might enjoy that activity, we nonetheless remove them for their own safety. While Donaldson and Kymlicka (2013) reply to various points raised by their critics, for the purpose of this paper only two will be examined: 1) the nature and scope of sovereignty for wild animals and 2) humanitarian intervention in wildlife communities. On the specific topic of universal territorial mobility (read: Kantian hospitality ethics) Donaldson and Kymlicka find the premise "that there are very strong individual rights to mobility, and only very weak collective rights to territory" problematic. 19 Such a view, according to them, implies that human and nonhuman animals alike possess an inherent right to global movement, which in turn implies that there are no inherent claims-individual or otherwise-to territorial possession. Such a view is termed "terrestrial cosmopolitanism" by Avery Kolers in "Borders and Territories: Terrestrial Cosmopolitanism vs. a Right to Place" (2012). Terrestrial cosmopolitans, according to Kolers, assume the "antecedent common ownership of the entire world" which is not "a common ownership thesis, but rather the thesis that, antecedently or presumptively, no one has any special claim to be, or be sovereign, or control territory, anywhere in particular." Kolers is critical of terrestrial cosmopolitanism and he goes as far as to claim that the "ideal of equality, understood as universal equal access to the entire world… [is] a sham." 20 A better alternative for this author would be the so-called "right to place" view, which is defined as an "equal right, individual or collective, to possess a particular place." 21 As is to be expected, the right to place theory is seen by Kolers as preferable to terrestrial cosmopolitanism because with the former people have at least a "claim to a place of their own." 22 Donaldson and Kymlicka (2013) identify themselves as right to place theorists and suggest that between them and terrestrial cosmopolitans lies a dividing line that happens to be one of the fundamental dividing lines in contemporary political philosophy. 23 They note that this division is not about whether or not we are obligated by principles of justice to consider the interests of non-members outside our territorial boundaries (that is taken as a given) but rather "what those interests are." 24 According to them, on the one hand we have interests as individuals in unhindered mobility, including the right to move out of our existing community and move into the territory of another community-an interest that can only be satisfied if we prevent communities from restricting in-migration. On the other hand, we have interests as members of bounded communities in being able to effectively govern ourselves and pursue our shared way of life on our territory-an interest that can only be satisfied if bounded communities are able to regulate entry into their territory. 25 For terrestrial cosmopolitans, the interest in individual mobility takes precedence over the interest in collective autonomy. Donaldson and Kymlicka object to this due to its potential for abuse. According to them, if the case for terrestrial cosmopolitanism is dubious in the case of human communities, it is entirely improbable in the case of non-human animals because if terrestrial cosmopolitanism were an "accomplice of injustice in the human case, it is an absolute catastrophe for most animals."
26 In the specific case of human communities, Donaldson and Kymlicka reference the European invasion and colonization of the Americas (although they could have easily added Africa) 27 as an example of terrestrial cosmopolitanism gone terribly wrong. While they acknowledge that terrestrial cosmopolitanism does not ignore the interests of indigenous populations, they are of the mind that it does not protect them either. Simply put: "Without recognition of an antecedent right to place, these interests are all-too-easily trumped by the interests of larger or stronger groups seeking new territories for their pleasure or profit." 28 According to the terrestrial cosmopolitan paradigm, an individual wild animal would have universal mobility and the freedom to move to a different geographical location, including into human communities. At the collective level, however, wild animals would lose their right to keep outsiders-including human settlers-out of their territorial boundaries. However, the scenario of allowing wild animals entry into human communities and humans entry into wildlife communities is seen by Donaldson and Kymlicka as an inherently unfair trade-off. While Donaldson and Kymlicka are correct in their evaluation-wild animals do avoid human communities-more could be said in support of their position in light of recent ecological and environmental studies. What follows is such an undertaking.

HUMAN-WILDLIFE CONFLICTS Unlike domesticated animals, which are dependent on humans for their survival, or liminal animals, which manage an independent co-existence, wild animals are neither dependent nor capable of co-existence with humans. When compulsion or events beyond their control force an encounter, they are either harmed or killed. Known as human-wildlife conflicts in the environmental literature, these encounters are occurring with alarming frequency in places like Africa, South America, and India. 29 In the specific case of Assam, India, one eyebrow-raising report speaks of pythons entering bathrooms and bedrooms, sambar deers [sic] running through courtyards, clouded leopards sneaking into backyards at night and carrying off livestock or pets. Pangolins, jungle cats, civet cats, foxes and wild boars repeatedly stray onto the lanes and bylanes of Guwahati, the capital. Monkeys running amok in kitchens is a routine occurrence in hillside areas. Outside of the city, elephants, tigers, one-horned rhinos and gaur, the Indian bison, are occasionally spotted. 30 Not surprisingly, wild animals suffer disproportionately during these encounters. The same report bears witness to various outcomes of human contact with leopards: leopards die from tranquilizer overdoses, are butchered by locals, taken to zoos or released back into the wild (where, one presumes, it would be a matter of time before the next unfortunate encounter). According to the same testimony, human overpopulation and urbanization are to blame. Population "swelled from 14 million in 1971 to 31 million in 2011," while "frenzied urbanisation gobbled up 30 percent of the state's forestland" in Assam. 31 To a large degree human-wildlife conflicts make a mockery of terrestrial cosmopolitan animal rights. A case in point is Cooke (2013) who, as we saw above, implied that cosmopolitan hospitality rights become null and void if wild animals are not peaceful. On the basis of a hierarchy of moral values, there can be little doubt that Cooke is correct.
When one is faced with the dilemma of whether to save a human life or to save a non-human life, the hierarchy of moral values dictates that we favour our own. This speciesist favouritism stems from our membership in the human species. The above is also known as the "Burning House Dilemma." It involves the hypothetical scenario of a burning house with two rooms-one containing a human and the other a dog-but only enough time to save one. Whom would you choose to rescue? As Steven Best insightfully points out, this question is often asked of Animal Rights Advocates (ARA) with the intent of finding inconsistencies in their values. Any answer would be a losing proposition, for if "you answer that you would save the human being, your interlocutor glibly and gleefully derides you as a hypocrite. If you answer you would save the dog, you are vilified as a miscreant and deviant misanthrope with warped values." 32 A similar ethical dilemma, I would argue, is to be found in human-wildlife conflict cases, which are often the result of human population growth and encroachment into wildlife habitat. Once a situation reaches a critical level-such as the one seen in India's Assam province-we find ourselves trapped in an ethical dilemma. Whom do we favour, humans or wild animals? This ethical trap can be avoided via holistic, preventative policies that respect the collective territorial rights of wild animals. These policies would also resolve Cooke's objection that his argument should not be misinterpreted as stemming from species membership or hierarchical valuing of life, but should be understood on the principle of self-defence. As he puts it, "when an innocent is threatened by an attacker, they have the right to defend themselves, even if that attacker is innocent." 33 True enough. However, I would still argue that in the specific case of human encroachment into wildlife territory the argument of self-defence becomes null and void. A thief suing a homeowner for bodily injury is a laughable concept. 34 The same holds in cases where wild animals enter into human settlements. Therefore, the principle of self-defence is not applicable to cases of wild animals intruding into human settlements as a result of human-caused habitat loss (e.g., logging, dam-building, farming). If I may be allowed a short digression, I would like to point out that the Anthropocene Era has been anything but kind to nonhuman animals. To quote Edward Wilson, the founder of sociobiology, the "human species came into being at the time of greatest biological diversity in the history of the earth" but as we expand and modify the natural environment, we are "reducing biological diversity to its lowest level since the end of the Mesozoic era, 65 million years ago." 35 Adding validation to Wilson's argument is the latest report by the WWF stating that, as a result of anthropic activities, wildlife populations have been reduced by half in the last 40 years. 36 Worse, world population is growing at a faster rate than previously thought: 11 billion by 2100 according to the findings of the latest United Nations study. 37 Yet all is not lost. Edward Wilson calls for half of our planet to be set aside as permanently protected areas for wildlife. This idea has been circulating among conservationists for some time but is now slowly gaining momentum in the wider community. 38 Furthermore, human population control is no longer a taboo in political philosophy. Emerging literature such as Sarah Conly's One Child: Do We Have a Right to More?
(forthcoming) questions the (liberal) opposition to human population regulation. The right to have a family and children, according to her, does not entail prima facie the right to have as many children as one wishes. If uncontrolled population growth is detrimental to our collective wellbeing, placing limits on individuals and their reproductive rights is justifiable. Moreover, economic growth models are not the sacrosanct principles that they once were. Degrowth or décroissance-to use its original term as coined by French radical economists-is an emerging socioeconomic and political movement that challenges many prevalent consumerist and capitalist ideas from the perspective of ecological economics. 39 Critical works that support either the movement or some of its main premises are emerging. This body of literature includes works that argue that there are environmental limits to economic growth 40 and that our biosphere is unable to sustain the present-day global system of production. 41 There are scathing critiques of neo-classical economics, 42 arguments that economic degrowth is already here due to dwindling oil supplies, 43 warnings that unless there is a controlled process of decreasing consumption we will soon be faced with an ecological disaster, 44 and warnings of an impending human catastrophe as our demands surpass the earth's natural resources. 45 Empirical evidence suggesting that past human societies have collapsed as a result of unsustainable practices adds an aura of urgency to this issue. 46 The notion that developed countries can continue consuming finite resources with environmental impunity is simply no longer acceptable. 47

*** I will now return to the topic of cosmopolitan animal right theorists to say a few words in their defence. Their Kantian-derived ethics of hospitality and their motivations are admirable. That being said, in Perpetual Peace Kant excludes hospitality as a right of residence (Gastrecht) and he limits it to the right of visitation (Besuchsrecht). 48 Was Kant's reluctance to admit to, or call for, a universal right of entry in some ways reflective of his experience with colonialism? 49 Whatever the case might be, while Kant allowed for hospitality, he also held that 'visitors' "should be allowed to stay until conditions for return" to their homeland were acceptable. 50 Hence, in the case of terrestrial cosmopolitans who advocate mobility rights for wild animals, Kant's premises are simply not met. To begin, 'mobility rights' for invasive species are catastrophic for indigenous wild animals. Examples abound, but that of the Burmese python, which is decimating native alligators in the Florida Everglades, is sufficient. Provided that the new territory is compatible with their old one, alien species go on to thrive and breed, thereby marginalizing or driving to extinction native species. Again, examples abound, but the proliferation of the small Indian mongoose, which has become established in the islands of Mauritius, Fiji, and the West Indies, is a case in point. 51 Secondly, mobility that leads into territorial entry occurs mostly, if not always, as a result of anthropogenic activities. Whether such introductions are intentional or accidental is beside the point; ultimately humans are responsible for the resulting harm to the indigenous wildlife populations. Consequently, I would argue, the removal of invasive species is the sole exception in which human interventions in wild animal communities are justified.
That being said, and at the risk of misunderstanding, such interventions should not be occasions for the slaughter of 'alien' species. (On this point I find myself in agreement with cosmopolitan animal right theorists, who emphasize the inviolability of individual rights.) Unfortunately, however, and all too often, environmentalists sacrifice this principle for the health of ecological regions. To recall the case of Burmese pythons in Florida, an open hunting season was recently declared by that state's wildlife department, complete with financial rewards for their annihilation. 52 Worse yet, there is a growing movement in the hunting community (with the blessing of many ecologists and environmentalists) which not only allows but encourages and praises the hunting of invasive species as the "ultimate guilt-free diet." 53 This movement is problematic at many levels but especially insofar as the hunting of invasive species leads to the same animals altering "their behaviour in ways that make future encounters with predators less likely." 54 Put differently, they make capture and repatriation-the only option that would satisfy the tenets of environmentalists and animal rights advocates alike-far more difficult.

HUMANITARIAN INTERVENTION IN WILD ANIMAL COMMUNITIES Humanitarian intervention is another contested topic, especially in the arena of health care and safety rights. Whereas Donaldson and Kymlicka limit those rights to domesticated animals, terrestrial cosmopolitans want them extended to wild animal populations. Donaldson and Kymlicka resist such calls and defend their exclusion on the basis that protection from predation and natural food cycles will disrupt wild animals' way of life and impose radical restrictions on their freedom and autonomy. 55 They are of the mind that humanitarian interventions will require nothing less than "turning nature into a zoo, in which each species would have its own safe habitat and secure food supply at the price of having its mobility, reproduction and socialization tightly policed by human managers." 56 Their sentiment, similar to that of Nassim Nicholas Taleb, who argues, "Don't talk about 'progress' in terms of longevity, safety or comfort before comparing zoo animals to those in the wilderness," 57 speaks of the dangers associated with well-meaning but ultimately misguided animal rights policies that harm the same wild animals they seek to help. That said, what if one made the counter-argument that the zoo-ification (for lack of a more appropriate word) of wild animals would be a small price to pay if that meant that wild animals could live long-lasting, pain-free lives? Even if zoo-ification were deemed both feasible and desirable, would it ensure the well-being of wild animals? I would argue that it would not. To begin, there is the law of unintended consequences, 58 which, at the risk of oversimplification, holds that intervention in complex systems leads to unforeseen consequences. 59 Ecosystems are extremely complex systems with intricate interspecies relationships that have evolved over the course of millennia. Given the Byzantine nature of those symbiotic interactions, the risk of negative unintended consequences from interventions is high. As a safeguard, those advocating humanitarian interventions should be required to demonstrate that those interventions will not have any detrimental effects and, barring that, they should abide by the precautionary principle.
60 This principle has received extensive coverage in the literature and, as such, there is no need for further development. However, there is a small but significant exception involving human interference in the diet of wild captive animals, which, if terrestrial cosmopolitans have their way, would expand into mass humanitarian interventions for animals in distress. The precautionary principle can be traced back to the Latin primum non nocere (first, do no harm), which is itself traced back to the Hippocratic Oath, which reads: "διαιτήμασί τε χρήσομαι ἐπ' ὠφελείῃ καμνόντων κατὰ δύναμιν καὶ κρίσιν ἐμὴν, ἐπὶ δηλήσει δὲ καὶ ἀδικίῃ εἴρξειν." This has been interpreted as "I will apply dietetic measures for the benefit of the sick according to my ability and judgment; I will keep them from harm and injustice" 61 and "I will follow that system of regimen which, according to my ability and judgment, I consider for the benefit of my patients, and abstain from whatever is deleterious and mischievous." 62 Granted that interpretations vary, the general gist of the above passage is something along the lines of "I will use diets for the good of the patients, and I will exclude diets which harm the patients" and/or "I will use those dietary regimens which will benefit my patients according to my greatest ability and judgment, and I will do no harm or injustice to them." 63 The relevance of the Hippocratic Oath to humanitarian interventions lies in the stress it puts on regimen or διαιτήματα, as Hippocrates puts it. 64 Human-provided 'diet' and 'regimen' for wild animals include things like the type, amount, temperature, and texture of food, along with feeding frequency. Examples abound, but for the sake of our argument a single one should suffice. At the San Diego Wild Animal Park, a study involving a feeding experiment comparing commercial and carcass diets was carried out with 15 cheetahs. The study concluded that the cheetahs that were fed entire carcasses fared better, both psychologically and physically, than their counterparts, which were fed the 'traditional' commercial diet consisting of preprocessed horsemeat. The fact that the commercial diet was nutritionally balanced (i.e., contained added vitamins) further highlighted the study's findings. 65 In the specific case of oral health, it was discovered that the cheetahs that were fed processed foods did not incur sufficient 'wear and tear' on their teeth. Insufficient wear and tear on the teeth is said to lead to "focal palatine erosion, a disorder that occurs when an underused molar chips away at the upper palate, eventually boring a hole through the bone, which can then become infected." 66 In another study, this one involving lions, it was discovered that feeding following "gorge and fast" patterns was superior to that following frequent, daily patterns, both in terms of nutrition and behaviour effects. 67 (Not surprisingly, the frequent, daily feeding pattern of carnivores was and, in some zoos, still is the status quo.) The working hypothesis behind "gorge-and-fast" regimens is that they are beneficial because they mimic the feeding patterns found in nature, in which carnivores have evolved. 68 The above two examples illustrate some of the perils to be found in well-meaning interventions. The same concerns are applicable to natural predation.
While there is nothing wrong with aiding an individual wild animal-imagine yourself intervening to save a chipmunk from a hawk while on a hike 69 -the same type of intervention at the collective level would have deleterious effects for the entire ecosystem. Also, we should not forget that "removing predators has a cascade of effects on other populations, down to the plant life." 70

*** By the same token, reintroducing predatory animals into an ecosystem would have beneficial effects. This is something that is best illustrated using the example of Yellowstone National Park's grey wolves, which were reintroduced in 1995/1996 after a 70-year absence. Since their return, wolves have been hunting elk, which in turn has allowed for the rejuvenation of aspens and willows, which in turn made possible the return of beavers. 71 As a matter of fact, the ongoing heated debate regarding rewilding efforts in the UK is centred not on ecological concerns, but on agricultural, hunting, and fishing ones. To wit, farmers, hunters, and fishermen object to the reintroduction of wolves, bears, and lynxes for self-interested reasons. 72 One possible objection to the above could be made from the perspective of scientific progress. Namely, "now we know what interventions to make when we intervene." A counterargument would be to state that there are simply no limits to the ways in which we are "outsmarted" by nature; sometimes the negative downsides are simply too great for justification. 73 Ironically enough, as the final editing touches were put on this paper, I became aware of new scientific studies which debunk earlier studies hailing the ecological benefits of reintroducing wolves into Yellowstone National Park. 74 Apparently, after humans exterminated wolves nearly a century ago, elk grew so abundant that they all but eliminated willow shrubs. Without willows to eat, beavers declined. Without beaver dams, fast-flowing streams cut deeper into the terrain. The water table dropped below the reach of willow roots. Now it's too late for even high levels of wolf predation to restore the willows. 75 In other words, earlier studies began (correctly) reporting ecological improvements, but (mistakenly) assumed that those improvements were going to continue until the system was completely restored. (In all fairness to them, they did not have the benefits of the hydrological studies.) While some willows began regenerating and some beavers began returning to the park, reestablishment will not be possible: changes to the fluvial system make full restoration of the riparian ecosystem an impossibility. 76 Beyond the initial disappointment, not only for Yellowstone but also for countless other places where rewilding efforts are under way, these studies should serve as a further warning against interventions into complex systems-with nature being one of the most complex. 77 This sentiment is echoed by the authors of the recent Yellowstone study, who claim that we know very little about the consequences of restorations simply because we do not know enough about the (negative) feedback that reinforces the effects of removing predatory animals from an ecosystem. 78 The same study should also serve to highlight the importance of granting wild animals a sovereign status and a right to place consisting of an "equal right, individual or collective, to possess a particular place." 79

*** On a related note, there is the ethics of wildlife research (e.g., tagging, marking, etc.).
According to Donaldson and Kymlicka's (2013) sovereignty model, it is not clear if such research should cease to exist. In an ideal scenario it should. Be that as it may, at the present time wildlife research is being guided by the Three Rs (Replacement, Reduction, Refinement)-a concept that was originally conceived and applied to laboratory-based research. This concept, however, is highly problematic, given the fact that it prioritizes data collection over the welfare of individual animals. 80 What is needed is the articulation of a new concept that does not sacrifice the welfare of animals for the sake of scientific knowledge, even if such knowledge is solely for the goal of species conservation. The only attempt (that I am aware of) to formulate such a moral theory is that by Curzer et al. (2013), where the so-called Nine R theory is articulated. While not perfect, it's nonetheless a vast improvement over the current Three Rs guideline system.

CONCLUSION While advocates of terrestrial cosmopolitanism are motivated by benevolent considerations, a better option for the well-being of wild animals would be a right to place theory. Such a theory is a preferable alternative not only because it protects wild animals against human encroachment, but also because it is better suited to the majority of wild animals, who are specialists (as opposed to generalists) and thus dependent on small ecological niches for their survival. This is something that is best illustrated using the example of the Spanish hogfish (Belize barrier reef) and the swift fox (short grass prairie of Saskatchewan). As Donaldson and Kymlicka point out, the "right to universal mobility and a universal commons is meaningless" because the lives of such "specialist animal species is dependent on very specific ecological niches." 81 That is to say, the principle of ecological niches makes a mockery out of the "individual mobility rights" as advocated by terrestrial cosmopolitans. With regard to humanitarian interventions in wild animal communities, we lack sufficient knowledge to intervene without the risk of unintended consequences. Furthermore, even if we were to obtain ecological and biological omnipotence via ongoing scientific research, the fact remains that predatory and carnivorous animals cannot become vegetarians. Even if such a thing were possible, it would not be ecologically or environmentally desirable: the cycle of predation is a crucial element in the proper functioning of the biosphere even though, as Horta (2013) would claim, it is 'unfair' to r-strategist species. If any humanitarian interventions are to be made, they should be made in the protection of indigenous wild animals against alien species and predatory, domesticated animals (e.g., cats). In the case of alien species-considering that this is a man-made problem-they should be repatriated. 82 When one takes into account the fact that domesticated animals, such as house cats, are responsible for the deaths of approximately 20.7 billion mammals (e.g., mice, rabbits), 83 1.4-3.7 billion bird deaths in the USA, 100-350 million bird deaths in Canada, 84 and are implicated in the extinction of several bird species, human intervention is not only desirable but ethically and morally dictated. In both cases, however, it is imperative that intervention occur within the framework of an "animal-friendly and environmentally friendly" paradigm.
To quote Seyla Benhabib, we have "moral obligations" toward all animals including domesticated ones "and they have moral claims upon us." 85 We should not be killing Burmese pythons-the official policy of the Florida Fish and Wildlife Conservation Commission-any more than we should be killing our house cats. As I write this, an authorial confession weighs on my mind: I dislike snakes. No doubt my dislike stems from an evolutionary fear. Snakes have been responsible for many deaths in human history and, in the case of hunter-gatherer societies such as the Agta (Philippines), pythons are still responsible for one in 20 human deaths. 86 However, taking a cue from Charles Blattberg, who writes that when it comes to politics, "one does not have actually to like the person or persons one is conversing with, only to recognize that there are good reasons for caring about them," 87 I argue that the same applies to animals. We do not have to like any of these animals in order to care for them. Again, this is not to advocate a paternalistic management system in which we take responsibility for protecting and feeding wild animals, thereby turning "wilderness into a zoo." 88 On the contrary. The billions that are now being spent on duplicating natural habitats in zoos 89 and cryobanks-whether the Smithsonian's Global Genome Initiative 90 or China's National Genebank-would be better spent in preserving existing wildlife habitats. If the concept of caring for wild animals is unpalatable to the average person on intrinsic grounds, there is-in the face of the spreading Ebola epidemic-an argument to be made on instrumental grounds. The Ebola virus has, at the time of this writing, the potential of becoming a global epidemic 91 with immense financial, 92 social, and health costs. The Zaire ebolavirus, one of five known species of Ebola virus, 93 exists in three species of fruit bats that are only found deep in the Gabon and Congo rainforests. 94 In other words, Ebola's natural reservoirs, similar to those of other unknown, deadly viruses, are found in wild animals inhabiting inaccessible forests. Ebola is transmitted via the eating of bushmeat-namely, of bats and other 'accidental' secondary hosts such as primates, rodents, and duikers. 95 The destruction of virgin rainforests (through logging, mining, agriculture, human settlements) in combination with the exploitation of wild animals (in poaching for food, traditional medicine and ceremonies, zoos, medical research, and private exotic animal trade) increases the probability of deadly viruses being transmitted to humans. 96 A right to place theory would protect these animals, their habitat, and ultimately humanity, 97 for it is the only political theory that entails territorial sovereignty.

NOTES 1 E.g., Garner (2005) and (2013). 2 At the time of this writing, the list included Horta (2013) (2013), not to mention Donaldson and Kymlicka's replies to Horta and Cochrane in their article and to Svärd, Nurse, and Ryland in (2013b). 3 Kolers as cited by Donaldson & Kymlicka, 2013, p. 146. 4 There appears to be some confusion surrounding the identity of the scholar who coined the term "rooted cosmopolitanism." Some claim it was Kwame Anthony Appiah (Darieva, 2013, p. 26; Freedman, 2005), while others claim it was Mitchell Cohen (Webner, 2012, p. 154; Tarrow, 2005).
That being said, Alan Ryan suggests that the term predates Appiah and Cohen alike and can be traced back to Isaiah Berlin in the 1950s, who is said to have been defending some form of "rooted cosmopolitanism" in response to Stalinist anti-Semitic denunciations of "rootless cosmopolitanism" (personal correspondence, September 28, 2014). Regardless of its origins, Kymlicka and Walker (2012, p. 1) argue that the term was popularized by Appiah and has now been adopted in "various forms by a range of political theorists and philosophers." 5 Kymlicka is, along with Walker, the editor of Rooted Cosmopolitanism, where one reads the following definition: "Rooted cosmopolitanism attempts to maintain the commitment to moral cosmopolitanism, while revising earlier commitments to a world state or a common global culture, and affirming instead the enduring reality and value of cultural diversity and local or national self-government. Even as rooted cosmopolitanism affirms the legitimacy of national self-government, however, it also entails revising our traditional understanding of 'nationhood.' For many rooted cosmopolitans, the nation can no longer be seen as the locus of unqualified sovereignty, exclusive loyalty, or blind patriotism. People's attachment to their ethnic cultures and national states must be constrained by moral cosmopolitan commitments to human rights, global justice, and international law. Rooted cosmopolitanism, in short, attempts to redefine our traditional understandings of both cosmopolitanism and nationhood" (2012, p. 3). 6 With thanks to Avery Kolers for this clarification (personal correspondence, September 30, 2014). 7 For a comprehensive review see Pimentel (2011). 8 See Kremer (2014). 9 See Childs (2011). 10 Kant, 1972, pp. 137-138. 11 The first assertion, that of hospitality ethics, holds that if a stranger does not pose a danger, then he or she is not to be treated with hostility-with or without any claims to hospitality. It should be noted that this type of hospitality ethics is closely related to the ancient Greek view of hospitality as embodied in the concept of φιλοξενία (philoxenia). Philoxenia was central to the divine figure of Zeus Xenios, whereby hospitality was not only a sacred duty, but the harming of a stranger a divine transgression (Newlands and Smith, 2010, pp. 30-32). Hence, from a comparative perspective, Kant's hospitality ethics are weaker than their classical counterpart. 12 Cochrane is of the mind that what is unique about Zoopolis is not so much the synthesis of animal ethics and political theory, but the articulation of a specific position in political theory, namely, that of "group membership" and the "relational position." Consequently, this author states that the appropriate question of concern should not be whether a political theory of an- 48 Derrida, 2010, p. 421. 49 Mau et al., 2012. 50 Ibid., my emphasis. 51 Lowe et al., 2000. 52 Python Challenge, 2013, http://www.pythonchallenge.org/. 53 Landers, 2012; Discovery News, 2012; Gilli, 2012. 54 Cote et al., 2014. 55 The detrimental effects of human intervention, including the limitations of human knowledge and the perverse consequences of human intervention in nature, are discussed at length in Zoopolis's chapter six, "Wild Animal Sovereignty." More recently, Kymlicka expanded on the same theme during an interview with Adriano Mannino (2014b). 56 Ibid. 57 Taleb, 2010, p. 7. 58 For an in-depth discussion of unintended consequences, see Merton, 1949.
That being said, and apart from its genesis in sociological circles, the same topic has found fertile ground in the environmental and economic literature. 59 Taleb, 2012. 60 The precautionary principle is concerned with the prevention of harm, and according to Article 191 of the Treaty on the Functioning of the European Union, the same principle may be invoked when a process is judged to have a dangerous effect, but whose risks "cannot be determined with sufficient certainty." European Union, "The Precautionary Principle." For the history of the precautionary principle, albeit only from the seventeenth century onwards, see UNESCO, 2014. 61 Edelstein, 1943. 62 Hippocrates, 1923. 63 Thanks to Rebecca Futo Kennedy, John Ma and Michael Nafi for their insightful comments regarding the interpretation of this challenging passage. 64 I would like to thank the anonymous referee for pointing out to me that the Hippocratic Oath has been invoked in the (French) literature to justify animal laboratory testing (Susanne, 1996). On this subject, I would add that one encounters two distinct camps: animal welfare advocates (humane treatment of medical research laboratory animals) and animal rights advocates (banning all animal testing). The Hippocratic Oath, it would seem to me, is utilized by animal welfare advocates even if they do not self-identify as such. 65 Bond and Lindburg, 1990, p. 373. 66 Ibid., Goldman, 2014. 67 Altman, Gross and Lowry, 2005, p. 47. 68 Consequently, were the regular feeding patterns found in zoos the result of anthropocentric (blind) prejudice on our part? 69 Interestingly enough, a real-life non-interventionist drama was recently played out on Twitter. Award-winning filmmaker Dereck Joubert was tweeting the story of two lion cubs in the Selinda reserve that were unable to cross a deep river to join their four siblings and their mothers. Their Twitter followers were urging Joubert to intervene and save the cubs from spending a night alone with potentially catastrophic consequences (i.e., being eaten by hyenas). Joubert kept resisting the various calls for interference. The next morning, the cubs' hunger and fear of being left alone for another night overcame their fear of the deep water and they eventually swam across the river. At that point Joubert replied by saying: "If we had stepped in two cubs would have been abandoned because of our smell, or taken to a zoo." Later the same day Joubert reported that the two cubs were doing fine and their mothers had just killed a buffalo, leading one follower, journalist Alan Mairson, to comment: "The buffalo, though…not really his best day, is it?" (October 8, 2014), https://storify.com/Wildlife-Films/lion-drama-at-great-plains-selinda-reserve 70 Lovgren quoting Terborgh, 2005. 71 Ripple and Beschta, 2011, p. 2.
2019-05-15T14:32:03.823Z
2015-03-12T00:00:00.000
{ "year": 2015, "sha1": "1d0c3d79ddb7e06409487b7d3872f50a04f39993", "oa_license": "CCBY", "oa_url": "http://www.erudit.org/fr/revues/ateliers/2014-v9-n3-ateliers01748/1029062ar.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "fd407001d0389eedb83194d41e8d1eed82ac4609", "s2fieldsofstudy": [ "Environmental Science", "Political Science", "Philosophy" ], "extfieldsofstudy": [ "Political Science" ] }
54040910
pes2o/s2orc
v3-fos-license
Delphi poll to assess consensus on issues influencing long-term adherence to treatments in cystic fibrosis among Italian health care professionals Purpose The aim of this study was to determine the level of consensus among Italian health care professionals (HCPs) regarding factors that influence adherence to cystic fibrosis (CF) treatments. Methods A Delphi questionnaire with 94 statements of potential factors influencing adherence was developed based on a literature review and in consultation with a board of experts (n=4). This was distributed to a multidisciplinary expert panel of HCPs (n=110) from Italian CF centers. A Likert scale was used to indicate the level of agreement (1= no agreement to 9= maximum agreement) with each statement. Three rounds were distributed to establish a consensus (≥80% of participant ratings within one 3-point region) and, at the third round, assign a ranking to each statement with a high level of agreement (consensus in the 7–9 range) only. Results Of 110 HCPs (from 31 Italian CF centers who were surveyed), responses were obtained from 85 (77%) in the first, 78 (71%) in the second, and 72 (65%) in the third round. The highest degree of agreement (95.8%) was reached with the statement that the HCP needs to build a relationship with the patient to influence adherence. A high level of agreement was not reached for statements that morbidity and mortality are influenced by the level of adherence to therapy, and no consensus was reached on the statement that age of the patient influences adherence to treatment. Conclusion We found that Italian HCPs endorsed a strong relationship with the patient as being a key driver in improving adherence. There were several areas, such as the influence of adherence on morbidity and mortality, where the consensus of Italian HCPs differed from the published literature. These areas require investigation to determine why these discrepancies exist. Introduction Cystic fibrosis (CF) is a lifelong, complex multisystem disease with significant challenges in treatment management. Treatments can be burdensome, time-consuming, and costly; 1,2 the daily regimen can require ingestion of as many as 40-50 pills, inhalation treatments lasting up to 2 hours, and 2-3 airway clearance sessions of 30 minutes each. 3 Equipment maintenance and preparation of medications, in addition to administrative barriers to maintaining access to medications, add to the time burden. 4 As reported in the World Health Organization (WHO) document, 5 poor adherence to long-term therapies severely compromises the effectiveness of treatment, making adherence a critical issue in the management of patients with chronic diseases. Poor adherence is considered the single greatest cause of treatment failure, 6 and results in increased morbidity and mortality, a reduction in quality of life, and increased health care use and costs. 7,8 Adherence rates for CF treatments are generally below 50%; 9,10 however, objective assessment tools are rarely used, with the vast majority of CF centers relying on clinical impression. 11 In both children and adults with CF, adherence decreases when the complexity of the regimen increases. 12 Rates of adherence are higher with oral medications, lower with nebulized treatment and pancreatic enzymes, and lowest with vitamin treatment, dietary changes, exercise, and physiotherapy. [13][14][15][16][17][18] In children whose parents strongly believe the treatment is necessary, better adherence is more likely. 
19 With improved patient survival, long-term management of CF has become an important focus, but treatment demands become repetitive and burdensome over the course of the disease, making long-term adherence challenging. 13 Patients often carry out a personal cost-benefit analysis, assessing costs against the perceived necessity for, and their concerns about, their treatment regimen. 19 In addition, patients with CF are faced with new challenges as they age, such as the transfer of responsibility for their medical treatment (from parents/caregivers to themselves) during the transition from adolescence to adulthood. 20,21 In the recent update to the European Cystic Fibrosis Standards of Care: Best Practice Guidelines in the treatment of CF, the core components to addressing adherence were determined to be: team ethos with respect to patient care, collaboration with patients, identification of the barriers to adherence, and active support of patients' efforts. 22 Adherence to treatment regimens can be influenced by many emotional factors, including dependency, feeling different, embarrassment at taking drugs in public places, effect on personal freedom, and significant influence on lifestyle. Usually, the focus is on patient-related factors, while the provider-and health system-related determinants of nonadherence, which can have a major effect, are neglected. 23 The aim of this multicenter study was to investigate the level of consensus among Italian health care professionals (HCPs) on issues identified in the literature as influencing adherence to treatments in patients with CF. The study sought to identify areas of consensus and disagreement with the literature. The goal was to identify educational needs among Italian HCPs in order to develop a program of instruments, actions, and operational modalities (applicable in clinical practice) to support and enhance patients' long-term adherence to treatment. Methods The Delphi process, developed in the 1950s, 24 is a communication process widely used to establish consensus among experts when there is insufficient evidence to determine an objective answer. The process has been widely applied to health-related research, 25,26 and involves a panel of experts anonymously completing a series of structured questionnaires, with the responses provided to the participants between rounds and amended in subsequent rounds, until a consensus is reached. The structure of the process is designed to allow group consensus without direct confrontation and to allow participants to gather opinions and react in subsequent rounds. Review and approval of this study by an institutional review board or ethics committee were not required as no patient data were obtained. By completing and returning the questionnaire, each participant consented to being involved in the study. We first established a multidisciplinary expert board (two physicians, one psychologist, and one physiotherapist) and then implemented a bibliographic search of articles in PubMed, published in English language journals after January 1, 1995, using the keywords "compliance" OR/AND "adherence" AND "cystic fibrosis" OR/AND "cystic fibrosis therapy" AND "motivational interviewing" OR/AND "physician-patient relations". Thirty-one papers were identified (one randomized controlled trial, 17 observational studies, six reviews, five systematic reviews, one opinion, and one state-of-the-art review) and sent to the members of the expert board. 
Our review of this collection of manuscripts identified 165 statements related to treatment adherence among patients with CF, which were used to create a questionnaire (in Italian). No pilot testing was conducted; however, each assumption was evaluated three times, first independently by each member of the expert board via e-mail, followed by two collective teleconferences, with a final meeting to refine and validate each assumption. At the end of the selection process, duplications and redundancies were eliminated, and 94 statements were considered for the Delphi questionnaire and divided into three areas: "General Aspects" (six categories), "Roles and Relational Aspects" (three categories), and "Management Aspects" (four categories). A Likert scale was used by the respondents to evaluate the level of agreement with each of the statements in the questionnaire (1= no agreement to 9= maximum agreement). All activities were coordinated by a facilitator. All Italian specialized centers dedicated to care and management of patients with CF were contacted and asked for volunteer participation from different HCPs who met the following criteria, as identified by the multidisciplinary expert board: • At least 2 years' experience in the field (for physicians) • In the last 2 years, at least 50% of their weekly work time dedicated to patients with CF (for non-physicians) The questionnaire was sent by e-mail with a maximum of three reminders; the answers arrived via e-mail, fax, or postal mail. Definitions for consensus and no consensus were decided a priori based on prior literature. 27,28 Consensus was defined as ≥80% of participant ratings within one 3-point region (1-3= low level of agreement; 4-6= borderline; 7-9= high level of agreement). Disagreement was defined as ≥90% of participant ratings within one of two wide ranges (1-6 or 4-9). Results outside the ranges for consensus and disagreement were defined as no consensus. The collected assessments were evaluated for internal consistency and aggregated to obtain a composite judgment. The HCP panel was consulted three times in total. In the first round, the questionnaire was distributed and the level of agreement among HCPs in relation to each statement was determined. In the second round, statements for which there was disagreement during the first round were shared with the HCP panel, who were allowed to alter their responses from the first round. A third round of consultation was conducted to apply rankings to each of the statements that had a high level of agreement following rounds one and two. Rankings were assigned within each of the three areas with one statement defined as the highest rank. The flow chart of the analysis is presented in Figure 1. Calculations for the analysis were performed using the Microsoft Excel 2007 software package (Microsoft Corporation, Redmond, WA, USA). Results The analysis was conducted in Italy from January 2015 to June 2015. All 32 CF centers in Italy were contacted and a total of 110 HCPs from 31 centers participated in the study. The distribution of HCP categories responding in the first round (n=85) is presented in Figure 2; the inclusion criteria for volunteers resulted in the participation of a multidisciplinary expert panel of HCPs. Although participation in the study was purely voluntary, the response rate remained high throughout.
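The a priori consensus and disagreement thresholds described above map onto a simple classification rule. The sketch below (written in Python purely as an illustration, not part of the original study) classifies a single statement from its 1-9 Likert ratings; it assumes the consensus check takes precedence over the disagreement check, and the example ratings are hypothetical.

```python
# Minimal sketch of the a priori Delphi classification rule described above.
REGIONS = {"low": range(1, 4), "borderline": range(4, 7), "high": range(7, 10)}
WIDE_RANGES = {"1-6": range(1, 7), "4-9": range(4, 10)}

def classify_statement(ratings):
    """Classify one statement from its 1-9 Likert ratings."""
    n = len(ratings)
    # Consensus: >=80% of ratings fall within a single 3-point region.
    for label, region in REGIONS.items():
        if sum(r in region for r in ratings) / n >= 0.80:
            return f"consensus ({label} level of agreement)"
    # Disagreement: >=90% of ratings fall within one of the two wide ranges.
    for label, wide in WIDE_RANGES.items():
        if sum(r in wide for r in ratings) / n >= 0.90:
            return f"disagreement ({label})"
    return "no consensus"

# Hypothetical third-round ratings (n=72) for a single statement.
ratings = [9] * 40 + [8] * 20 + [7] * 9 + [5] * 3
print(classify_statement(ratings))  # -> consensus (high level of agreement)
```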
Responses were obtained from 85 HCPs (77%) in the first round and 78 (71%) and 72 (65%) HCPs in the second and third rounds, respectively. Participation was lower among physicians (64%) and higher among nurses (93%), physiotherapists (95%), and psychologists (100%). The distribution of respondents in the second and third round did not vary greatly when compared with the first round, with physicians' participation at 59% in the second round and 55% in the third round, nurses' participation at 86% in both rounds, and physiotherapists' participation similarly consistent across rounds. Tables S1-S3 show results for all 94 statements evaluated in the first and second round, and the rankings assigned (for statements with high-level agreement). A high level of agreement was obtained from the first round on 37 statements (≥80% of responses in regions 7-9) and a low level of agreement (≥80% of responses in regions 1-3) only on one statement. After the second round, the level of agreement was similar to the first round; therefore, a third round was conducted only to assign a rank to the 40 statements with a high level of agreement (seven in "General Aspects", 12 in "Roles and Relational Aspects", and 21 in "Management Aspects"), representing 45% of all statements. Figure 3 presents the ranked scores for statements in the "General Aspects" area. The highest level of agreement (first-level ranking) was the statement "Adherence means agreeing to one's own treatment plan and committing to follow it" (83.1%). Of interest, the treating team's communication skills were deemed more important (in terms of influencing adherence to treatment in patients with CF) than a physician's ability to communicate. Statements assigned a rank in the "Roles and Relational Aspects" area are presented in Figure 4. A very high level of agreement (95.8%) was reached on the statement affirming that building a relationship with the patient is necessary to influence treatment adherence, which was assigned a first-level ranking. All the statements in the "Roles and Relational Aspects" area confirmed the following: the perception of the important role of individual HCPs as well as the treating HCP team as a whole; the accuracy of information conveyed to the patient; the patient's comprehension of that information and the patient guiding change; the ability to listen and to discuss; the need to share the interventions dedicated to supporting adherence and for interventions to be consistent; and the need to define treatment objectives step by step. Statements assigned a rank in the "Management Aspects" area are presented in Figure 5. Of note is the high level of agreement among all HCPs on almost all statements in the "Management Aspects" area (21 out of 29 statements). The quality of communication and the need to have a structured and consistent approach through a personalized, collaborative, open dialogue with the patient were unanimously evaluated as important. All statements, including those that did not reach a consensus, are presented in Tables S1-S3. In addition, differences in opinion between HCPs (as identified in the first round) are presented in Tables S4-S6. The level of agreement for statements regarding morbidity and mortality being influenced by adherence to therapy was lower than for other items (such as those describing both internal and external factors influencing adherence); different levels of agreement were found between the different professionals in the treating team.
Physicians tended to agree more than other HCPs that morbidity is influenced by adherence. Consensus was not reached for the statements that the age of the patient influences adherence to treatment, or that adherence is a problem for the adolescent patient. Among the "Management Aspects", 14% of the HCPs surveyed reached a low agreement (responses in regions 1-3) on the issue of "understanding without judging the patient". Discussion Various barriers to adherence to CF treatments have been described, including lack of time, forgetfulness, unwillingness to take medication in public, high level of polypharmacy, poor patient-HCP communication, lack of disease- and treatment-related knowledge, and the patient's or caregiver's beliefs. 29 The Italian HCPs surveyed in our study reached a high level of agreement with almost half of the factors that influence adherence identified in the literature. These findings are important in order to provide suggestions about new interventional studies, educational materials needed for HCPs, and operational modalities applicable in clinical practice to support and enhance patients' long-term adherence to treatment. A strong relationship between the patient and the CF-treating team was endorsed as a key driver of improved adherence in previous studies 30,31 and by the Italian HCPs surveyed. Establishing effective communication and interaction between patients, their families, and caregivers is one of the most important, and potentially one of the simplest, approaches to increase adherence in CF. Even considering the distinct functions of the different HCPs in treating CF, focusing on the patient rather than the disease can reduce differences in the perceptions of what each HCP can do to encourage and support the patient's adherence to treatments in the long term. In a large meta-analysis of studies conducted between 1949 and 2008, the probability of adherence was 2.1 times greater for patients treated by a physician who was classified as a good communicator. 32 A collaborative approach centered on patient care was also found to be important in motivating patients. 31,33 New approaches to training and training activities to increase competencies in the use of novel patient-centered tools, such as the motivational interview, 34 are important and needed, but should be investigated through interventional studies incorporating them into daily standard clinical activities. Although a high level of agreement was reached among HCPs after the second round for 45% of all statements, agreement was not obtained for the remaining statements (eg, "The mortality/morbidity of patients with cystic fibrosis is influenced by the level of adherence to the therapeutic prescriptions received" and "Adherence is influenced by the age of the patient"). It is useful to determine on which statements HCPs did not reach an agreement, as this can stimulate interest and inform future discussions and the development of focused training plans. However, in some instances, the low level of agreement among HCPs could be a result of differences in the interpretation of questions or slight semantic differences. There were also some notable discrepancies between the level of agreement among the Italian HCPs surveyed and the published literature.
The influence of adherence on morbidity and mortality has been well described, 8 as has the fact that younger patients tend to have higher rates of treatment adherence than adolescents and adults. 35 However, in our study of Italian HCPs, a high level of agreement was not reached on these statements. These discrepancies could be explained by the differences in roles within the treating team; physicians oversee the management of patients with CF and seem more conscious of morbidity outcomes and the overall consequences of low adherence, while other HCPs, such as physiotherapists and psychologists, are focused on specific aspects of the management of CF patients. Furthermore, the views of Italian HCPs could be influenced by local factors, such as the Italian education system and cultural beliefs and/or values, which could account for the discrepancies observed. Additional research will be necessary to determine why these discrepancies exist between the opinions of HCPs and the published literature and to develop educational programs and training materials to ensure that HCPs are aware of the influence of these factors on adherence to treatment. Limitations Our descriptive, non-interventional study has several limitations. First, the multidisciplinary expert board consisted of only four HCPs; however, one of these HCPs is head of the Società Italiana Di Fibrosi Cistica (SIFC, a national scientific society) working group on adherence, of which the remaining three HCPs are all members. This was agreed to be sufficient for the purposes of this study. Second, the HCPs surveyed represent a self-selected population that is engaged enough to commit to three rounds of surveys and may not be representative of other HCPs. Although HCPs were engaged, participation rates decreased slightly between rounds, which may be due to reasons such as attrition, reduced personal motivation, or the amount of time required to complete the requested rounds of consultation. While the participation rate was lower among physicians than with psychologists, the physician participation rate was still relatively high, with the difference observed between professions possibly due to greater motivation among psychologists to complete questionnaires on this topic. We also only included HCPs from Italian CF centers; therefore, the results cannot necessarily be applied to other European or North American CF centers. Adherence was not explicitly defined, although several statements addressed how the HCPs defined adherence (eg, "agreeing to one's own treatment plan and committing to follow it"; "an individual behavior comprising the degree of concordance with the medical advice received"). Furthermore, the Delphi poll measured HCPs' beliefs about what affected long-term medication adherence. However, medication adherence is a patient behavior, and not a HCPrelated behavior. Therefore, the actual driving force behind medication adherence may be different to that considered by HCPs in determining why patients continue to take their medications, or not. An additional limitation is that there is no universally agreed definition of consensus, with several factors, such as the number of respondents, aim of the research, and resources, influencing the cutoff. 26 Even with these limitations, the Delphi technique has been established as a valuable means for structuring group discussion among experts and raising issues for debate. 
Conclusion We have identified important areas of consensus and disagreement regarding factors that influence adherence to CF treatments among Italian HCPs. To Italian physicians, adherence generally means the patient agrees to and commits to following a specific treatment plan; a strong physician-patient relationship is a key factor in influencing adherence. In contrast, more standard measures of adherence (medication possession ratio or proportion of days covered) rely solely on a patient's medication refill history and do not account for additional factors that might influence adherence (eg, the patient taking the medication in the correct way). These results are a first step in developing training tools and educational materials to work with HCPs to improve overall adherence to treatment, which can ultimately lead to improved long-term outcomes.
Towards System State Dispatching in High-Variety Manufacturing This study proposes a paradigm shift towards system state dispatching in the production control literature on high-variety manufacturing. System state dispatching lets the decision on what order to produce next be driven by system-wide implications while trading off an array of control objectives. This contrasts with the current literature that uses hierarchical order review and release methods that control the system at release, whilst myopic priority rules control order dispatching based on local queue information. We develop such a system state dispatching method, called FOCUS, and test it using simulation. The results show that FOCUS enables a big leap forward in production control performance. Specifically, FOCUS reduces the number of orders delivered late by a factor of two to eight and mean tardiness by a factor of two to ten compared to state-of-the-art production control methods. These results are consistent over a wide variety of conditions related to routing direction, routing length, process time variability and due date tightness. This study argues for a paradigm shift towards system state dispatching in the Production Planning and Control (PPC) literature on high-variety manufacturing. System state dispatching is a novel concept that focuses on controlling the manufacturing system at dispatching. High-variety manufacturers are typically Make-To-Order companies that face the challenge of variability in demand, process time and routing (Stevenson, Hendry, & Kingsman, 2005). To ensure that high performance can be achieved despite these challenges, PPC decisions are of vital importance to coordinate complex order flow in real-time. Traditionally, PPC decisions are made using myopic priority rules (i.e. sequence each queue individually, Conway, Maxwell, and Miller 1967) using only local information. Today's literature uses Order Review and Release (ORR) methods that assume a strict decision hierarchy, where centralized release decisions use global information to set boundaries for decentralized priority rules (Chakravorty, 2001;Thürer, Fernandes, & Stevenson, 2020;Thürer, Land, & Stevenson, 2015;Thürer et al., 2014;Thürer, Stevenson, Silva, Land, & Fredendall, 2012). While this was an important advantage in the (not so recent) past, Industry 4.0 developments, including the Internet of Things and novel sensing technologies, increasingly enable decision making based on real-time information from anywhere in the manufacturing process (Chen, Gong, Rahman, Liu, & Qi, 2021;Lee, Azamfar, & Bagheri, 2021;Olsen & Tomlin, 2020;Yao et al., 2019). This questions the need to decompose PPC decisions into strict hierarchies since all system-relevant information can be evaluated in a single decision. We argue that the current stochastic PPC literature needs a paradigm shift towards system state dispatching whereby dispatching -the decision which order to select next for processing -is driven by system-wide implications. This overcomes myopia, as the value of order characteristics in the local queue is evaluated based on the system state. To our knowledge, there is no prior study in the literature on high-variety manufacturing that incorporates real-time and system state information into dispatching.
We use discrete event simulation to accurately represent the complex dynamics and stochastics of high-variety In the 1970s, scholars increasingly started to realize that control over the entire system was needed to avoid myopic control decisions (Gelders & Kleindorfer, 1974;Hax & Meal, 1975). In response, scholars started to develop hierarchical PPC methods where centralized decisions set the boundaries for decentralized decisions (Bertrand & Muntslag, 1993;Bertrand & Wijngaard, 1986). For high-variety manufacturing systems, the most common approach is to add a central 'release' decision before dispatching (Kingsman, Tatsiopoulos, & Hendry, 1989;Land & Gaalman, 1996;Melnyk & Ragatz, 1989). Release decides to release or withhold an order from the manufacturing system by keeping it in a preprocess order pool until the next release opportunity. This decision is thought to be an important control mechanism to improve on-time delivery performance (Melnyk, Ragatz, & Fredendall, 1990;Thürer, Fernandes, Stevenson, Qu, & Tu, 2019) and allows using simple priority rules for dispatching (Bechte, 1988;Land, Stevenson, & Thürer, 2014). The underlying logic was that limiting the number of orders in the queue through controlled order release reduced the myopic effects of priority rules (Bechte, 1988;Ragatz & Mabert, 1988). Of these hierarchical ORR methods, the concept of Workload Control (WLC) received the most attention. WLC includes a Work-In-Progress (WIP) balancing mechanism to ensure stable but short queue lengths in the entire manufacturing system. Today's most advanced WLC methods combine highly sophisticated ORR methods with relatively simple priority rules (e.g., Fernandes, Thürer, Pinho, Torres, & Carmo-Silva, 2020;Fernandes, Thürer, Silva, & Carmo-Silva, 2017;Haeussler & Netzer, 2020;Kundu, Land, Portioli-Staudacher, & Bokhorst, 2020;Portioli-Staudacher & Tantardini, 2012;Thürer & Stevenson, 2021). For instance, Fernandes et al. (2020) uses FCFS and MODD as priority rules for dispatching, while using a real-time optimizing ORR method. Key Objectives: Average & Dispersion of Lateness The key control objectives of any PPC method are to ensure high on-time delivery performance and avoid very late deliveries (Kellerer, Rustogi, & Strusevich, 2020;Thürer et al., 2020). This can be achieved by keeping the average lateness and the dispersion of lateness among orders low (Land, 2006;Thürer et al., 2015). Figure 1 shows the distribution of lateness and illustrates the effects of reducing the average lateness (left-hand side) or its dispersion (right-hand side), showing that both lead to a reduction in the number of orders that are late (also known as tardy orders). Throughout the years, a vast array of 'control mechanisms' have been published in the literature that can reduce the average lateness or dispersion of lateness. The best understood control mechanisms are discussed below, starting with the mechanisms associated with average lateness. (Baker, 1974). Reduce Average Lateness In the literature, three control mechanisms can be distinguished to reduce the average lateness; reducing average throughput time using an 'SPT-mechanism', preventing starvation using 'WIP balancing', and responding to starving work centres using a 'starvation response'. The SPT-mechanism favours orders with a short process time over orders with a long process time (Bai, Tang, & Zhang, 2018). 
Prioritizing orders with a short process time has the benefit, on a system level, that successive work centres are quickly replenished, which in turn avoids potential throughput losses (Thürer et al., 2015). Besides the priority rule SPT, the ORR literature uses pool sequencing rules that include an SPT-mechanism such as Capacity Slack (Enns, 1995) which implicitly prioritize orders with short process times for release. WIP balancing can reduce the average throughput time similar to the idea of line balancing or heijunka (Thürer et al., 2012). The aim is to prevent starving work centres by distributing WIP equally over the queues (and thus avoiding potential throughput losses). This is typically achieved by ORR methods that fill WIP up to a target -although a pre-defined WIP target is not strictly required (Irastorza, 1974;van Ooijen, 1996). A popular implementation is Kanban, which enforces balance by limiting WIP levels at each work centre (Berkley, 1992;Ohno, 1988). The WLC literature developed ORR methods that balance the workloads -i.e., WIP for each work centre measured in process time units -to account for process time variability (Kundu et al., 2020;Land & Gaalman, 1998;Portioli-Staudacher & Tantardini, 2012;Thürer & Stevenson, 2021;Thürer et al., 2012). Arguably, priority rules such as Work in Next Queue (WINQ) control WIP balance by prioritizing queues with lower WIP levels. While WIP balancing aims to prevent starving work centres, they can still occur. In such cases, quickly reacting by sending orders using a starvation response mechanism is important (Land & Gaalman, 1998). Reduce the Dispersion of Lateness The current literature uses the two distinct control mechanisms 'slack timing' and 'pacing' to reduce the dispersion of lateness. Slack timing favours orders with less slack time, which is the time left that can be spent on nonprocessing activities. This idea is integrated into many priority rules (e.g., SLACK or EDD) and pool sequencing rules such as Periodic Release Date (Thürer et al., 2015). Pacing ensures that orders move through their routing with relatively equal intervals. This avoids orders getting stuck for too long, risking that the order might never be able to complete all its operations before its due date. This is especially important for orders with a longer routing. Pacing is integrated into priority rules such as the Number of Remaining Operations, Operational Due Date (ODD), MODD or Slack for each Operation (Baker & Kanet, 1983;Conway et al., 1967;Kanet & Kayya, 1982). Evaluation of Control Mechanisms While multiple control mechanisms have been discussed in isolation, many proposed PPC methods deploy a combination of various control mechanisms. For instance, ORR methods typically evaluate orders in a sequence dictated by slack timing, while the final selection of orders to be released is based on WIP balancing criteria. Also, the priority rule MODD switches between control mechanisms slack timing (using ODD) and the SPT-mechanism (using SPT) in periods of low and high workloads respectively (Land, Stevenson, Thürer, & Gaalman, 2015). Thus, both the dispersion of lateness and average lateness are supposed to be controlled (Thürer et al., 2015). Furthermore, WIP balancing and a starvation response have been monitored by ORR methods on a manufacturing system level. This is in contrast to the control mechanisms related to the dispersion of lateness which have been used myopically. 
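To make concrete what dispatching on local queue information means, the following sketch (an illustration, not code from the cited studies) implements three of the priority rules named above, SPT, ODD and WINQ. Each rule looks only at the orders waiting in front of a single work centre, plus, in the case of WINQ, the workload of each order's next work centre.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QueuedOrder:
    process_time: float           # p_ij at the current work centre
    operational_due_date: float   # o_ij for the current operation
    next_centre: Optional[int]    # first downstream work centre, or None

def spt(queue, load_per_centre):
    """Shortest Process Time: pick the quickest operation in the local queue."""
    return min(queue, key=lambda o: o.process_time)

def odd(queue, load_per_centre):
    """Operational Due Date: pick the operation closest to being late."""
    return min(queue, key=lambda o: o.operational_due_date)

def winq(queue, load_per_centre):
    """Work In Next Queue: pick the order whose next centre holds the least work."""
    return min(queue, key=lambda o: load_per_centre.get(o.next_centre, 0.0))

queue = [QueuedOrder(2.1, 14.0, 3), QueuedOrder(0.6, 18.5, 1), QueuedOrder(1.4, 12.0, None)]
load_per_centre = {1: 5.2, 3: 0.8}
print(spt(queue, load_per_centre).process_time)          # 0.6
print(odd(queue, load_per_centre).operational_due_date)  # 12.0
```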
For instance, ORR methods frequently use an order pool sequence rule to reduce the dispersion of lateness (Thürer et al., 2015) but this rule neglects the urgency of orders in the manufacturing system in comparison with orders in the pre-process order pool. This is in contrast to WIP balancing, where ORR methods make order release dependent on the WIP balance in the entire manufacturing system. Discussion: System State Dispatching To our knowledge, there is no systematic investigation into dispatching based on the state of the full manufacturing system -and thus looking beyond the order queue at dispatching. While hierarchical ORR methods take a system-wide overview when controlling order release, dispatching must correct for order flow disturbances -especially downstream (Land et al., 2014). However, dispatching is controlled by priority rules that base their decision only on local information. To our knowledge, only the priority rule WINQ, and its closely related variants, partly include system information by considering the WIP of the next downstream work centre. Nonetheless, is this rule ineffective in situations where orders all have the same downstream work centre e.g., a pure flow shop. Moreover, it neglects: (i) the system developments beyond the next downstream work centre, (ii) characteristics of orders in the queue and (iii) the need for multiple control mechanisms for effective control of the manufacturing process. Though not including system state information, another set of priority rules introduce the orders queuing time as real-time information in their decision process (e.g., Chang, 1997;Vepsalainen & Morton, 1987). However, using the order's queuing time faces inherent circularity; queuing time is used as a decision variable but the queuing time depends on the dispatching decision itself. The resulting queuing time is therefore notoriously difficult to predict (Sabuncuoglu & Comlekci, 2002). Therefore, authors have either used constant queueing time estimates (i.e. neither real-time nor system state information) by introducing a constant 'look ahead' scaling parameter (Morton & Pentico, 1993;Vepsalainen & Morton, 1987), which makes the resulting rule again myopic as decisions are solely based on information form orders in the local queue. The need to avoid local myopia was identified as far back as Conway and Maxwell (1962), who already concluded -regarding dispatching -that "we still believe that a superior (nonlocal) rule can be advised". However, in those years researchers foresaw data availability problems in practice (Bertrand & Wijngaard, 1986;Conway & Maxwell, 1962;Melnyk & Ragatz, 1989). This shifted the literature's attention towards ORR methods to reduce myopia whilst the debates on dispatching dimmed down (one notable exception being Land et al., 2014). Recent developments such as the Internet of Things and sensing technologies allow for more data to be collected and makes system-wide information available at a local level (Chen et al., 2021;Lee et al., 2021;Olsen & Tomlin, 2020;Yao et al., 2019), offering an opportunity to avoid myopia and increase performance. Therefore, we call for a paradigm shift in the stochastic PPC literature on high-variety manufacturing towards system state dispatching. System State Dispatching Method FOCUS We define a system state dispatching method, referred to as Flow and Order Control Using System state dispatching (FOCUS) to illustrate the effect of our proposed paradigm shift. 
FOCUS includes all five main control mechanisms that have been discussed in Section 2.2. Each control mechanism is embedded in a 'projected impact function' that returns a 'projected impact' value between [0, 1]. For a given order, the projected impact represents the value of a control mechanism, which is obtained by comparing an order characteristic -e.g., process time -with a system state variable -e.g., WIP balance. This comparison is executed by a projected impact function. Whenever selecting an order for dispatching, FOCUS uses the weighted average projected impact of all five functions to trade-off multiple control mechanisms. As this average will be dominated by those mechanisms that have the most impact on either average lateness or the dispersion of lateness given the system state, FOCUS dynamically switches between the mechanisms with the most projected impact over time. To formalize this, we introduce some notation. Orders are denoted with i ∈ I and work centres are denoted with j ∈ J. The set of orders in the system are denoted by O ⊂ I (i.e. orders that arrived but did not yet complete their operations). In turn, orders in the (virtual) queue of j are denoted with Q j ⊆ O and the orders that are being processed are denoted by H j ⊆ O. Then the orders that are located at work centre j are denoted by W j = Q j ∪H j . To accurately represent high-variety manufacturing systems, we treat process times, routing and order inter-arrival time as continuous random variables where process times and routing become known upon order arrival (cf. Thürer et al., 2020). As a consequence, order dispatching takes place in continuous-time t whenever a completed order leaves the work centre while the queue is not empty, or when an order arrives at an idle work centre. Therefore, we can safely assume that two dispatching decisions never take place at exactly the same time. FOCUS selects one order for dispatching from all candidate orders in the queue Q j of work centre j that awaits a dispatching decision. The formalization of FOCUS starts by outlining the five projected impact functions. Thereafter, the weighted average projected impact and the order selection process of FOCUS are defined. Since we use FOCUS to illustrate our proposed paradigm swift, we translate existing control mechanisms to the system state dispatching paradigm. As a consequence, since the literature for some control mechanisms (e.g., WIP balancing) is far more developed than other mechanisms (e.g., starvation response), the projected impact functions have varying degrees of complexity. Projected Impact Functions SPT-mechanism π : We consider the process times p ij of all remaining operations from all orders i ∈ O as the relevant system state, which extends the typical approach in the ORR and priority rule literature of only considering the process times in the queue Q j of j where the dispatching decision is taken. We define P = {(i, j), . . . } as the set of pairs (i, j) of orders i with remaining operations (thus i is in set O) and work centres j that execute these remaining operations. We evaluate order i ∈ Q j for dispatching using the projected SPT-mechanism impact function π(·), which is defined as The projected impact returned by π is between 0 and 1, and that it is close to 1 if the process time of an order is small relative to the largest process time of some order somewhere in the system. This allows to overcome local myopia since π compares the orders within and beyond the queue. 
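The display equation defining π did not survive extraction of this text. As a hedged illustration, the sketch below assumes that the impact scales the order's process time against the largest remaining process time of any operation anywhere in the system, so that short operations approach an impact of 1; the exact normalisation is an assumption consistent with the surrounding description rather than the authors' published formula.

```python
def projected_spt_impact(p_ij, remaining_process_times):
    """Projected SPT-mechanism impact pi for an order with process time p_ij.

    remaining_process_times holds the process times of all remaining
    operations (the set P) of all orders currently in the system, so the
    comparison looks beyond the local queue.  Assumed form: 1 minus the
    order's share of the largest remaining process time in the system.
    """
    p_max = max(remaining_process_times)
    return 1.0 - p_ij / p_max

# Example: a 0.5-unit operation while the longest remaining operation
# anywhere in the system needs 3.2 time units.
print(projected_spt_impact(0.5, [3.2, 1.1, 0.5, 2.0]))  # 0.84375
```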
At the same time, π remains versatile to the global system state by comparing the orders in the queue with the order that can better be used to implement a control mechanism -albeit by a dispatching decision in the near future. WIP balancing β : Similar to the WLC literature, WIP is measured in process time units -called workload -to account for process time variability. Before the projected WIP balancing impact function can be defined, we must determine how to: (i) measure the workload at each work centre, (ii) compute the change in workload if an order would be dispatched and (iii) evaluate the impact on WIP balance if i would be dispatched. (i) We measure workload l(·) that is located at a work centre j as (ii) When considering an order for dispatching, we evaluate the change in workload l + ij for any j ∈ J if i would leave its imminent work centre k − i ∈ J. Let k + i ∈ J indicate the first downstream work centre to which i moves after leaving k − i , then the changed workload l + ij for i given any j is defined as (iii) Ideally, the workload is perfectly balanced if a fraction 1/|J| of the total workload in the system is located at each work centre j ∈ J after selecting order i for dispatching. Therefore, we seek a measure that attains the highest value when a perfect WIP balance (i.e. l + ij / j∈J l + ij = 1/|J|) is achieved by selecting i. In contrast, the measure must return the lowest value whenever a single work centre contains all the workload (i.e. l + ij / j∈J l + ij = 1) indicating the ultimate WIP imbalance. This is captured by the entropy function e(·), which is defined as (Shannon, 1949) where the maximum entropy e max = ln(|J|) and the minimum entropy e min = 0 correspond with the perfect WIP balance and the ultimate WIP imbalance, respectively. At order selection, we want to know the ability of an individual order to change the existing WIP balance. Let e − be the entropy of the WIP balance before dispatching, then we define the change in entropy c(·) as Now we define projected WIP balancing impact function β(·) as The projected impact function β gives a positive projected impact to orders that can improve WIP balance whilst the selection amongst orders that cannot improve WIP balance is driven by other criteria. Starvation Response ξ : Work centres that are starving (defined as work centres without waiting orders in the queue) are included in the starvation set S = {j ∈ J | Q j = ∅}. We define the projected impact equal to projected SPT-mechanism impact π (Equation 1) if an order moves to a starving work centre. Therefore, the projected starvation response impact function ξ(·) is defined as Formalizing ξ in such a way, we give the highest impact if the process time of i is short, so the order can quickly move to a starving work centre. Slack timing τ : Let R i ⊆ J be the set of work centres in the remaining routing of i and d i the due date of i, then the slack s(·) is defined as Slack represents the time an order can still spend on non-processing activities from time t until its due date d i and is used by the projected slack timing impact function τ (·), which is defined as Using τ , we provide an increasingly higher projected impact to orders closer to their due date whilst orders that passed their due date receive the highest projected impact to encourage selection. The ultimate selection amongst these late orders is driven by other criteria than slack timing. 
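The display equations for the workload measure, the entropy and the resulting β likewise did not survive extraction. The sketch below is a hedged reconstruction: it takes the Shannon entropy of the workload shares, shifts the dispatched order's workload contribution from its current to its next work centre, and normalises the entropy gain by the maximum entropy ln|J| while clipping losses at zero; that normalisation is an assumption. The starvation-response impact ξ follows the prose directly: it equals the SPT-mechanism impact when the order's next work centre is starving and is zero otherwise.

```python
import math
from types import SimpleNamespace

def workload_entropy(loads):
    """Shannon entropy of the workload distribution over work centres."""
    total = sum(loads.values())
    if total <= 0:
        return 0.0
    shares = [l / total for l in loads.values() if l > 0]
    return -sum(s * math.log(s) for s in shares)

def projected_wip_balance_impact(order, loads):
    """Projected WIP-balancing impact beta (assumed normalisation).

    The order's process time is removed from its current work centre and its
    next operation's process time is added to its next work centre; only
    entropy gains (improved balance) yield a positive impact.
    """
    e_before = workload_entropy(loads)
    after = dict(loads)
    after[order.current_centre] -= order.process_time
    if order.next_centre is not None:
        after[order.next_centre] = after.get(order.next_centre, 0.0) + order.next_process_time
    e_max = math.log(len(loads)) if len(loads) > 1 else 1.0
    return max(0.0, (workload_entropy(after) - e_before) / e_max)

def projected_starvation_impact(order, starving_centres, spt_impact):
    """Projected starvation-response impact xi, as described in the text."""
    return spt_impact if order.next_centre in starving_centres else 0.0

order = SimpleNamespace(current_centre=0, next_centre=2, process_time=1.5, next_process_time=0.8)
loads = {0: 6.0, 1: 3.5, 2: 0.5}
print(round(projected_wip_balance_impact(order, loads), 3))               # 0.155: balance improves
print(projected_starvation_impact(order, starving_centres={2}, spt_impact=0.7))  # 0.7
```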
Pacing δ : If |R i | is the number of remaining routing steps, then the slack per remaining operation v(·) is defined as Correcting slack for the number of remaining operations allows us to dictate the pace at which the orders' remaining operations need completion. Thus, we define the projected pacing impact function δ(·) as Note that the projected impact is higher if the time for each remaining operation becomes shorter. For already late orders, the ultimate selection is driven by other criteria than slack timing by setting the projected impact at one. Order Selection FOCUS selects the order z from the queue Q j for dispatching that has the highest weighted average projected impact for the five projected impact functions. We denote the weights by w 1 , . . . , w 5 and define weighted average projected impact I(·) of each order i at j as Hence, the selected order z ∈ Q j is defined as Simulation Model Similar to existing ORR methods and priority rules, the performance effect of FOCUS in a stochastic high-variety manufacturing system is analytically intractable given the inherent complexity of such systems. Therefore, we use discrete event simulation to obtain a Monte-Carlo estimate of FOCUS' performance. Since system state dispatching is a novel concept, FOCUS is tested in a wide variety of manufacturing systems. The included PPC methods, to which FOCUS is compared, are described after the manufacturing system and order characteristics have been outlined. Thereafter, we discuss the performance measures and experimental design. Manufacturing System and Order Characteristics To aid generalizability, six stylized manufacturing systems are used to test FOCUS in a wide variety of settings. The selected stylized systems have been used extensively in prior literature on PPC decisionmaking in high-variety manufacturing Thürer et al., 2020Thürer et al., , 2015Thürer et al., , 2012. These models are kept as parsimonious as possible to avoid unwanted interaction effects. Therefore, this study assumes no machine breakdowns, infinite raw materials and setups are included in process times. Furthermore, the orders' routing and process times are known upon arrival. An overview of the order and manufacturing system characteristics is provided in The manufacturing systems have six or twelve work centres, each consisting of a single capacity source, to vary the size of the system state. To allow for a wide variety of products to be produced, high-variety manufacturing systems are frequently organized in various layouts. Therefore, the routing length -i.e. the number of operations to be executed -and direction are varied (Oosterman, Land, & Gaalman, 2000). At one extreme is the Pure Flow Shop (PFS) for which the routing length is fixed and directed (i.e. all orders have the same routing). Conversely, the Pure Job Shop (PJS) -also known as a randomly routed job shop (Conway et al., 1967) -has a random routing length and random routing direction (i.e. routing is order specific). In between is the General Flow Shop (GFS), which uses a directed routing but a random routing length. For the PFS, routing length equals the number of work centres (six or twelve) in the manufacturing system. For the PJS and GFS, the routing length is uniformly distributed between one and the number of work centres, whilst each work centre has an equal probability of being included in the routing set. In the case of the GFS, this routing set of work centres is sorted in an ascending manner to create routing direction. 
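The weighted-average selection defined in the Order Selection step above reduces, in code, to an average followed by an argmax over the local queue. The following minimal sketch assumes the five projected impact functions (such as the ones sketched earlier) share a common signature and uses the equal weights of 1/5 from the main experiments; it illustrates the selection logic rather than reproducing the authors' implementation.

```python
def weighted_projected_impact(order, system_state, impact_functions, weights):
    """Weighted average projected impact I(i) of one candidate order."""
    return sum(w * f(order, system_state) for f, w in zip(impact_functions, weights))

def focus_dispatch(queue, system_state, impact_functions, weights=None):
    """Select the order z in the queue with the highest weighted average
    projected impact.  impact_functions is the list of five callables
    (SPT-mechanism, WIP balancing, starvation response, slack timing and
    pacing); with no weights given, every mechanism counts equally (1/5)."""
    if weights is None:
        weights = [1.0 / len(impact_functions)] * len(impact_functions)
    return max(queue, key=lambda o: weighted_projected_impact(
        o, system_state, impact_functions, weights))
```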
Re-entry at the same work centre is allowed for none of the systems. Process times p ij are distributed following a 2-Erlang distribution with a mean of one after truncation (cf. Oosterman et al., 2000;Thürer et al., 2020;Thürer & Stevenson, 2021). The distribution is truncated at four-time units to avoid orders having a process time larger than workload targets of the ORR method discussed below. Orders arrive continuously whilst the inter-arrival times follow an exponential distribution to implement a stochastic process with independent arrivals. Similar to previous works (Thürer et al., 2015(Thürer et al., , 2012, the mean inter-arrival time is set to achieve an average utilization level of 90%. For the GFS and PJS, this implies a mean inter-arrival time of 1/λ = 0.684 and 1/λ = 0.602 for six and twelve work centres respectively. For the PFS, the mean-inter arrival time is 1/λ = 1.111 for six and twelve work centres. Due dates are obtained using the Total Work Content (TWK) procedure (Enns, 1995;Harrod & Kanet, 2013). Let t a i be the time at which order i arrives and K is a constant hyperparameter, then d i are defined as Recall that R i is the remaining routing set of i (and thus equal to the full routing set at the time of arrival). Appropriate values of K are highly dependent on the manufacturing system characteristics. To obtain results in the same performance range, hyperparameter K was tuned using pre-tests in such a way that the priority rule ODD achieves a percentage tardy around 15% in an uncontrolled release setting. This allowed obtaining reliable and relevant results across all experimental factors and performance measures discussed below. This implies that K is 8.74, 9.31 and 8.16 for six work centres and 8.08, 8.66 and 7.25 for twelve work centres in the PJS, GFS and PFS respectively. Experimental Setup FOCUS The weights w 1 , . . . , w 5 from FOCUS are all set to 1/5 to make no a-priory assumptions of the importance of one of the control mechanisms. Additionally, we want to study the contribution of each of the five control mechanisms. Therefore, we added five FOCUS configurations where one (of the five) control mechanism was removed. For instance, 'FOCUS -π' implies that FOCUS is used without π by setting its weight w 1 = 0 while the other weights w 2 , . . . , w 5 are set to 1/4. Benchmark Production Planning and Control methods FOCUS is compared with an array of PPC methods published in the literature. The priority rules FCFS, ODD, SPT and MODD are used in an immediate release setting. In addition, an ORR method -called LUMS COR -is used to control the manufacturing system hierarchically, as this is the common approach in the state-of-the-art literature (Fernandes et al., 2017;Kundu et al., 2020;Thürer et al., 2020;Thürer & Stevenson, 2021). Priority rules While the rules FCFS and SPT are straightforward, multiple versions of ODD are published in the literature. The priority rule ODD uses the operational due date o ij for order i at work centre j. This study uses the best performing and parameter-free version of o ij as outlined by Land et al. (2014). Let t r i be the release time and r ij is the routing step number, then ODD is defined as Recall the that |R i | indicates the number of routing steps and equals the total number of routing steps at release. In experiments without controlled release, note that t r i = t a i as orders are immediately released upon arrival. 
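The display equations for the TWK due dates and for the operational due dates o_ij used by ODD were also lost in extraction. The sketch below follows the standard Total Work Content rule and the parameter-free operational due date construction attributed above to Land et al. (2014); the exact forms are an interpretation of the prose rather than a verbatim reproduction, and the example order is hypothetical.

```python
def twk_due_date(arrival_time, process_times, k):
    """Total Work Content (TWK) due date: arrival time plus K times the
    total work content of the order's routing.

    process_times holds the p_ij of every operation in the routing; k is the
    tuned hyperparameter (e.g. 8.74 for the six work centre PJS).
    """
    return arrival_time + k * sum(process_times)

def operational_due_dates(release_time, due_date, n_operations):
    """Operational due dates o_ij (parameter-free version, assumed form): the
    allowance between release and the final due date is spread evenly over
    the routing steps, giving one milestone per operation."""
    allowance = (due_date - release_time) / n_operations
    return [release_time + r * allowance for r in range(1, n_operations + 1)]

# Example: an order arriving (and released) at t=100 with operations of
# 1.2, 0.8 and 2.0 time units in a six work centre PJS (K = 8.74).
d_i = twk_due_date(100.0, [1.2, 0.8, 2.0], k=8.74)
print(d_i)                                   # about 134.96
print(operational_due_dates(100.0, d_i, 3))  # milestones of roughly 111.65, 123.31, 134.96
```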
If o ij is used in conjunction with a ORR method, then generally t r i = t a i since orders remain in the pre-process order pool before release. MODD is defined as max{o ij , t + p ij } to dynamically switch between ODD (o ij > t + p ij ) and SPT (o ij < t + p ij ). In our experiments, we test the priority rules FCFS, SPT, ODD and MODD without hierarchical control of the system via an ORR method. ORR method The hierarchical ORR method LUMS COR (Thürer et al., 2012) is included for two reasons. Firstly, LUMS COR is an established ORR method that is compared to various alternatives using highly similar manufacturing systems as used here (e.g., Fernandes et al., 2017). Therefore, the inner workings and performance explanations of LUMS COR are well documented (Fernandes et al., , 2017Thürer et al., 2012). Secondly, compared to LUMS COR, no other ORR method in the current literature shows a clear performance advantage for all relevant performance indicators in a wide variety of manufacturing systems (cf. Fernandes et al., 2020). LUMS COR periodically evaluates orders for release by assessing if the workload contribution of an order fits within the workload target of each work centre. If an order does not fit within the targets of any work centre, then it is withheld in a pre-process order pool until the next release period. Besides periodic release, LUMS COR includes a continuous release trigger which releases an order to an idle work centre, even if it violates workload targets of other work centres. A pool sequence rule is used to determine the sequence in which orders in the pool are evaluated for release. See Thürer et al. (2012) for an elaborate description. LUMS COR requires setting additional parameters. Since the manufacturing systems studied here are the same or very similar as in previous studies, we adopt the overall best-performing parameters (Thürer et al., 2012). Therefore, the workload targets for each work centre are varied between 4.95, 5.85 and 6.75, whilst the periodic release interval is set to four-time units. The pool sequence rule EDD is used since the due date setting method TWK already includes information on the relative size of the order. The priority rule MODD is used for order dispatching since the current literature generally regards it as the best priority rule for ORR methods Kundu et al., 2020) as it is adapted or ORR methods. Throughout the remainder of this study, we refer to LUMS COR as ORR together with the used workload target. For instance, ORR (4.95) refers to LUMS COR using a workload target of 4.95. Performance Measures Delivery performance is the main performance objective in high-variety manufacturing (Sterna, 2021;Teo, Bhatnagar, & Graves, 2012;Thürer et al., 2020). Percentage tardy provides the most general indication of delivery performance. But we include other delivery performance measures based on lateness L i , which is negative if orders are delivered early, and tardiness T i = max{0, L i }. Previous work used mean tardiness, mean lateness and the standard deviation of lateness as measures for delivery performance (e.g., Haeussler & Netzer, 2020;Sterna, 2021;Yan et al., 2016). However, these measures tend to neglect extreme late deliveries as the tail of the lateness distribution can be very long. Mean squared tardiness T 2 i is used to capture this form of undesirable delivery performance. Similar to Thürer et al. 
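Both benchmark mechanisms described above can be summarised in a few lines. The MODD rule follows the definition given in the text (priority max{o_ij, t + p_ij}, smallest value first). The periodic-release step of LUMS COR is sketched only in simplified form: the published method evaluates a corrected, routing-position-dependent workload contribution, which is omitted here for brevity, so the fragment should be read as an illustration of the fit-against-target logic rather than the full method; order attributes such as routing_process_times are assumed for the sketch.

```python
def modd_dispatch(queue, now):
    """Modified Operational Due Date: dispatch the order with the smallest
    max(o_ij, now + p_ij).  Under light load the o_ij term dominates (ODD
    behaviour); once several orders are already late, now + p_ij dominates
    and the rule effectively switches to SPT."""
    return min(queue, key=lambda o: max(o.operational_due_date, now + o.process_time))

def periodic_release(pool, loads, targets):
    """Simplified sketch of LUMS COR's periodic release.

    Orders are evaluated in pool-sequence order (the pool is assumed to be
    pre-sorted, e.g. by EDD).  routing_process_times is an assumed attribute
    mapping each work centre in the order's routing to its process time; an
    order is released only if its contribution fits within the workload
    target of every work centre in its routing, otherwise it stays in the
    pool until the next release period.
    """
    released = []
    for order in pool:
        if all(loads[j] + p <= targets[j] for j, p in order.routing_process_times.items()):
            for j, p in order.routing_process_times.items():
                loads[j] += p
            released.append(order)
    return released
```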
(2020), we consider the combination of percentage tardiness and mean tardiness as the key criteria, whilst mean throughput time, the standard deviation of throughput times, mean lateness, the standard deviation of lateness and mean squared tardiness are used to support our conclusions. Experimental Design The above model was implemented in Python using the SimPy module. The full factorial experimental design includes thirteen PPC methods in six manufacturing systems. The included priority rules are FCFS, ODD, MODD, and SPT. The ORR method has three different workload targets. Besides the full FOCUS model, the experimental design includes five FOCUS configurations where one of the five control mechanisms is excluded. All these methods are tested in a PJS, GFS and a PFS with six and twelve work centres. This results into 13 × 6 = 78 main experiments. Besides the main experiments, we added a set of 'sensitivity experiments' with tighter due dates and increased process time variability to check if our conclusions are not unique to specific numerical settings. Tighter due dates were based on a reduction of hyperparameter K that increased the percentage tardy for ODD from 15% to 20%, leading to an additional 78 experiments. For process time variability, the 2-Erlang distribution was replaced with an untruncated Log-normal distribution to be able to vary the coefficient of variation between 0.5 and 1. In these experiments, we had to exclude three ORR methods as these methods cannot handle untruncated distributions, leading to another 10 × 6 × 2 = 120 experiments. So, we consider 78 main experiments and 78 + 120 sensitivity experiments, and so 276 in total. Each experiment is carried out over 10, 000 time units and replicated 100 times. For each experiment, an additional warm-up period of 3, 000 time units is used to avoid the initialization bias. This keeps the computational time within reasonable limits while still obtaining an accurate estimate of performance. Common random numbers are used to increase the significance of the performance differences between experiments. These parameters are in line with other studies (Thürer et al., 2012) and were found to be sufficient for our experiments. Results To obtain a first impression from the results of our 78 main experiments, we use an ANOVA to statistically analyse the impact of our main experimental variable PPC method (PPCM) in all six manufacturing systems (MFS). The statistical results for mean tardiness and percentage tardy can be found in Table 2 whilst the statistical results of our supportive measures can be found in Table 5 in Appendix A.1. For all performance measures, both the main and interaction effects are statistically significant at pvalue < 0.05. For percentage tardy and mean tardiness, the main effect PPCM has the highest F -ratio, suggesting that choosing an appropriate PPC method is more influential for on-time delivery than the different characteristics of the six manufacturing systems. The averages for our two most important performance measures, mean tardiness µ(T i ) and percentage tardy %(T i ), are presented in Table 3 for all 78 main experiments. The results of all performance measures can be found in Appendix A.2 (Table 6 and Table 7 for the systems with six and twelve work centres respectively). Reducing the Average & Dispersion of Lateness The results in Table 3 show that FOCUS considerably outperforms all benchmark priority rules and ORR methods on percentage tardy and mean tardiness. 
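For completeness, the delivery performance measures listed above can be computed directly from the completion times and due dates collected after the warm-up period; the short helper below (an illustration, not the authors' code) returns the measures reported in the result tables.

```python
from statistics import mean, pstdev

def delivery_performance(completions):
    """Delivery performance measures from (completion_time, due_date) pairs."""
    lateness = [c - d for c, d in completions]   # L_i, negative if early
    tardiness = [max(0.0, L) for L in lateness]  # T_i = max(0, L_i)
    return {
        "percentage_tardy": 100.0 * sum(t > 0 for t in tardiness) / len(lateness),
        "mean_tardiness": mean(tardiness),
        "mean_squared_tardiness": mean(t * t for t in tardiness),
        "mean_lateness": mean(lateness),
        "std_lateness": pstdev(lateness),
    }

print(delivery_performance([(105.0, 110.0), (130.0, 120.0), (98.5, 98.0)]))
```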
To further investigate these results, Figure 2 presents the performance frontier (grey line) between mean tardiness (x−axis) and percentage tardy (y−axis), where priority rules have red dots, ORR has blue dots, and the FOCUS versions have green dots. We remark that not all PPC methods are depicted in Figure 2 since some -e.g., FCFS -are located too far from the performance frontier or show almost the same results (in the case of the FOCUS versions). When specifically looking at FOCUS, FOCUS -β (FOCUS excluding WIP balancing) and FOCUS ξ (FOCUS excluding a starvation response), the results indicate that the frontier is fully defined by versions of FOCUS. Compared to SPT (the second-best policy on percentage tardy), FOCUS -β can reduce the percentage tardy by a factor of two in a six work centre PJS up to a factor of ten for twelve work PJS. At the same time, FOCUS also dominates the performance on mean tardiness by realizing reductions compared to ORR (6.75) of at least 63% and compared to MODD of at least 47% in all studied manufacturing systems. These performance improvements are often obtained by FOCUS -β which is consistently best in the six and twelve work centre PJS and GFS. The performance frontier, shown in Figure 2, suggests that FOCUS is highly effective in adhering both key control objectives. When looking at our supportive performance measures for a reduction in the average lateness, the results in Appendix A.2 indicate that FOCUS can reduce the mean throughput time and mean lateness further compared to ORR and MODD. Only SPT is able to realize a slightly lower mean throughput time and mean lateness. Typically, successfully reducing the average lateness amplifies the dispersion of lateness (Thürer, Stevenson, Land, & Fredendall, 2019), which would result in deteriorated performance on mean tardiness and mean squared tardiness. Compared to FOCUS, all ORR variants, SPT and MODD have a higher mean squared tardiness. Only ODD has a lower mean squared tardiness than FOCUS in PJS and GFS without a lower mean tardiness. Therefore, the best policy that achieves synergies between both key control objectives is FOCUS by mutually reducing the mean throughput time, mean lateness, mean tardiness and mean squared tardiness. Figure 3 presents an overview of all five FOCUS configurations where one control mechanism is removed compared to the full FOCUS configuration. We only show the systems with six work centres, as the twelve work centre systems show the same pattern. The vertical dotted lines show the performance on percentage tardy and mean tardiness of the full FOCUS configuration. If a version of FOCUS is outside the dotted line, this shows that leaving out the indicated control mechanism weakens performance. The most influential control mechanisms are the SPT-mechanism π and slack timing τ as shown by the results of FOCUS -π and FOCUS -τ , respectively. When one of these two control mechanisms is left out, performance deteriorates on both percentage tardy and mean tardiness. As can be seen by FOCUS δ, performance also deteriorates when pacing is left out although the effect is less severe. In contrast, WIP balancing (see FOCUS-β) negatively influence performance in a PJS and GFS, whilst its influence in a PFS is minimal. This suggests that pure WIP balancing to prevent starvation is not effective at dispatching, especially not if other control mechanisms (such as the SPT-mechanism) can already reduce the mean throughput time and mean lateness. 
This result contrasts with the WLC literature, which argues that WIP balancing is a key mechanism to reduce throughput times (Thürer et al., 2014) or control the manufacturing system at release (Thürer, Fernandes, et al., 2019). In a similar vein, a starvation response ξ (see FOCUS -ξ) seems to negatively influence performance, especially if routing becomes less directed (i.e. GFS and PJS). In Section 6, we use the above observations to evaluate if we can leave out more control mechanisms. Sensitivity Analysis This section summarizes the results for the sensitivity experiments. Detailed results can be found in Appendix A.3. Due date tightness: When due dates become tighter, our conclusions remain qualitatively the same as FOCUS keeps outperforming all other PPC methods in all six manufacturing systems. One exception is the result that the control mechanism starvation response ξ starts to contribute positively in both PFS systems. Process time variability: When process time variability increases, FOCUS -β remains best in all PJS and GFS manufacturing systems. For the PFS systems, FOCUS is Pareto efficient by trading-off a higher percentage tardy for a lower mean tardiness. In these systems, the priority rules SPT (all systems) and MODD (only PFS) can reduce the percentage tardy further than FOCUS at the cost of increasing -in the case of SPT even doubling -mean tardiness. Similar to increased due date tightness, we find that a starvation response ξ has a positive performance contribution in a PFS. Since the truncation point of the process time distribution is removed in this setting, the results indicate that FOCUS' performance is robust to extremely high process times. Discussion of FOCUS' Performance To explain FOCUS' performance, we use time series data instead of the steady-state averages (presented earlier), because the latter is important for reliable statistical estimates but fails to show the interaction between control decisions and developments in the system state (Land et al., 2015). We focus on the results of a six work centre GFS, as this system is argued to be most realistic (cf. Enns, 1995) and because our observations are the same in the other systems. Over time, we collected WIP levels and relate these to lateness performance. Figure 4 illustrates the system state developments under FOCUS -β compared to MODD, ORR (6.75), as these are the most competitive methods from each literature stream. Time is shown on the x-axis whilst the y-axis shows lateness L i and the WIP level in terms of load ( j∈J i∈Wj p ij ) in the manufacturing system. The results in Figure 4 show that MODD and ORR (6.75) have extreme late deliveries, particularly in periods of peak loads. While this is a known outcome of MODD (Land et al., 2015), we can also see that ORR cannot prevent extreme late deliveries even though peak loads are buffered in the pre-process order pool -explaining the lack of peak loads for ORR (6.75) in the system. FOCUS -β also delivers some orders very late but this is less common and less extreme in comparison with MODD and ORR (6.75). Note how MODD generates higher loads than FOCUS -β, which becomes especially visible during peak loads, for example, at time 2, 100 till 2, 500. To better understand how FOCUS takes decisions over time, we are mainly interested in the decisions of FOCUS -β in low load vs. peak load periods. Therefore, we specifically look at time 2, 100 till 2, 500 and collect additional system state information, which is presented in Figure 5. 
We gather the output of the projected impact functions π, ξ, τ and δ of the selected order (i.e. z) for every dispatching decision. To get a general impression, graph A in Figure 5 shows the moving average of these projected impacts over the imminent decision and the 200 preceding and 200 succeeding dispatching decisions. At the same time, we collect system state information: the entropy in the system e (right y-axis, graph B), the load (left y-axis, graph B), the mean and max of process times pij (graph C), slack s(·) (graph D) and slack per operation v(·) (graph E). As loads (graph B, Figure 5) increase, we can see that the mean slack (graph D) and mean slack per operation (graph E) decrease, indicating that more orders get close to their due date. At order selection, this leads to a higher projected impact from τ (slack timing) and δ (pacing), as seen in graph A. However, as -by definition -τ and δ are fixed at (close to) 1 for all (almost) late orders in the queue, this makes selection amongst (almost) late orders increasingly based on the effectiveness of the SPT-mechanism π. This switch to the SPT-mechanism is particularly important in periods of peak loads (Land et al., 2015). Unlike MODD, this switch by FOCUS -β is not myopic as it depends on the system state; π is neglected if none of the (almost) late orders in the queue has a short process time compared to other orders somewhere in the system. In such a manner, FOCUS -β considers the characteristics of orders in the queue but remains responsive to the system state by neglecting a control mechanism if it can better be applied in a near-future dispatching decision.

We found earlier that the role of the starvation response ξ is mixed. Graph A in Figure 5 shows that ξ -on average -becomes less important when loads increase (graph B). We can also see that the entropy values indicate an increasingly balanced system (graph B), as fluctuations in entropy become less frequent and less severe (recall that the maximum entropy e_max = 1.79 for a six work centre GFS). Thus, starvation becomes increasingly unlikely during peak loads, resulting in a minor influence of ξ on mean tardiness and percentage tardy.

When we compare FOCUS logic with ORR logic, a major difference is that ORR assumes a hierarchical sequence of control mechanisms. ORR logic holds that the system must be controlled at release using WIP balancing, thereby limiting the ability of priority rules to select non-urgent orders. This logic was primarily discussed at the inception of the ORR literature (Bechte, 1988; Kingsman et al., 1989; Melnyk & Ragatz, 1989; Ragatz & Mabert, 1988) and, to our knowledge, has not been challenged since. For instance, Ragatz and Mabert (1988) mentioned that "jobs released to the shop floor too early will compete for resources (machine time) with more urgent jobs and may interfere with the progress of those jobs". As can be seen in Figure 4, ORR's ability to reduce extremely late deliveries is marginal, indicating that ORR's performance is heavily influenced by the ability of priority rules to handle late deliveries. Although not explicitly noted, ORR's dependence on priority rules is also reported by more recent theoretical (Kundu et al., 2020; Land et al., 2014) and empirical work (Soepenberg, Land, & Gaalman, 2012). As we explained above, FOCUS uses projected impact to measure the effectiveness of each control mechanism and adapts to the system state.
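The entropy curve in graph B can be read as a balance measure of the load across work centres. Assuming it is computed as the Shannon entropy of the work-centre load shares -an assumption that is consistent with the stated maximum of e_max = 1.79 ≈ ln 6 for six work centres -a minimal sketch is:

```python
import math

# Minimal sketch (an assumption about the entropy measure, consistent with the
# stated maximum e_max = 1.79 ~ ln(6) for six work centres): Shannon entropy of
# the shares of load across work centres. Evenly balanced load gives maximum
# entropy; load concentrated at one work centre gives low entropy, i.e. other
# work centres are likely to starve.

def load_entropy(loads):
    total = sum(loads)
    if total == 0:
        return 0.0
    shares = [l / total for l in loads if l > 0]
    return -sum(s * math.log(s) for s in shares)

balanced = [2.0] * 6                       # equal load at all six work centres
skewed   = [10.0, 0.5, 0, 0, 0, 0.5]       # load piled up at one work centre

print(round(load_entropy(balanced), 2))    # 1.79  (= ln 6, the stated maximum)
print(round(load_entropy(skewed), 2))      # ~0.37, an unbalanced system
```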
This overcomes myopic behaviour at dispatching, limiting the need to use ORR to control delivery performance since non-urgent orders do not compete for resources with urgent ones.

Conclusion

This study argues for a paradigm shift in the stochastic production control literature towards system state dispatching. This is in contrast with the existing literature, where a hierarchical order review and release (ORR) method controls the system by releasing orders whilst priority rules dispatch orders from the queue. Instead, system state dispatching integrates system-wide information into order dispatching decisions by trading off an array of control mechanisms. We illustrated the effectiveness of system state dispatching by developing a novel production control method called FOCUS that comprises five control mechanisms: the Shortest Process Time (SPT) mechanism, Work-In-Progress (WIP) balancing, starvation response, slack timing and pacing. Using a simulation experiment, FOCUS was tested in six different manufacturing systems and considerably outperformed the priority rules SPT and Modified Operational Due Date and the ORR method LUMS COR. Compared to these methods, FOCUS reduces the percentage tardy and the mean tardiness by at least a factor of two. These results are robust over all considered manufacturing system types, regardless of due date tightness or the (maximum) routing length. When assessing FOCUS' excellent performance, we found that not all five control mechanisms of FOCUS are effective. Specifically, WIP balancing -aiming to prevent starving work centres by spreading WIP equally over the work centres -does not influence performance, or sometimes even influences it negatively, despite being a key mechanism of the ORR approaches to production control. These findings strongly support our claim that a paradigm shift towards system state dispatching is needed in the PPC literature on high-variety manufacturing.

Managerial Implications

Under the name of Industry 4.0 or Smart Industries, practitioners advocate the use of advanced data collection and sharing technologies such as sensor networks and autonomous communication via the Internet of Things, enabling the use of system-wide and real-time information (Chen et al., 2021; IBM, 2021; Lee et al., 2021; McKinsey, 2020; Olsen & Tomlin, 2020). In this paper, we show how to make use of system state information in control decisions specifically in high-variety manufacturing. Our results indicate that managers should indeed integrate state information in the deployment of control mechanisms at dispatching to avoid local myopia. More specifically, we found that the combination of control mechanisms needed depends on the state of the manufacturing system. Therefore, even if system state information is not available, managers should find ways of 'looking beyond the queue' in the deployment of control mechanisms, as this substantially contributes to better delivery performance.

Limitations & Future Research

A limitation of this study is the stylized character of the manufacturing systems assumed in our simulation model. We believe this is justified by the explanatory nature of this study and enables us to gain experimental control over important parameters such as capacities, arrivals and process time variability. However, future research can test FOCUS in more complex settings where, e.g., machine failures, capacity changes or seasonal demand changes are considered, as well as in empirical settings.
A second limitation is that we did not consider controlled release in FOCUS, as release can reduce WIP levels in the system (Thürer et al., 2012). This was done to keep our study focused on the inclusion of state information at dispatching and to evaluate the effect on delivery performance. However, the short mean throughput time of FOCUS already suggests that, even in an uncontrolled release setting, average WIP levels are quite low. These levels might become even lower if future research adds controlled release to FOCUS by including a trade-off between selecting an order from the pre-process order pool or the queue. This potentially allows reducing WIP while maintaining the benefits of system state information at dispatching.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Data statement

All data and the simulation code are available upon request from the corresponding author.

Appendices

A Detailed Results & Main Experiments

Some tables in the appendix use abbreviations of performance measures which are listed in Table 4, where t^a_i is the arrival time, t^c_i is the completion time and d_i is the due date of order i.

Table 4: Performance measure abbreviations (columns: Performance Measure, Notation, Measure Formulation).

A.3 Details & Results Sensitivity Analysis

Due Date Tightness: The results from the main experiments might be unique to our due date allowance and, therefore, we increase due date tightness by decreasing the due date hyperparameter K such that the percentage tardy for ODD increases from 15% to 20%. Detailed results can be found in Table 8 and Table 9.

Table 9: Tight due dates for twelve work centre manufacturing systems.

Process Time Distribution: We replace the truncated 2-Erlang distribution (used in our main experiments) with an untruncated log-normal distribution and vary the coefficient of variation (CV = σ/µ) between 0.5 and 1 to increase from moderate to high variability, respectively, while keeping the mean at 1 time unit. The results are presented in Table 10 and Table 11 for moderate variability, while the results for high variability are shown in Table 12 and Table 13.
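For these process-time experiments, the log-normal distribution has to be parameterized so that its mean stays at 1 time unit while the CV is set to 0.5 or 1. A minimal sketch of that parameterization using the standard log-normal identities is given below; the sampling code is illustrative and is not the simulation model used in this study.

```python
import math
import random

# Minimal sketch: parameterize a log-normal process-time distribution with a
# given mean and coefficient of variation (CV = sigma/mu of the process time
# itself). Standard identities for the underlying normal parameters (m, s):
#   CV^2 = exp(s^2) - 1      and      mean = exp(m + s^2 / 2)

def lognormal_params(mean: float, cv: float):
    s2 = math.log(1.0 + cv * cv)
    m = math.log(mean) - s2 / 2.0
    return m, math.sqrt(s2)

for cv in (0.5, 1.0):                       # the two settings used above
    m, s = lognormal_params(mean=1.0, cv=cv)
    sample = [random.lognormvariate(m, s) for _ in range(100_000)]
    est_mean = sum(sample) / len(sample)
    print(f"CV={cv}: mu={m:.3f}, sigma={s:.3f}, sample mean ~ {est_mean:.2f}")
```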
2022-08-04T15:10:30.544Z
2022-08-01T00:00:00.000
{ "year": 2022, "sha1": "525bf53eaa62720e479cf89b2b9c16ff667b8f42", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.omega.2022.102726", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "1f5b5735a9044d515b266f78050d36c384155ddb", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
41662499
pes2o/s2orc
v3-fos-license
The Peptide Repertoires of HLA-B27 Subtypes Differentially Associated to Spondyloarthropathy (B*2704 and B*2706) Differ by Specific Changes at Three Anchor Positions*

HLA-B*2704 is strongly associated with ankylosing spondylitis. B*2706, which differs from B*2704 by two amino acid changes, is not associated with this disease. A systematic comparison of the B*2704- and B*2706-bound peptide repertoires was carried out to elucidate their overlap and differential features and to correlate them with disease susceptibility. Both subtypes shared about 90% of their peptide repertoires, consisting of peptides with Arg2 and C-terminal aliphatic or Phe residues. B*2706 polymorphism influenced specificity at three anchor positions: it favored basic residues at P3 and PΩ-2 and impaired binding of Tyr and Arg at PΩ. Thus, the main structural feature of peptides differentially bound to B*2704 was the presence of C-terminal Tyr or Arg, together with a strong preference for aliphatic/aromatic P3 residues. This is the only known feature of B*2704 and B*2706 that correlates to their differential association with spondyloarthropathy. The concomitant presence of basic P3 and PΩ-2 residues was observed only among peptides differentially bound to B*2706, suggesting that it impairs binding to B*2704. Similarity between peptide overlap and the degree of cross-reaction with alloreactive T lymphocytes suggested that the majority of shared ligands maintain unaltered antigenic features in the context of both subtypes.

The molecular basis for the very strong association of HLA-B27 with ankylosing spondylitis (AS) 1 (1), reactive arthritis (2), and other spondyloarthropathies remains a major unsolved problem. Hypotheses on the pathogenic role of HLA-B27 fall into three main categories. The classical "arthritogenic peptide" hypothesis assumes that a self-peptide presented by HLA-B27 would be the target of autoimmune CTLs activated by external antigen, such as bacteria (3). The occurrence of arthritis in HLA-B27 mice lacking β2-microglobulin (4) and the fact that the HLA-B27 heavy chain can form homodimers in vitro (5) suggested that HLA-B27 might act as a noncanonical peptide-presenting molecule, perhaps leading to activation of unusual T-cell responses (6,7). It has also been suggested that misfolding of HLA-B27 heavy chains, perhaps exacerbated by infection or other environmental factors, might lead to endoplasmic reticulum stress responses and inflammation independent of antigen presentation (8,9). Other effects of HLA-B27 on modulating bacteria-host interactions have also been proposed (10,11). Supportive evidence for the arthritogenic peptide hypothesis comes from population studies showing differential association of some HLA-B27 subtypes with AS. Like most other class I antigens, HLA-B27 shows extensive polymorphism in human populations. As many as 25 HLA-B27 subtypes have been described thus far (see www.ebi.ac.uk/imgt/hla). The apparently low frequency of many of these precludes a statistical analysis of their putative association with AS. At least B*2702, B*2704, B*2705, and B*2707 are linked to this disease (12). In contrast, B*2706 and B*2709 are not associated or are weakly associated with AS (13-16). B*2706 has been found with significant frequency in Southeast Asia and the Pacific and at much lower frequency in continental China.
A population study carried out in Thailand initially showed that whereas B*2704 was strongly linked to AS in this population, no B*2706 AS patients could be found, despite the significant frequency of this allele among healthy controls (13,17). This differential association of B*2704 and B*2706 in the same population was subsequently confirmed in two additional studies carried out in Indonesia (14) and among Singapore Chinese (15). Moreover, in segregation studies carried out in families in whom both B*2704 and B*2706 occurred, AS was observed only in B*2704-positive individuals (18). Two B*2706-positive AS patients were found in China (12), suggesting that lack of association of this subtype with AS is not absolute and might be modulated to some extent by additional genetic factors. However, the very low frequency of B*2706 in China has thus far precluded case-control or family segregation studies in these populations (17). B*2704 and B*2706 differ by only two amino acid changes: H114D and D116Y (19-21). Both of these changes are located in the same strand of the β-pleated sheet floor of the peptide binding site of HLA-B27 and are therefore not accessible to direct contact by the T-cell antigen receptor (22). However, due to their location, they can influence peptide specificity. Previous studies from our laboratory showed that a major difference between B*2704 and B*2706 is the more restricted specificity of the latter subtype for peptides with nonpolar C-terminal residues, including only aliphatic and Phe residues, whereas B*2704 also binds peptides with C-terminal Tyr (23). Peptide binding studies using poly(Ala) peptide analogs (24) suggested that B*2704/B*2706 polymorphism could have more complex effects than those revealed by pool sequencing, by modulating peptide specificity at secondary anchor positions. These previous studies suggested a direct relationship between the lack of or weak association of B*2706 with AS and its more restricted peptide specificity, relative to B*2704, but they failed to answer some relevant questions. First, to what extent does B*2706 polymorphism change the peptide repertoire of B*2704? That is, how many of the peptides presented by B*2704 fail to bind in vivo to B*2706? Second, besides the known effect on C-terminal residue specificity, are there other differential features between B*2704 and B*2706 ligands? Third, are the antigenic features of shared peptide ligands different when presented in the context of either B*2704 or B*2706? To address these questions, we have carried out a systematic comparison of the B*2704- and B*2706-bound peptide repertoires to determine their degree of overlap. In addition, we have used mass spectrometry (MS) to sequence a sufficiently large set of natural ligands to assess the differential structural features of the peptides bound to B*2704 and B*2706. Finally, we have used an extensive panel of alloreactive CTLs to compare the degree of antigenic similarity between B*2704 and B*2706 with the overlap of their peptide repertoires.

MATERIALS AND METHODS

Cell Lines and Monoclonal Antibodies-HMy2.C1R (referred to hereafter as C1R) is a human lymphoid cell line with low expression of its endogenous class I antigens (25,26). B*2704- and B*2706-C1R transfectant cells were described elsewhere (23). C1R cell lines were cultured in Dulbecco's modified Eagle's medium supplemented with 7.5% fetal bovine serum (both from Invitrogen). RMA-S is a murine cell line deficient in the transporter associated with antigen processing (27,28).
RMA-S transfectant cells expressing B*2704 or B*2706 and human β2-microglobulin have been described previously (29). These cells were cultured in RPMI 1640 medium supplemented with 10% fetal bovine serum.

Isolation of B*2704- and B*2706-bound Peptides-This was carried out using 10^10 C1R transfectant cells lysed in 1% Nonidet P-40 in the presence of a mixture of protease inhibitors, after immunopurification of HLA-B27 with the W6/32 monoclonal antibody and acid extraction, exactly as described elsewhere (32). HLA-B27-bound peptide pools were fractionated by HPLC at a flow rate of 100 μl/min as described previously (33), and 50-μl fractions were collected.

Mass Spectrometry Analysis and Sequencing-The peptide composition of HPLC fractions was analyzed by matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) MS using a calibrated Kompact Probe instrument (Kratos-Shimadzu) operating in the positive linear mode, as described previously (33). Alternatively, a Bruker Reflex™ III MALDI-TOF mass spectrometer (Bruker-Franzen Analytic GmbH, Bremen, Germany) equipped with the SCOUT™ source in positive ion reflector mode was also used, as described previously (34). Peptide sequencing was carried out by quadrupole ion trap nanoelectrospray MS/MS in an LCQ instrument (Finnigan ThermoQuest, San Jose, CA), exactly as detailed elsewhere (35,36). In some cases, peptide sequencing was also done by post-source decay (PSD) MALDI-TOF MS, as described previously (34). In all cases, peptide-containing HPLC fractions were dried and resuspended in 5 μl of methanol/water (1:1) containing 0.1% formic acid. Aliquots of 0.5 or 1 μl were used for MALDI-TOF or nanoelectrospray MS analyses, respectively.

Synthetic Peptides-Peptides were synthesized using the standard solid-phase Fmoc (N-(9-fluorenyl)methoxycarbonyl) chemistry and purified by HPLC. The correct composition and molecular mass of purified peptides were confirmed by amino acid analysis using a 6300 Amino Acid Analyzer (Beckman Coulter, Palo Alto, CA), which also allowed their quantification.

Epitope Stabilization Assay-The epitope stabilization assay used to measure peptide binding was performed as described previously (29). Briefly, B*2704- or B*2706-RMA-S transfectant cells were incubated at 26°C for 22 h in RPMI 1640 medium supplemented with 10% heat-inactivated fetal bovine serum. They were then washed three times in serum-free medium, incubated for 1 h at 26°C with various peptide concentrations without fetal bovine serum, incubated at 37°C, and collected for flow cytometry after 2 h (B*2704) or 4 h (B*2706). HLA-B27 expression was measured using 50 μl of hybridoma culture supernatant containing the monoclonal antibody ME1. Binding of the RRYQKSTEL peptide, used as a positive control, was expressed as C50, which is the molar concentration of the peptide at 50% of the maximum fluorescence obtained over the concentration range used (10^−4 to 10^−8 M). Binding of other peptides was assessed as the concentration of peptide required to obtain the fluorescence value at the C50 of the control peptide. This was designated as EC50.

Isolation of HLA-B*2704-specific CTL Clones and Cytotoxicity Assay-B*2704-specific CTL clones were obtained from five unrelated HLA-B27-negative donors as follows. About 10^6 peripheral blood mononuclear cells from each donor were stimulated for a week with a mixture of 10^5 B*2704-positive lymphoblastoid cell lines and 10^6 autologous peripheral blood mononuclear cells irradiated at 80 and 50 grays, respectively.
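The C50/EC50 readout just described amounts to reading a dose-response curve at a target fluorescence. The following Python sketch illustrates that computation with hypothetical fluorescence values and a simple log-linear interpolation between measured concentrations; it is an illustration of the definition above, not the analysis procedure actually used in this study.

```python
import math

# Minimal sketch with hypothetical numbers: interpolate, on a log-concentration
# scale, the peptide concentration at which a dose-response curve reaches a
# target fluorescence (50% of the control peptide's maximum for C50; the
# control's fluorescence at its C50 for the EC50 of other peptides).

def conc_at_fluorescence(concs_M, fluor, target):
    """Log-linear interpolation; concs_M sorted ascending, paired with fluor."""
    for (c1, f1), (c2, f2) in zip(zip(concs_M, fluor), zip(concs_M[1:], fluor[1:])):
        if min(f1, f2) <= target <= max(f1, f2):
            if f1 == f2:
                return c1
            frac = (target - f1) / (f2 - f1)
            return 10 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("target fluorescence outside the measured range")

concs = [1e-8, 1e-7, 1e-6, 1e-5, 1e-4]     # 10^-8 to 10^-4 M, as above
control = [5, 20, 60, 95, 100]              # hypothetical fluorescence units
c50 = conc_at_fluorescence(concs, control, max(control) / 2)
print(f"C50 of control peptide ~ {c50:.2e} M")

test = [2, 8, 30, 70, 90]                   # hypothetical weaker binder
ec50 = conc_at_fluorescence(concs, test, max(control) / 2)
print(f"EC50 of test peptide ~ {ec50:.2e} M")
```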
About 300,000 responder cells from the primary mixed lymphocyte cultures were subsequently stimulated weekly under the same conditions in the presence of 30 units/ml recombinant interleukin 2 (a kind gift of Hoffmann-La Roche). Alternative stimulation of mixed lymphocyte cultures with two different B*2704-positive lymphoblastoid cell lines, KNE (A1, A2, B8, B*2704, DR2, DR3) and WEWAK I (A11, A24, B62, B*2704, Cw2, Cw4, DR2), was used to improve the yield of B27-specific CTLs by minimizing restimulation of T cells specific for non-B27 alloantigens. T-cell clones were obtained by limiting dilution, seeding serial dilutions of stimulated T cells in 96-well plates containing 2,000 irradiated stimulator lymphoblastoid cells/well and 20,000 irradiated feeder peripheral blood mononuclear cells/well in the presence of 30 units/ml recombinant interleukin 2. Cells in wells growing below the statistical limit for clonality were screened for HLA-B27 reactivity using a standard 51Cr release cytotoxicity assay (37) against B*2704-C1R targets, using untransfected C1R cells as a negative control. Mixed lymphocyte cultures and T-cell clones were grown in Iscove's modified Dulbecco's modified Eagle's medium with GlutaMAX I (Invitrogen), supplemented with 100 units/ml penicillin, 0.1 mg/ml streptomycin sulfate, and 0.05 mg/ml gentamicin (all from Sigma) and 15% Myoclone (Invitrogen). T-cell clones were restimulated weekly, as described above, in the presence of recombinant interleukin 2. The reactivity of T-cell clones with B*2706 was assessed with B*2706-C1R transfectant cells, usually at an effector:target ratio of 1:1, using the same 51Cr release cytotoxicity assay as described above.

RESULTS

B*2704 and B*2706 Bind Largely Overlapping Peptide Repertoires-The B*2704- and B*2706-bound peptide pools were isolated from the corresponding C1R transfectant cells and fractionated by HPLC under identical conditions in consecutive runs. Peptide-containing fractions were analyzed by MALDI-TOF MS. The MS spectrum of each HPLC fraction from one subtype was compared with the MS spectra of the correlative, previous, and following HPLC fractions from the other subtype. This was done to account for slight shifts in retention time between consecutive chromatographic runs. Ion peaks with the same (±1) mass/charge (m/z) among the HPLC fractions compared were considered to be identical peptides shared by both subtypes. Ion peaks in one HPLC fraction not found in the counterpart from the other molecule were considered to be peptides differentially bound to one subtype. Of a total of 969 ion peaks from B*2704 and a total of 943 ion peaks from B*2706, 849 (88% and 90%, respectively) were common to both subtypes, 120 ion peaks from B*2704 (12%) lacked a detectable counterpart in B*2706, and 94 ion peaks from B*2706 (10%) lacked a detectable counterpart in B*2704 (Table I). These results indicate that B*2704 and B*2706 share about 90% of their peptide repertoires, and each subtype binds about 10-12% of peptides that are not found in the other subtype. To identify peptides common to both molecules but much more abundant in one of them, we selected ion peaks in each HPLC fraction whose intensity was >50% of the maximum signal intensity in that fraction. Their amount was measured as the total number of millivolts corresponding to each ion peak in all HPLC fractions in which it was detected.
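The comparison just described is, in effect, a tolerant matching of m/z peak lists between each fraction of one subtype and the correlative, previous and following fractions of the other. A minimal Python sketch of that bookkeeping is given below, using hypothetical peak lists rather than the actual spectra; peaks flagged as differential would then be candidates for sequencing, and a 10-fold difference in summed peak intensity between subtypes flags a quantitative difference.

```python
# Minimal sketch with hypothetical peak lists: classify ion peaks of one
# subtype as "shared" if a peak with the same m/z (+/- 1) occurs in the
# correlative, previous or following HPLC fraction of the other subtype,
# and as "differential" otherwise.

TOL = 1.0  # m/z tolerance

def shared_and_differential(frac_a, frac_b):
    """frac_a, frac_b: dict {fraction index: list of m/z values}."""
    shared, differential = [], []
    for k, peaks in frac_a.items():
        candidates = frac_b.get(k - 1, []) + frac_b.get(k, []) + frac_b.get(k + 1, [])
        for mz in peaks:
            if any(abs(mz - other) <= TOL for other in candidates):
                shared.append((k, mz))
            else:
                differential.append((k, mz))
    return shared, differential

# Hypothetical fractions; the m/z values are placeholders, not sequenced ligands.
b2704 = {10: [1139.6, 1280.7], 11: [961.5]}
b2706 = {10: [1140.1], 11: [960.9, 1061.5]}

shared, diff = shared_and_differential(b2704, b2706)
print("shared:", shared)       # [(10, 1139.6), (11, 961.5)]
print("differential:", diff)   # [(10, 1280.7)] -> candidate B*2704-specific peak
```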
When the total intensity of a given ion peak was more than 10 times higher in one molecule than in the other molecule, the corresponding peptide was assigned as a quantitative difference. Of 218 peptides compared using these criteria, 15 (7%) predominated in B*2704, and 13 (6%) predominated in B*2706. This result suggests that, in addition to determining differential binding of some peptides, B*2704/B*2706 polymorphism also influences the amount of bound peptide in at least an additional 13% of the shared ligands. The size of B*2704- and B*2706-bound peptide ligands showed a very similar Gaussian distribution, with a mean peptide mass ([M + H]+) of 1139 and 1128 Da, respectively (Table I). However, the size distribution of peptides differentially bound to each subtype showed some significant differences: whereas in the lower molecular mass range (850-1,100 Da), B*2706-specific peptides predominated over B*2704-bound peptides, the opposite was observed in the higher molecular mass range (Fig. 1). Thus, the mean molecular mass ([M + H]+) of B*2704 and B*2706 peptide differences was 1,280 and 1,200 Da, respectively. The 80-Da difference is compatible with a slightly longer length of peptides differentially bound to B*2704 and/or bulkier amino acid side chains at some position(s).

Peptides with Arg3 Residues Are Disfavored in B*2704 and Suitable for B*2706-P3 is an important anchor position for peptide binding to HLA-B27, second only to P2 and PΩ (22,24,38). To confirm the differential suitability of basic P3 residues in B*2704 and B*2706 suggested by peptide sequencing, three B*2706-specific ligands with Arg3 and peptide analogs containing Val3 or Phe3 were tested for binding to B*2704 and B*2706 in an epitope stabilization assay using RMA-S transfectant cells. For all three ligands, substitution of Val3 or Phe3 for Arg3 improved binding to B*2704, but not to B*2706 (Fig. 3; Table II). The magnitude of the effect on B*2704 was somewhat variable depending on each particular ligand, presumably reflecting the contribution of other anchor positions in each peptide. Phe3 was slightly disfavored in B*2706, relative to Arg or Val in this position. These results indicate that Arg3 is disfavored in B*2704, but not in B*2706, relative to nonpolar residues.

High Allospecific Epitope Sharing between B*2704 and B*2706-Because most alloreactive CTLs recognize peptides naturally bound to the alloantigen molecule, another way to assess the overlap between the peptides bound to B*2704 and B*2706 was to test their cross-reactivity with allospecific CTLs. Thus, 56 CTL clones were raised from five unrelated HLA-B27-negative donors against B*2704 and tested for their recognition of B*2706-C1R target cells (Table III). Of the CTLs tested, 77% cross-reacted totally (>60% relative lysis) or partially (30-60% relative lysis) with B*2706, whereas 23% of the CTLs showed little or no cross-reaction (<30% relative lysis) with this allotype. The results correlate well with the peptide overlap estimated by direct biochemical analysis. In addition, they suggest that most of the shared ligands between B*2704 and B*2706 are antigenically similar in both contexts, as assessed with allospecific CTLs.

DISCUSSION

Previous studies from our laboratory, based largely on pool sequencing (23), showed that both B*2704 and B*2706 bind peptides with aliphatic or aromatic C-terminal residues. However, whereas B*2704 could accept C-terminal Tyr, B*2706 specificity was restricted to C-terminal aliphatic and Phe residues.
This was confirmed in the present study by showing that all shared ligands between both subtypes had C-terminal aliphatic or Phe residues, whereas B*2704-specific ligands had Tyr or Arg. The presence of Arg as a C-terminal peptide motif of B*2704 had gone undetected in previous studies, probably because it is present in a small proportion (in our study, in 3 of 35 sequenced ligands) of the B*2704 peptide repertoire. However, peptides with C-terminal Arg accounted for 33% (3 of 9) of the natural ligands differentially bound to B*2704. Previous sequencing studies also failed to detect any other differential peptide motif between B*2704 and B*2706 but suggested an increased frequency of Lys 3 and Lys 7 among B*2706 ligands. In vitro binding studies using poly(Ala) peptide analogs also revealed that these two subtypes differed in their P3 residue specificity, with better acceptance of basic P3 residues by B*2706 (24). However the effect of this modulation on subtype-bound peptide repertoires in vivo could not be assessed from these studies. Our results now clearly establish that B*2706 polymorphism positively selects for peptides with basic P3 and/or P7 residues. Thus, of the five sequenced B*2706-specific peptides, three had basic residues at both P3 and P7, and all five had a basic residue in at least one of these two positions. Among shared ligands, only 5 of 25 peptides had a basic P3 or P7 residue, and none had basic residues at both positions. Similarly, only two of the B*2704-specific peptides had a basic P7 residue, and none had a basic residue at P3. These differences presumably account for the bigger mean size of B*2704-specific peptides. For instance, the mean residual mass of the C-terminal residues was about 47 Da higher for the nine sequenced B*2704-specific peptides (three with Arg and six with Tyr) than for the five sequenced B*2706-specific ones (all with Leu or Ile). Thus, our data do not support the possibility that B*2704 might have some specific preference for unusually long peptides. Indeed, eight of the nine characterized B*2704 ligands absent from B*2706 had the canonic length of major histocompatibility complex class I ligands: 9 or 10 amino acid residues. However, the mean molecular mass of these differentially bound peptides (1,249 Da) was 188 Da higher than that of the five sequenced peptides differentially bound by B*2706 (1,061 Da). The differential binding of a 13-mer to B*2704 can be explained just by the presence of a C-terminal Arg, which is disfavored in B*2706, rather than by differential size preferences. Recent evidence indicates that B*2704 and B*2706 do not differ from B*2705 or from each other in their tapasin dependence for peptide binding, 2 thus supporting the view that B*2704 does not have a particular preference for suboptimal peptide ligands. The molecular basis for the restrictions in peptide specificity imposed by B*2706 polymorphism can be deduced from previous crystallographic and peptide binding studies. The greatly increased preference for nonpolar C-terminal residues is explained by the loss of an acidic charge in the F pocket as a consequence of the D116Y change in B*2706. In particular, C-terminal Leu was greatly favored over Arg or Tyr for in vitro binding of peptide analogs to a B*2705 mutant carrying the D116Y mutation (29). Different mutations at this same position in HLA-B27 also increased the preference for nonpolar C-terminal residues (39 -41). 
In contrast, introducing acidic charges by the H114D mutation in B*2706 has a rather moderate effect on C-terminal residue specificity (29). However, residue 114 takes part in both the D and E pockets, which bind P3 and P7 residues, respectively, an observation that easily explains the increased allowance for basic residues at these positions in B*2706. This bias does not impair binding of other residues at these positions due to the plasticity of interactions in these pockets, conferred in part through involvement of water molecules (42).

Several questions concerning the peptide specificity of B*2704 and B*2706 may be relevant to the differential association of these subtypes with AS. First, how do the differences in residue specificity translate into the degree of overlap of the peptide repertoires? Second, what structural features of B*2704 ligands impair binding to B*2706? Third, do the ligands common to B*2704 and B*2706 maintain their antigenic features in the context of both subtypes? The observation that B*2706 binds in vivo about 90% of the B*2704-bound peptide repertoire suggests that putative arthritogenic peptides may be confined to a relatively small portion of B*2704 ligands. Our study indicated that the major feature of B*2704 ligands that impairs binding to B*2706 is the presence of C-terminal Arg or Tyr. In contrast, peptides with basic residues at both P3 and P7 do not bind B*2704, despite appropriate P2 and PΩ motifs. Moreover, some peptides with C-terminal Leu and a basic residue at only P3 or P7 did not bind B*2704, revealing an additional contribution of other residues. An important issue when trying to correlate peptide specificity with disease association is whether shared ligands between subtypes differentially associated with AS maintain their antigenic features in the context of both subtypes. It is possible that particular peptides can be differentially recognized by CTLs when presented by either B*2704 or B*2706. However, the level of cross-reaction of alloreactive CTLs raised against B*2704 with B*2706 (77%) was only about 13% lower than the overlap of peptide repertoires, suggesting that a majority of the shared ligands maintain their antigenic features on both subtypes. It was reported previously that B*2707, a disease-associated subtype, was unable to bind peptides with C-terminal Tyr (43), which questioned the importance of this motif for determining susceptibility to AS. This conclusion was based on the presence of Tyr116 in B*2707, the absence of a C-terminal Tyr motif by pool sequencing, and limited sequencing of individual ligands. However, none of these features per se rule out the possibility that some peptides containing C-terminal Tyr may bind B*2707. Much more extensive sequencing of individual B*2707 ligands would be required to assess this issue. In conclusion, this study allowed us to correlate the lack of or low association of B*2706 with AS to the failure of this allotype to bind a relatively small portion of the peptide repertoire bound by the structurally closest disease-associated allotype B*2704 and to determine the major structural features of the differentially bound peptides. No other known structural or functional feature of B*2704 and B*2706 can be correlated with differential association of these subtypes with AS.

2 A. W. Purcell, personal communication.
2018-04-03T05:06:26.011Z
2002-05-10T00:00:00.000
{ "year": 2002, "sha1": "2e403c944ca33426365def8b85784c3e3fc19839", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/277/19/16744.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "7f94722f713118ed68ad22dbb803f4725bad28ac", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
254532200
pes2o/s2orc
v3-fos-license
Cardiovascular disease risk factors in autistic adults: The impact of sleep quality and antipsychotic medication use Approximately 40% of American adults are affected by cardiovascular disease (CVD) risk factors (e.g., high blood pressure, high cholesterol, diabetes, and overweight or obesity), and risk among autistic adults may be even higher. Mechanisms underlying the high prevalence of CVD risk factors in autistic people may include known correlates of CVD risk factors in other groups, including high levels of perceived stress, poor sleep quality, and antipsychotic medication use. A sample of 545 autistic adults without intellectual disability aged 18+ were recruited through the Simons Foundation Powering Autism Research, Research Match. Multiple linear regression models examined the association between key independent variables (self‐reported perceived stress, sleep quality, and antipsychotic medication use) and CVD risk factors, controlling for demographic variables (age, sex assigned at birth, race, low‐income status, autistic traits). Overall, 73.2% of autistic adults in our sample had an overweight/obesity classification, 45.3% had high cholesterol, 39.4% had high blood pressure, and 10.3% had diabetes. Older age, male sex assigned at birth, and poorer sleep quality were associated with a higher number of CVD risk factors. Using antipsychotic medications was associated with an increased likelihood of having diabetes. Poorer sleep quality was associated with an increased likelihood of having an overweight/obesity classification. Self‐reported CVD risk factors are highly prevalent among autistic adults. Both improving sleep quality and closely monitoring CVD risk factors among autistic adults who use antipsychotic medications have the potential to reduce risk for CVD. INTRODUCTION Cardiovascular disease (CVD) is the leading preventable cause of morbidity and mortality in the Unites States (Benjamin et al., 2017) and globally (Kaptoge et al., 2019;Roth et al., 2018), and we have reason to believe that autistic people are disproportionately affected and at risk (Bishop-Fitzpatrick, Movaghar, et al., 2018;Croen et al., 2015;Hirvikoski et al., 2016;Mandell, 2018). The annual direct and indirect cost of CVD in the United States is $316.1 billion, and it accounts for a greater proportion of total health expenditures than any other category of diagnoses (Benjamin et al., 2017). Combined, CVD and its risk factors (e.g., high blood pressure, high cholesterol, diabetes, and overweight or obesity) affect approximately 41.5% of Americans (Benjamin et al., 2017), and CVD is responsible for nearly 18 million annual deaths worldwide (Roth et al., 2018). Multiple groups have identified high prevalence of CVD and its risk factors in several population-based and convenience samples of autistic people (Bishop-Fitzpatrick, Movaghar, et al., 2018;Croen et al., 2015;Hirvikoski et al., 2016). For instance, Croen et al. (2015) found that 37% of primarily young autistic adults, compared to 23% of primarily young non-autistic adults have CVD. Bishop-Fitzpatrick and colleagues (2018) also found high prevalence of CVD in a primarily middle aged and older adult sample: both valvular disease (27% compared to 16%) and congestive heart failure (37% compared to 26%) were elevated in autistic people compared to non-autistic people. In a national sample of Medicare beneficiaries aged 65+, Hand et al. 
(2020) found elevated prevalence of heart disease (54% compared to 37%) and cerebrovascular disease (12% compared to 8%) in autistic older adults compared to nonautistic older adults. In a study by Bishop-Fitzpatrick and Rubenstein (2019), prevalence of CVD (49.0%) and its risk factors (46.2%) were elevated among middle aged and older autistic Medicaid beneficiaries. These figures from previous research described above, all of which are derived from representative claims datasets of autistic adults in the United States, suggest that the prevalence of CVD is elevated in autistic people. These findings are echoed by recent work that assesses the prevalence of self-reported non-communicable physical health conditions (including CVD) among a convenience sample of autistic adults who reside primarily in the United Kingdom and finds increased risk for CVD in autistic adults, although percentages were not reported (Weir et al., 2021(Weir et al., , 2022. Indeed, CVD is highly prevalent and accounts for the largest attributable fraction of deaths in autistic people (Hirvikoski et al., 2016;Mandell, 2018). Although a growing body of research suggests that CVD and its primary risk factors are elevated in autistic people, limited research has focused specifically on potential mechanisms underlying this high prevalence of CVD or its risk factors among autistic people. A mechanistic understanding of the high prevalence of associated conditions like CVD in autistic people is necessary to develop primary and secondary prevention strategies that have the potential to extend life expectancy and improve quality of life . In the general population, social, psychosocial, and lifestyle determinants of health impact risk for CVD by increasing risk for primary CVD risk factors, including high blood pressure, high cholesterol, diabetes, and overweight or obesity (Dimsdale, 2008;Giurgescu et al., 2019;Mozaffarian et al., 2008;Richardson et al., 2012). Modifying social, psychosocial, and lifestyle factors that are associated with increased prevalence of primary CVD risk factors may be most important for primary prevention (Mozaffarian et al., 2008), and it is imperative to understand how autistic people are affected by both primary and secondary risk factors. Research in non-autistic adults suggests that both poorer sleep quality and higher levels of perceived stress are associated with increased CVD risk factors, and both are elevated in autistic relative to non-autistic adults. A recent meta-analysis that reviewed data from 23 studies that included 118,696 adults in the general population found that high levels of perceived stress increased risk of incident coronary heart disease by 27% (Richardson et al., 2012). Autistic adults experience higher levels of perceived stress than non-autistic adults (Bishop-Fitzpatrick, DaWalt, et al., 2017;Bishop-Fitzpatrick, Mazefsky, et al., 2018a;Hirvikoski & Blomqvist, 2015), which may increase risk for CVD in autistic people . A 2011 metaanalysis that reviewed data from 15 studies that included 474,684 adults from the general population found that both short and long sleep duration-both markers of poor sleep quality-increased risk for coronary heart disease mortality by 48% and 38%, respectively (Cappuccio et al., 2011). A growing body of evidence suggests that sleep quality is poorer in autistic adults compared to nonautistic adults (Baker & Richdale, 2015;Hohn et al., 2019;Jovevska et al., 2020;McLean et al., 2021). 
Like autistic people, people with serious mental illness-including schizophrenia, schizoaffective disorders, and mood disorders with psychotic features-are also at risk of early mortality and have heightened prevalence of CVD and its risk factors (Walker et al., 2015). Although the mechanisms underlying this high prevalence of CVD and its risk factors in people with serious mental illness are multifactorial, one factor that increases risk of CVD and its risk factors for this population is antipsychotic medication use. A body of research suggests that antipsychotic medications may, both directly and indirectly, increase risk for developing CVD risk factors (Kahl et al., 2018;Kovacs & Arora, 2008;Mwebe & Roberts, 2019;Walker et al., 2015). Second-generation antipsychotics, specifically, have the potential to increase the risk of developing CVD risk factors such as overweight and obesity, high cholesterol, diabetes, and metabolic syndrome (American Diabetes Association, American Psychiatric Association, American Association of Clinical Endocrinologists, & Obesity, 2004;Saari, 2004). Antipsychotics are often used to treat irritability and emotion dysregulation in autistic people. The median international prevalence of antipsychotic drug use in autistic adults among studies included in a recent systematic review with predominantly adult samples is high (42.8% with a range from 28.7% to 55.6%; Jobski et al., 2017) and accordingly, we have reason to believe that autistic adults may experience the downstream cardiovascular effects of these elevated levels of antipsychotic medication use. Our goal was to fill these gaps in the literature by investigating the prevalence of self-reported CVD risk factors and testing the association between known predictors of CVD risk factors in the general population and in people with serious mental illness in autistic adults. Specifically, we aimed to: (1) describe self-reported CVD risk factors in a sample of autistic adults without intellectual disability; and (2) test the associations between CVD risk factors and antipsychotic medication use, sleep quality, and perceived stress. We hypothesized that autistic adults would have a high prevalence of CVD risk factors and that antipsychotic medication use, poor sleep quality, and high perceived stress would be associated with elevated CVD risk factors in autistic adults. Data and sample We recruited participants aged 18+ through the Simons Foundation Powering Autism Research (SPARK; (Feliciano et al., 2018)) Research Match service, and we compensated participants $25 for completing a series of questionnaires. Data for this study came from a broader study of adult development, and participants were recruited to the broader adult development study, not a study focused specifically on CVD risk factors. The study was approved by The George Washington University institutional review board, and all participants provided informed consent. A more in-depth description of the broader SPARK study can be found in Feliciano et al. (2018). For this study, we included only data from the 545 autistic adults with complete data on medication use, out of a total possible sample of 899. We chose to use listwise deletion instead of multiple imputation because of the likely inaccuracy of imputed medication use data and the likelihood that medication use data are not missing at random. 
The autistic adults in our sample were mostly assigned female sex at birth (N = 350; 64.2%) and white (N = 437; 80.2%), and they ranged in age from 18 to 77 years (Mean = 41.0 years, SD = 13.45). Participants were geographically representative: based on U.S. Census Bureau designations, 31.6% (N = 172) were from the South, 26.5% (N = 145) were from the West, 23.9% (N = 130) were from the Midwest, and 16.0% (N = 87) were from the Northeast. A minority of participants (N = 11) did not provide geographic information. The geographic distribution of participants in our study mirrors the distribution of the US population (U.S. Census Bureau, 2021). All autistic adults in our sample were legally able to provide informed consent (did not have a legal guardian), and no autistic adults in our sample reported a cooccurring intellectual disability on their health history questionnaire. Like all participants in the SPARK registry, participants in this study self-disclosed a professional autism spectrum disorder diagnosis, which is highly likely given that SPARK partners with specialty autism clinics throughout the United States for recruitment (Feliciano et al., 2018). A recent study provides added validity for this supposition-98.8% of participants in a large subsample of SPARK participants had a confirmed ASD diagnosis as ascertained via electronic medical records (Fombonne et al., 2022). Finally, and consistent with participants' self-disclosed autism diagnoses, more than 95% of autistic adults included in our sample scored above the screening cutoff (>65) for autism spectrum disorder on the Autism Spectrum Quotient (AQ)-28 (Fombonne et al., 2022;Hoekstra et al., 2011). Demographic information We collected self-reported demographic data on age, race, income, sex assigned at birth, and autistic traits as measured using the AQ-28 (Hoekstra et al., 2011). For analyses, we dichotomized sex (male or female), race (white or non-white), and income (low-income versus not, as defined by household income less than $20,000 per year; approximately 150% of the U.S. federal poverty guideline for a single adult). Race, sex, and income were included in analyses due to the body of literature that suggests that women, people from minoritized racial groups (in the United States, people who are not white), and people who are low-income have a higher prevalence of CVD risk factors (Clark et al., 2009;McWilliams et al., 2009;Mosca et al., 2011;O'Neil et al., 2018;Thomas et al., 2005). Cardiovascular disease risk factors Participants self-reported the presence or absence of a history of hypertension, high cholesterol, and diabetes. We assessed the presence or absence of overweight and obesity by calculating body mass index (BMI) based on self-reported height and weight, with overweight or obesity indicated as scores greater than or equal to 25 using the age-and sex-based norms developed by the Centers for Disease Control and Prevention for adults younger than age 65. For autistic adults who were 65 years or older, overweight was indicated by a score greater than or equal to 31 given BMI guidelines for older adults; there is no obesity category for older adults (Winter et al., 2014). For the purpose of analyses, we collapsed "overweight" and "obesity" into a single category because there is no "obesity" category for older adults. 
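The overweight/obesity indicator used in the risk-factor counts is a simple age-dependent BMI rule. A minimal sketch of that classification is given below; it assumes metric inputs for illustration and uses hypothetical variable names, not the study's analysis code.

```python
# Minimal sketch of the overweight/obesity indicator described above:
# BMI from self-reported height and weight, with an age-dependent cutoff
# (>= 25 for adults under 65; >= 31 for adults 65 and older), collapsed
# into a single overweight/obesity category. Metric units assumed here.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / (height_m ** 2)

def overweight_or_obese(weight_kg: float, height_m: float, age: int) -> bool:
    cutoff = 31.0 if age >= 65 else 25.0
    return bmi(weight_kg, height_m) >= cutoff

print(overweight_or_obese(82.0, 1.70, age=40))  # True  (BMI ~ 28.4, cutoff 25)
print(overweight_or_obese(82.0, 1.70, age=70))  # False (below the 31 cutoff for 65+)
```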
We computed a summary score of CVD risk factors that ranged from 0 to 4, with a score of 0 indicating the presence of no CVD risk factors and a score of 4 indicating the presence of hypertension, high cholesterol, diabetes, and obesity/overweight. Antipsychotic medication use We asked participants to report any medications that they were taking at the time of data collection in a free text field. If participants typed into the free text field that they were taking too many medications to list or they typed that they preferred not to say which medications they were taking, we listed their current medications as unknown and excluded them from the sample used for this analysis. We then broadly classified medications using MedlinePlus. Because of the known association between antipsychotic medications and some cardiovascular disease risk factors, we created a "antipsychotic medication use" variable coded as yes (2) or no (1) for typical (e.g., haloperidol and loxapine) or atypical (e.g., clozapine and risperidone) antipsychotics. Perceived stress We measured self-reported perceived stress using the Perceived Stress Scale , a 10-item scale rated on a 5-point Likert scale where higher scores indicate greater perceived stress. Questions that assess perceived stress include: "In the last month, how often have you been upset because of something that happened unexpectedly?"; "In the last month, how often have you felt nervous and 'stressed'?"; "In the last month, how often have you found that you could not cope with all the things that you had to do?"; and "In the last month, how often have you been angered because of things that were outside of your control?", among others. Cronbach's alpha reliability ranges from 0.78 to 0.91 in numerous national surveys in non-autistic people (Cohen & Janicki-Deverts, 2012;Cohen & Williamson, 1988), and research has found strong reliability in both autistic adults without intellectual disability (α = 0.87; Bishop-Fitzpatrick, Mazefsky, et al., 2018) and autistic adults with and without intellectual disability (α = 0.76; Hong et al., 2016). In the current study, the internal consistency of the PSS is strong (α = 0.89; McQuaid et al., 2022). Sleep quality We measured self-reported sleep quality using the Pittsburgh Sleep Quality Index (PSQI; Buysse et al., 1989) which provides indices across seven domains. For this study, we used the global sleep quality domain. Questions included in this domain are: "When have you usually gone to bed?"; "How long has it taken you to fall asleep at night?"; "During the past month, how often have you had trouble staying awake while driving, eating meals, or engaging in social activity?"; and "During the past month, how much of a problem has it been for you to keep up enthusiasm to get things done?". The PSQI has strong internal consistency (α = 0.83) in non-autistic adults (Buysse et al., 1989), and adequate internal consistency (both α = 0.68) in two recent samples of autistic adults without intellectual disability (Baker & Richdale, 2015;McLean et al., 2021). In the current study, the internal consistency of the PSQI is strong (α = 0.74). Analyses We first conducted preliminary analyses to ensure that parametric tests were appropriate. Next, we summarized prevalence of CVD risk factors using descriptive statistics. We tested differences in CVD risk factors based on sex assigned at birth using chi-square tests. 
We then used multiple linear regression to examine the associations between our key independent variables (perceived stress, global sleep quality, and antipsychotic medication use) and number of CVD risk factors, controlling for demographic variables (age, sex assigned at birth, race, lowincome status, autistic traits). We chose to use multiple linear regression rather than ordinal logistic regression because our CVD risk factors variable violates the assumption of proportional odds because CVD risk factors are sometimes cumulative (e.g., metabolic syndrome), and individuals therefore do not have the same odds of having one CVD risk factor compared to four CVD risk factors. We generated standardized regression coefficients (β) as an effect size metric to assess the strength of these associations (Nieminen et al., 2013). Finally, a series of four exploratory logistic regression models tested associations between key independent variables included in Model 1 and individual CVD risk factors (hypertension, high blood pressure, diabetes, and overweight or obesity), controlling for the same demographic characteristics used in Model 1. RESULTS Descriptive findings and differences based on sex assigned at birth Overall, 73.2% (N = 399) of autistic adults in our sample had co-occurring overweight or obesity status (BMI ≥25), while 45.3% (N = 247) had high cholesterol, 39.4% (N = 215) had high blood pressure, and 10.3% (N = 56) had diabetes. The average BMI in our sample was 31.70 (SD = 9.32) for autistic women and 30.00 (SD = 7.23) for autistic men, although autistic men and autistic women did not differ significantly on BMI. Within the overweight or obesity category, 44.6% (N = 243) had a BMI score consistent with obesity status while 28.6% (N = 156) had a BMI consistent with overweight status. Only 2.9% (N = 16) of participants in our sample had a BMI consistent with underweight status. In terms of number of cardiovascular risk factors, only about one eighth of autistic adults (12.7%; N = 69) had no CVD risk factors, while 33.4% (N = 182) of autistic adults had one CVD risk factor, 31.4% (N = 171) had two CVD risk factors, 18.2% (N = 99) had three CVD risk factors, and 4.4% (N = 24) had all four CVD risk factors. Autistic men were more likely than autistic women to have high blood pressure (χ 2 = 15.17, p < 0.001), but autistic men and women had similar rates of high cholesterol, diabetes, and overweight or obesity, even though autistic men had a higher average number of CVD risk factors than autistic women (t = À2.65, p = 0.004) and autistic women had BMI scores that were higher, on average, compared to autistic men (t = 2.209, p = 0.014). A minority of autistic adults in our sample (15.00%; N = 82) reported taking one or more antipsychotic medications. Autistic women (17.4%; N = 61) were significantly more likely than autistic men (10.8%; N = 21) to report antipsychotic medication use (χ 2 = 3.35, p = 0.037). Autistic women had significantly higher levels of perceived stress (M = 24.14, SD = 7.00) compared to autistic men (M = 21.83, SD = 7.38; t = 3.62, p < 0.001), and autistic women also had significantly poorer sleep quality (M = 10.41, SD = 4.36) compared to autistic men (M = 8.67, SD = 4.39; t = 4.46, p < 0.001). Both autistic men and autistic women experience high levels of perceived stress and poor sleep quality based on standardized norms in the general population for the PSS (Cohen & Janicki-Deverts, 2012) and PSQI (Buysse et al., 1989), respectively. 
Of note, 86.5% (N = 475) of autistic adults in our sample exceeded a cutoff score of 5 on the PSQI, which is indicative of possible sleep disorders (Buysse et al., 1989). Descriptive findings are detailed in Table 1.

Prediction of number of CVD risk factors

Results of our multiple linear regression analysis (Table 2) revealed a significant, positive association between sleep quality and number of CVD risk factors, β = 0.123, p = 0.011, sr² = 0.011, when controlling for age, sex, race, low-income status, autistic traits, perceived stress, and antipsychotic medication use, such that better sleep quality (indicated by a lower score on the PSQI) was associated with fewer CVD risk factors. Older age was associated with a greater number of CVD risk factors, β = 0.286, p < 0.001, sr² = 0.078, when controlling for sex, race, low-income status, autistic traits, perceived stress, sleep quality, and antipsychotic medication use. There was a negative association between sex assigned at birth and CVD risk factors, β = −0.123, p = 0.003, sr² = 0.014, such that women had fewer CVD risk factors than men, when controlling for age, race, low-income status, autistic traits, perceived stress, sleep quality, and antipsychotic medication use. Race, low-income status, autistic traits, perceived stress, and antipsychotic medication use were not significantly associated with number of CVD risk factors.

Diabetes

Results of our exploratory logistic regression models (Table 3) indicate that autistic adults who took antipsychotic medications had a 106.5% increased likelihood of having diabetes, B = 0.73, χ²(1) = 4.14, p = 0.04, exp(B) = 2.065. The likelihood of having diabetes was increased by 3.6% for each additional year of age, B = 0.04, χ²(1) = 10.33, p < 0.001, exp(B) = 1.036. Participants who were classified as having a low income had an 88.9% increased likelihood of having diabetes compared to participants who were not classified as having a low income, B = 0.64, χ²(1) = 4.39, p = 0.04, exp(B) = 1.889. Race, sex, autistic traits, perceived stress, and sleep quality were not significantly associated with diabetes.

High cholesterol

The likelihood of having high cholesterol was increased by 6.7% for each additional year of age, B = 0.07, χ²(1) = 68.53, p < 0.001, exp(B) = 1.067. Sex, race, low-income status, autistic traits, antipsychotic use, sleep quality, and perceived stress were not associated with the likelihood of high cholesterol.

High blood pressure

The likelihood of having high blood pressure was increased by 2.8% for each additional year of age, B = 0.03, χ²(1) = 15.33, p < 0.001, exp(B) = 1.028. Autistic men were 119.5% more likely than autistic women to have high blood pressure, B = 0.79, χ²(1) = 16.19, p < 0.001, exp(B) = 2.195. Finally, a one-unit increase in autism symptomatology was associated with a 2.2% increase in the likelihood of having high blood pressure, B = 0.02, χ²(1) = 5.87, p = 0.02, exp(B) = 1.022. Race, low-income status, perceived stress, sleep quality, and antipsychotic use were not associated with the likelihood of having high blood pressure.

Overweight or obesity

Each additional unit of poorer sleep quality was associated with a 6.5% increase in the likelihood that an autistic adult's BMI was in the range of overweight or obesity, B = 0.06, χ²(1) = 5.82, p = 0.02, exp(B) = 1.065. Age, sex, race, low-income status, autistic traits, perceived stress, and antipsychotic use were not significantly associated with overweight or obesity.
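The percent-likelihood statements above follow directly from the logistic regression coefficients: the odds ratio is exp(B), and the percent change in odds is (exp(B) − 1) × 100. A minimal sketch reproducing the reported conversions (the B values are taken to more decimal places than the rounded figures in the text so that the printed odds ratios match):

```python
import math

# Minimal sketch: convert logistic regression coefficients (B) to odds ratios
# and percent changes in odds, as reported above. The coefficients below are
# unrounded versions consistent with the odds ratios given in the text.

def pct_change_in_odds(B: float) -> float:
    return (math.exp(B) - 1.0) * 100.0

for label, B in [("antipsychotic use -> diabetes", 0.725),
                 ("each additional year of age -> diabetes", 0.0354),
                 ("male sex -> high blood pressure", 0.786)]:
    print(f"{label}: OR = {math.exp(B):.3f}, +{pct_change_in_odds(B):.1f}% odds")

# antipsychotic use -> diabetes: OR = 2.065, +106.5% odds
# each additional year of age -> diabetes: OR = 1.036, +3.6% odds
# male sex -> high blood pressure: OR = 2.195, +119.5% odds
```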
DISCUSSION

In this study we aimed to describe and identify correlates of CVD risk factors in a large sample of autistic adults. Our analyses demonstrated that CVD risk factors were highly prevalent in autistic adults; 476 of the 545 autistic adults had one or more CVD risk factors, with overweight or obesity being the most common CVD risk factor among our sample of autistic adults aged 18-77. Older age, male sex assigned at birth, and poorer sleep quality were associated with a higher number of CVD risk factors. Using antipsychotic medications was associated with an increased likelihood of having diabetes, but not other CVD risk factors. Poorer sleep quality was associated with an increased likelihood of having a BMI that was in the overweight or obesity range. Our hypothesis that higher levels of perceived stress would be associated with a greater number of CVD risk factors was not supported.

Self-reported CVD risk factor prevalence

Our study identified a high prevalence of self-reported CVD risk factors in autistic adults. In non-autistic adults, a study using population-level data found that 46.5% have one or more CVD risk factors (Fryar et al., 2012). We found that 87.3% of autistic adults in our sample self-reported a diagnosis of at least one CVD risk factor (diabetes, high cholesterol, high blood pressure, or overweight or obesity), while 4.4% of autistic adults in our sample reported all four. Within our sample, 73.2% had BMI scores in a range associated with overweight or obesity, 45.3% had high cholesterol, 39.4% had high blood pressure, and 10.3% had diabetes. The rate of any CVD risk factor in the current study is notably higher than previously reported figures among autistic people using claims data. Bishop-Fitzpatrick and Rubenstein (2019) found that 46.2% of autistic adults had Medicaid claims for at least one CVD risk factor, and Croen et al. (2015) found in their sample of autistic adults that 33.9% had obesity, 22.8% had dyslipidemia, 25.6% had hypertension, and 7.6% had diabetes. There are several potential reasons why we identified a higher prevalence of CVD risk factors in our survey sample compared to population-level samples. Although administrative data are generally considered the gold standard for identifying CVD and its risk factors (Psaty et al., 2016), self-reported CVD risk factor data generally agree with administrative data at >80% in large, population-level studies in the general population (Muhajarine et al., 1997; Newell et al., 1999; Robinson et al., 1997; Tisnado et al., 2006; Yasaitis et al., 2015). However, the agreement between self-reported health data and electronic health record (EHR) or claims data has not been specifically tested in autistic adults. It is possible that system- or provider-level factors, such as diagnostic overshadowing or lack of provider knowledge about autism, that drive disparities in receipt of high-quality healthcare among autistic adults (Nicolaidis et al., 2015) are associated with underdiagnosis of CVD risk factors in autistic adults or underreporting of diagnosed CVD risk factors in the EHRs or insurance claims of autistic adults.
Neurodiversity-related discrimination may lead providers to not test autistic adults for comorbidities because of perceptions about the difficulty of diagnostic procedures (e.g., blood draws needed to test for diabetes and dyslipidemia) and/or to treat autistic adults' autism diagnosis rather than the health conditions that they present with (Bishop-Fitzpatrick & Kind, 2017; Bishop-Fitzpatrick, Movaghar, et al., 2018; Nicolaidis et al., 2015). The lack of training for healthcare professionals focused on how to treat and interact with autistic adults may contribute to underdiagnosis of co-occurring conditions (Nicolaidis et al., 2015). Finally, participants in our survey sample may have been motivated to participate in a study about adult development and aging because they are experiencing aging-related health changes such as CVD. It is likely that the true prevalence of CVD risk factors lies in between the estimates of prevalence reported in this study and others, although the concordance between claims and self-reports for CVD and its risk factors in the general population is relatively high (Tisnado et al., 2006). Taken together, data from this study and previous studies suggest that CVD risk factors are highly prevalent in autistic adults, pointing towards the need to increase the level of focus in autism research on CVD disparities in autistic people, as well as to increase awareness among general practitioners to screen for CVD in autistic people, particularly as they age.

The association between sleep quality and CVD risk factors

We found that poorer sleep quality was associated with both a higher total number of CVD risk factors and an increased likelihood of having a BMI in the range of overweight or obesity. The link between sleep quality and CVD risk factors is well established: reduced sleep quality has been linked in meta-analytic studies with increased risk for both metabolic syndrome and coronary heart disease (Cappuccio et al., 2011; Lian et al., 2019), and our study suggests that this association also exists among autistic adults. Notably, the odds ratios identified in our study were within the range identified by a recent meta-analysis of predominantly general-population adults (Lian et al., 2019), suggesting that the magnitude of association between sleep quality and cardiovascular disease risk factors is similar. This is concerning because autistic adults, overall, report very poor sleep quality (Baker & Richdale, 2015; McLean et al., 2021), and we could expect that a larger proportion of autistic adults compared to adults in the general population will develop CVD risk factors if the correlational link between poor sleep quality and increased risk of CVD risk factors identified by this study is causal.

The association between antipsychotic medication use and CVD risk factors

Among our sample of autistic adults, we found that using antipsychotic medications was associated with having diabetes but was not associated with number of CVD risk factors or with overweight and obesity, high cholesterol, or high blood pressure.
Although our subsample of 82 autistic adults who used antipsychotic medications represented only 15.0% of our sample, and we may therefore have been underpowered to detect associations, this study provides preliminary evidence suggesting that antipsychotic medication use is associated with an increased likelihood of diabetes among autistic adults. Future research that leverages larger samples of autistic adults who use antipsychotic medications will help us to determine whether antipsychotic use is associated with increased risk for overweight and obesity or high blood pressure, as it is in adults with serious mental illness who use antipsychotic medications (American Diabetes Association et al., 2004; Kahl et al., 2018; Kovacs & Arora, 2008; Mwebe & Roberts, 2019; Saari, 2004; Walker et al., 2015).

The association between perceived stress and CVD risk factors

Counter to our hypotheses, perceived stress was not associated with the presence of self-reported CVD risk factors. It is possible that the link between perceived stress and CVD identified within the general population (Richardson et al., 2012) simply does not exist among autistic adults. The fact that autistic adults have, overall, very high levels of perceived stress (Bishop-Fitzpatrick, Mazefsky, et al., 2018; Bishop-Fitzpatrick, Minshew, Mazefsky, & Eack, 2017b; Hirvikoski & Blomqvist, 2015) may have attenuated the association between high levels of perceived stress and the presence of CVD risk factors. It is also possible that the relatively high correlation between perceived stress and sleep quality (r = 0.49), which is below the r = 0.70 cutoff for multicollinearity and thus does not violate regression assumptions, indicates that poorer sleep quality results from high levels of perceived stress, representing a causal pathway that could be tested in future longitudinal work.

Limitations

Our findings should be interpreted within the context of several limitations. First, our sample is a convenience sample that included a greater proportion of women than men. This documented sex ratio is reflective of other online studies of autistic adults (Rubenstein & Furnier, 2021). Furthermore, the sample did not include any autistic adults with co-occurring intellectual disability; therefore, it is not representative of the full population of autistic adults. Participants were not recruited to this study for the purpose of studying CVD risk factors in autistic adults (rather, adult development more broadly). Thus, our findings are not representative of the full population of autistic adults. However, this study does include a large proportion of participants who represent understudied groups within the autistic community, specifically women and middle-aged and older adults. Second, our primary outcome variables (CVD risk factors) were self-reported and not confirmed by a medical professional. This may have led to error in terms of either over-reporting or, particularly, under-reporting of CVD risk factors. Third, other variables that may be relevant to CVD risk factor development or CVD itself, such as smoking, alcohol consumption, physical activity, and diet, were unmeasured. The diagnosed prevalence of CVD or metabolic syndrome was also unmeasured in our sample. Although this study provides preliminary data that can inform future research on CVD in autistic adults, future research should specifically study the emergence of CVD to fully understand the mechanisms underlying CVD emergence.
Fourth, although our hypotheses framed perceived stress, sleep quality, and antipsychotic medication use as predictors of CVD risk factors, this study's design precluded a test of causal mechanisms underlying CVD risk factors in autistic adults given that data were cross-sectional. Future studies that investigate mechanisms underlying CVD in autistic adults should use longitudinal methods and include clinical diagnosis and/or confirmation of CVD and its risk factors. These future studies should also investigate historical antipsychotic use, dosage, and duration of antipsychotic use in order to disentangle the mechanisms driving the link between antipsychotic use and CVD risk factors in autistic adults.

CONCLUSIONS

This study found that self-reported CVD risk factors are highly prevalent among autistic adults. It also found that poorer sleep quality was associated with an increased number of CVD risk factors and with an increased likelihood of overweight/obesity, while using antipsychotic medications was associated with an increased likelihood of diabetes. These findings suggest that CVD risk factors are a major contributor to the risk of premature mortality among autistic adults and deserve increased attention in both clinical work and research (Mandell, 2018). Importantly, sleep quality and antipsychotic medication use are both mutable factors that can be altered with targeted intervention. Improving sleep quality and carefully monitoring CVD risk factors among autistic adults who take antipsychotic medications both have the potential to improve the quality of autistic adults' health and lives.
2022-12-11T16:07:30.564Z
2022-12-09T00:00:00.000
{ "year": 2022, "sha1": "10c89b680959c2454b5ec285f22c9ec02244801c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1002/aur.2872", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "1dd992876923bee7b4228f62ecca99c7fea14895", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
261296563
pes2o/s2orc
v3-fos-license
The technical approach of using mobile positioning data to support urban population size monitoring

This paper summarizes the methods and approaches of using mobile positioning data to estimate and monitor urban population size. It starts with the necessity of using big data to monitor urban population size in territorial spatial planning. Then it elaborates on the difference between the definition of "population size" reflected by mobile positioning data and the common concept of urban population size, and the necessity of verifying the logic and measurement of sample expansion at four levels. Finally, taking Wuhan city as a case, this paper proposes the technical approach of monitoring the size of the urban permanent population through multi-source data verification. The study finds that when it comes to monitoring urban population size, mobile positioning data have the advantage of monitoring short-period changes in population size and spatial distribution, yet special attention must be paid to the three technical links of definition, sample expansion and verification.

Population size has remained a basic topic in the field of urban and rural planning, and it is also one of the most important data for the reference of territorial spatial planning. Planning of land use, public services, infrastructure and other fields is all underpinned by population size. With the establishment of the territorial spatial planning system, the size of the permanent population and other major indicators have been included in the monitoring indicator system for the implementation of territorial spatial planning (Ministry of Natural Resources of the People's Republic of China [MNR], 2019). The past four decades have witnessed massive population migration from rural to urban areas and from small and medium-sized cities to large and mega ones resulting from rapid urbanization. This has posed great challenges to the traditional demographic approach due to the large scale and high frequency of migration and the difficulty in estimating its temporal and spatial distribution. The national census is the most accurate demographic approach in China, but it is only conducted once every ten years. Between two censuses, the results are obtained through sample surveys of 1‰ of the population year by year, so it is difficult to monitor massive and highly frequent population flows in a comprehensive and efficient way. And the longer the interval from the previous census, the greater the possibility of errors. Taking Shanghai as an example, after the Sixth National Census, the permanent population at the end of 2009 was corrected from 19,213,200 to 22,102,800, a difference of 2,889,600. (According to the Shanghai Municipal Bureau of Statistics, the city's permanent population was 19,213,200 in 2009, whereas the Sixth National Census conducted in 2010 found a permanent population of 23,020,000, a difference of nearly four million; the figure for the end of 2009 was therefore revised to 22,102,800.)
With the development of information and communication technology, especially the increasingly high availability rate of the mobile Internet, mobile positioning data such as mobile phone signaling data and mobile Internet positioning data have been used for rural and urban planning, playing an effective role in such areas as regional and urban spatial structures (Niu et al., 2014; Wang et al., 2017), urban transportation (Zhang, 2016), the job-housing spatial relationship (Niu & Ding, 2015; Song et al., 2019), the urban center system (Ding et al., 2016), and the provision of facilities and services (Niu & Li, 2019; Niu et al., 2019). As mobile terminal devices such as mobile phones are widely used and easy to carry around, real-time positioning of such devices provides a way to dynamically monitor the size and distribution of urban and rural populations. Mobile positioning data has thus been frequently used to estimate population size. Over the years, however, there has been an interesting phenomenon: no papers on the application of mobile positioning data to estimate population size have been published in peer-reviewed academic journals, while media outlets have covered many stories about how mobile positioning data are used to estimate the permanent population in several super-cities and mega-cities. This suggests that formidable technical problems must be solved before mobile positioning data can be used to measure population size. The news coverage, which requires no peer review, has neither been recognized by academia nor applied to planning practice. The technical approach of using mobile positioning data to measure permanent population needs to be identified before these data can be applied to the dynamic monitoring of population size.

As an important part of the territorial spatial planning system, smart territorial spatial planning consists of major links such as the dynamic monitoring of planning based on big data technology, and relevant requirements have been included in the technical documents of territorial spatial planning (MNR, 2019). Therefore, it is necessary to conduct a systematic discussion on the technical obstacles of using mobile positioning data to monitor population size, identify current progress, difficulties and feasible solutions, and forecast future technology trends. Starting from the characteristics of mobile positioning data as the data source, this paper discusses the differences in the definition of "population size" reflected by mobile positioning data, focuses on the technical aspects of sample expansion and inspection, and defines the scope of technology application by sorting out the technical approaches of using mobile positioning data to support the monitoring of urban population size.

1 Consistency in the definition of population size

Definition of urban permanent population

Permanent population is an important statistical indicator in the current national census of China and is defined as "the population actually living in a place regularly for six months or longer."
This definition focuses on people's real living needs and gives particular attention to the social context of rapid urbanization and mass population migration. Regarded as the most stable population size indicator, it provides more effective guidance on resource allocation in housing, infrastructure, environmental protection, healthcare, sports, and cultural and recreational facilities based on people's livelihood needs (Shi et al., 2018). The definition is used in both territorial spatial planning and urban and rural planning.

The application of mobile positioning data to measure population size is based on big data with positioning labels left by mobile communication devices when they are connected to the mobile communication network or mobile Internet. It intuitively records the specific spatial location of a device user at a certain point. Through a range of temporal and spatial records of the device in use, it partly restores the life traces of the user, thereby identifying the social attributes of the user and estimating the total population meeting specific conditions. There are two challenges from device identification to population estimation. Firstly, one device is not necessarily associated with one person. A person may have more than one mobile phone, and some may not have their own mobile phones due to their financial status, lifestyle, age, etc. Therefore, the number of users identified is not equivalent to the population. Secondly, how people use their devices does not necessarily reflect how they live their lives. Despite the increasingly important role of mobile phones and other electronic devices in our daily life, devices used for special purposes are used at intervals and cannot cover the whole process of our life. The loss of some location records may disrupt the identification of device users. For example, it is impossible to identify a user's residence when his or her mobile phone is shut down at night. Even in an ideal world where "one person corresponds to one device," the way people use their devices still cannot be regarded as the same as the way they live their lives.

Therefore, when measuring the permanent population size using mobile positioning data, it is necessary to convert the life behavior logic of "actually living regularly for six months or longer" into the computing logic of device use behavior, so as to sort out the users meeting the definition. In addition, it is necessary to perform a sample expansion from the number of users to the total permanent population of cities.
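To make this conversion from living logic to device-use logic concrete, the sketch below takes, for each night, the cell where a device dwells longest during the nighttime window and counts a device toward a city's permanent population when that nighttime location falls within the city on more than half of the observed nights. The 9:00 p.m.-7:00 a.m. window and the 50%-of-days rule follow the description given later in this paper; the function and field names are hypothetical, not the algorithm actually used by the operators.

```python
from collections import Counter

def nightly_location(records):
    """records: list of (cell_id, dwell_minutes) observed between 21:00 and 07:00."""
    dwell = Counter()
    for cell_id, minutes in records:
        dwell[cell_id] += minutes
    # The nightly location is the cell with the longest total dwell time.
    return dwell.most_common(1)[0][0] if dwell else None

def is_permanent_resident(nights, city_cells, min_share=0.5):
    """nights: {date: records for that night}; city_cells: set of cell ids in the city."""
    locations = {d: nightly_location(r) for d, r in nights.items()}
    locations = {d: c for d, c in locations.items() if c is not None}
    if not locations:
        return False
    in_city = sum(1 for c in locations.values() if c in city_cells)
    # Permanent resident if the nightly location is in the city on >50% of observed nights.
    return in_city / len(locations) > min_share

# Usage: is_permanent_resident(nights_of_one_device, wuhan_cell_ids)
```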
Mobile positioning data-based measurement of "permanent population" must comply with the definition of "urban population size"

Long time series for the measurement of permanent population

A long time series is a basic requirement for using mobile positioning data to identify the permanent population, for two reasons. Firstly, mobile positioning data measure population size and identify its location based on the long-term use of devices. The population type and location of a user are measured by calculating the spatial and temporal trajectory of the device over many days and nights. For example, a person's residence can be estimated by identifying the place where he or she stays for the longest time between 9:00 p.m. and 7:00 a.m., when people generally rest at home. Occasional nights out will disturb the measurement of residence in the case of a short time series, while their influence can be eliminated by calculating the repetition rate, thus improving the accuracy of estimation in the case of a long time series. Secondly, given the definition of permanent population, "[people] living [in a place] regularly for six months or longer," the urban population calculated from data for one to two weeks tends to be affected by short-term migration. The result will be inaccurate if the population visiting the place on a temporary basis, for business, travel, etc., is included. In contrast, long-period data can improve the accuracy of identifying the permanent population by setting a longer time limit. For example, the condition can be set as living in the place for more than 50% of the days during a period of six consecutive months. It can be seen that the length of the time series has a significant impact on the results. Even where data sources are difficult to obtain, short time series data shall not be used as the basis for measuring the permanent population. The time period of data collection should be as long as possible, even if six months is impossible. A reasonable and effective alternative is stratified sampling: when it is difficult to obtain long-term data on a continuous basis, the impact of contingency can be reduced and the reliability of measurement improved by sampling a number of days per month during the six-month period.

Continuous positioning records for the measurement of permanent population

Continuous positioning can help improve the accuracy of permanent population identification supported by mobile positioning data. Positioning data are generated by the use of devices. The number and interval of positioning records, however, are different every day because devices have different positioning frequencies. In an ideal world, the positioning of a device occurs continuously and evenly over one day (24 h). Such continuity ensures that daily positioning records restore people's spatial and temporal trajectory as truly as possible, approximating their life logic (Fig. 1a). However, it is impossible to restore a complete life trajectory using existing mobile positioning data, considering that the continuity and evenness of daily positioning records cannot be guaranteed (Fig. 1b).
When the positioning records of a day appear in clusters in certain periods, it means that the part of the life trajectory without the use of devices has not been recorded, which makes it difficult to restore the complete spatial and temporal behavior chain of device users, resulting in deviations in the identification of the permanent population. Therefore, despite the difficulties in generating and acquiring data sources, data cannot simply be used regardless of their low frequency of collection. Instead, the daily positioning records should be distributed as continuously and evenly as possible over one day (24 h) by improving the algorithm and other means.

The necessity of sample expansion

Long time series and continuous positioning are only prerequisites for using mobile positioning data to identify the permanent population. On the basis of a consistent definition, sample expansion is also required for the application of mobile positioning data to measure the urban permanent population. Unlike applications such as the identification of regional spatial structures and the establishment of the urban center system, which are based on relative population values (Ding et al., 2016; Wang et al., 2017), the measurement of population size describes the absolute value of the total population, for which sample expansion is a must. Moreover, sample expansion is still needed even for full-sample data. Taking mobile phone signaling data as an example, all the data collected from the three major operators can be regarded as full-sample data of mobile phone users, but the full sample of devices is not equivalent to that of the population, because some people do not use mobile phones and one person may have more than one phone. Therefore, full-sample data is not necessary. In fact, the adoption of data from multiple operators is mainly for the purpose of making comparisons, not for skipping sample expansion. When it comes to the measurement of population size, the full sample still needs to be expanded.

Sample expansion at four levels from number of devices to population size

Now that the necessity of sample expansion has been confirmed, the specific process of sample expansion requires further examination. It is a popular practice in the industry to calculate the population simply using the number of devices and a sample expansion coefficient (population after sample expansion = number of devices / K). However, such a simple practice ignores the complexity of sample expansion coefficients and makes it hard to spot or review any errors. To improve the accuracy and reliability of sample expansion, it is necessary to clarify the sample expansions at different levels. Specifically, there are four levels of sample expansion from the number of devices to the population size. Taking mobile phone signaling data as an example, the sample expansion at four levels covers five P values and four K values (Fig. 2). In the data pre-processing stage, "the number of devices that identify permanent residence (P4)" is calculated. Using the mobile positioning data obtained through long time series and continuous positioning, a proper algorithm can be developed to ensure accurate identification.
The calculation of "the number of active devices (P3)" at the first level (K3) aims to restore a considerable number of devices that are used irregularly, such as those that are turned off at night and therefore cannot identify a permanent residence. In fact, with a precise definition of active devices, P3 can also be directly calculated via big data through restrictive rules on time thresholds. The necessity of calculating P4 lies in the fact that it can identify the spatial locations of the permanent population and allow verification by comparison with conventional statistical data for administrative spatial units.

The calculation of "the total number of devices (P2) of a particular operator" at the second level (K2) aims to restore a considerable number of inactive devices. It is difficult to identify population characteristics by obtaining regular spatial and temporal trajectories of those devices, as they are rarely used. Even if each operator were able to count the total number of cards issued in a city, the mobility of mobile users and the use of cards in other cities, which is especially common after the cancellation of roaming charges, make it difficult to count the total number of devices that are actually used locally.

The calculation of "the total number of users of all operators (P1)" at the third level (K1) aims to convert the number of devices to the number of people. Consideration should be given not only to the market share of one operator, but also to the possibility of one person using multiple devices. The latter includes two scenarios: 1) when the multiple devices possessed by one person belong to the same operator, they can be combined by an appropriate location algorithm based on the data of their spatial and temporal trajectories; 2) when they belong to different operators, calculations can only be made through inter-network communication and other means. By comparing the total amount of contact with each operator, the market share can be estimated at the same time.

The calculation of "the number of urban permanent population (P0)" at the fourth level (K0) aims to restore the considerable part of the population that does not use mobile communication devices. Despite the popularity of mobile phones today, a considerable part of the population, including the elderly, infants and children, does not or is unable to use them, and there is no proper method to count the number of these people for the time being.

In conclusion, among the above four levels of sample expansion, P4 and P3 can be accurately calculated based on the continuous spatial and temporal positioning of mobile phone signaling data, hence the value of K3. K2, K1 and K0, however, are very uncertain and may present huge differences due to regional economic development level and social and cultural characteristics. Their value selection poses technical difficulties in the application of mobile positioning data to calculate the permanent population. Any error in the coefficients, however small, will have a much bigger impact on the results. Therefore, the expansion coefficients at the four levels are the greatest challenge in measuring the size of the permanent population.
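Written out, the four-level chain can be sketched as follows. This assumes, as a simplification of the industry convention "population after sample expansion = number of devices / K" quoted above, that each level applies its own coefficient; the coefficient values in the example are placeholders, not estimates from this study.

```python
# Hedged sketch of the P4 -> P3 -> P2 -> P1 -> P0 expansion chain.
def expand(p4_devices, k3, k2, k1, k0):
    p3 = p4_devices / k3   # restore irregularly used devices (e.g. switched off at night)
    p2 = p3 / k2           # restore inactive devices of the same operator
    p1 = p2 / k1           # all operators combined, merging multi-device users
    p0 = p1 / k0           # add residents who do not use mobile devices at all
    return p3, p2, p1, p0

# Example with made-up coefficients: any small error in a K value propagates
# multiplicatively, which is why the authors stress verification rather than
# trusting a single set of coefficients.
print(expand(8_000_000, k3=0.95, k2=0.90, k1=0.85, k0=0.92))
```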
The necessity of verification

As mentioned above, errors in the identification of the permanent population will be caused directly if the calculation based on mobile positioning data is not conducted in strict compliance with the definition of population adopted in the traditional approach. Such errors will be amplified if the complexity of the sample expansion system and the differences between geographical spaces are not taken into account, exerting a profound impact on the accuracy and reliability of population measurement. Without an established process of sample expansion, checking the results against other data is an effective way to improve the reliability of measurement. There are two approaches available. One is checking the big data against field survey sampling data, that is, comparing the census results of a small number of typical areas with the results of the big data measurement. This approach is undoubtedly reliable and effective if the sampling of the typical areas is conducted in a strict and appropriate way. The problem is that even small-scale censuses are time-consuming. The other is comparing the estimation results of mobile positioning data from two different sources to see whether the population changes and spatial characteristics reflected by them are identical. On this basis, taking conventional statistical data as a reference, it can be determined whether the trends and characteristics jointly reflected by multi-source data are consistent with the social and economic status quo. The advantage of this approach is that it requires less work. The key lies in the use of appropriate means to analyze and compare the multi-source data.

Case overview

Based on the case of Wuhan City, this study explores a feasible approach to cross-inspect and measure the permanent population based on multi-source data. In the case of Wuhan, there are three sets of data on its permanent population, one from the statistical yearbook and the other two obtained from mobile positioning data sources A and B (Table 1). The statistical yearbook published by the Wuhan Bureau of Statistics, using the traditional demographic method, shows that the permanent population of Wuhan was 11,081,000 at the end of 2018. The latter two, which take the first six months of 2019 as the calculation period, show that the permanent population of Wuhan is 11,310,000 and 13,340,000, respectively, a difference of more than two million.

In conclusion, there are significant differences between the sample expansion estimates based on mobile positioning data from different sources, and the big data-based measurements also differ from the traditional statistical data, but the spatial distribution patterns of the three are highly similar. Therefore, comparing, inspecting and revising the three sets of data in each spatial unit is a proper way to improve the reliability of the big data-based measurement of the permanent population in the absence of accurately calculated expansion coefficients.
Inspection of change ratio, over-expectation ratio and difference ratio

(1) Change ratio. The change ratio is the ratio of the big data-based measurement to the published statistical data, which directly reveals the spatial difference in population distribution between the big data observation and the statistical yearbook. Transverse comparison shows that the change ratios of the two data sources present a similar ranking across districts, which suggests that the population changes measured by the different data sources are basically the same, and that for both sources the values differ strikingly between the "head" and the "tail". The change ratio in the head section is quite high, which means that the big data-based measurement is significantly higher than the traditional statistical result; that may be caused by an increase in the permanent population or in device users. The head section, which includes Wuhan Donghu New Technology Development Zone, Wuhan Donghu Ecological Scenic Spot, Dongxihu District, and Hanyang District, mainly covers the suburban area. On the contrary, the tail section, which includes Qingshan District (the Chemical Industry Area), Xinzhou District, Huangpi District and Caidian District, mainly covers the outer suburbs of the city (Fig. 3). In addition, the change ratio of the two kinds of big data in Jiang'an District is closest to 1, i.e., the big data-based measurement is the closest to that in the statistical yearbook. The population value of the statistical yearbook can be regarded as an expected value extrapolated from the census year and based on the pattern of past population changes. The value of the big data-based sample expansion is calculated from the actual number of devices, the general behavior characteristics of device use, and the relationship between devices and people. The area with the change ratio closest to 1 can be seen as having the most stable population size and age structure, with the population size estimated by big data closest to the expected value.

(2) Over-expectation ratio. The over-expectation ratio is the ratio of the results measured by big data to the expected values extrapolated from the published statistics. Taking the change ratio of Jiang'an District, which is closest to 1, as the reference for population change, the expected growth coefficients of the two data sources are 1.04 and 1.18, respectively. The expected value of the permanent population is obtained by multiplying the published statistical data by the expected growth coefficient. In the "middle" section, which includes Jiangxia District, Wuhan Economic and Technological Development Zone, Hongshan District, Jiang'an District, Wuchang District, Jianghan District and Qiaokou District, the sample expansion results are all within ±30% of the expected values and the difference in the over-expectation ratios of the two data sources is within ±0.08. This indicates that the measurements calculated from the two kinds of big data in the seven districts are in line with theoretical expectations with a small difference, making it difficult to decide which measurement is more reliable. Meanwhile, there is some ambiguity in the classification of the two spatial units Jiangxia District and Qiaokou District, which lie next to the dividing lines as the first and the last of the "middle" section.
(3) Difference ratio. The difference ratio is the ratio of the measurements from the two kinds of big data, directly reflecting the difference between them. Ideally, the ratio should remain in a stable range. By comparing the sections above, it is found that the difference ratio of the "middle" section is basically within ±5% of the mean value, which is regarded as an acceptable interval for the two kinds of calculations. In comparison, the difference ratio is high in Jiangxia District and low in Qiaokou District, i.e., the calculation results of the two data sources differ greatly in these districts and counties. Furthermore, the difference ratio is low in Hanyang District and Dongxihu District in the "head" section, similar to the spatial units in the "tail" section. In other words, there is some ambiguity in the classification of Hanyang District and Dongxihu District.

Measurement of permanent population

The cross-inspection results of the three indicators show significant differentiation between the head and the tail, as well as some spatial distribution characteristics. Considering the general rule of urban development, in the process of rapid urbanization, regional central cities and mega-cities tend to attract a large migrant population, who normally work in manufacturing and gather in the periphery and outskirts of the downtown area, which explains why the population calculated through big data for the "head" section is far larger than that in the statistical yearbook. Also, for these migrant workers belonging to the active age group, the availability rate of smartphones and other electronic devices is higher than the average level across all age groups, which leads to the underestimation of K0 and K2 and a higher change ratio. The lower measurement should be taken, as the higher one is more likely to contain errors. On the contrary, the core of the downtown area and the outer suburbs tend to suffer population outflow in the process of urban renewal and development, which explains why the population measured by big data in the above "tail" area is far less than that in the statistical yearbook. Also, the degree of aging in those two areas is higher than the average level of the city because they attract few migrant permanent residents, which leads to a low availability rate of smart electronic devices, hence the overestimation of the sample expansion coefficient. Therefore, the higher measurement should be selected. To sum up, the number of permanent devices calculated from the two sets of big data and the sample expansion coefficients for the total population underlie the different change ratios for the head and the tail. The approach of head-tail combination is therefore adopted, in which a more credible sample expansion value is selected from the big data calculations for the "head", "middle" and "tail" (see the sketch after this paragraph). Specifically, first, for the "head" spatial statistical units with a higher over-expectation ratio, the calculation result with the lower over-expectation ratio is selected; on the contrary, the calculation result with the higher over-expectation ratio is selected for the "tail". Second, for the "middle" spatial statistical units with very close over-expectation ratios, the sample expansion result closer to the data in the statistical yearbook is selected. Third, if it is difficult to determine which calculation result is more reliable for a spatial statistical unit, a mean value is recommended.
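One possible way to compute the three ratios and apply the head/middle/tail selection rule is sketched below. It is a simplified, hypothetical illustration: the ±30% and ±5% tolerances echo the ranges mentioned above, the head/tail classification is collapsed into a single threshold test, and the inputs are placeholder figures rather than the Wuhan data.

```python
def cross_inspect(yearbook, est_a, est_b, growth_a, growth_b, tol=0.30):
    """yearbook/est_a/est_b: dicts keyed by district; growth_*: expected growth coefficients."""
    chosen = {}
    for d in yearbook:
        change_a = est_a[d] / yearbook[d]              # change ratio, source A
        change_b = est_b[d] / yearbook[d]              # change ratio, source B
        over_a = est_a[d] / (yearbook[d] * growth_a)   # over-expectation ratio, source A
        over_b = est_b[d] / (yearbook[d] * growth_b)   # over-expectation ratio, source B
        diff = est_a[d] / est_b[d]                     # difference ratio between the sources
        print(d, round(change_a, 2), round(change_b, 2), round(diff, 2))
        if over_a > 1 + tol and over_b > 1 + tol:      # "head": take the lower estimate
            chosen[d] = min(est_a[d], est_b[d])
        elif over_a < 1 - tol and over_b < 1 - tol:    # "tail": take the higher estimate
            chosen[d] = max(est_a[d], est_b[d])
        elif abs(diff - 1) <= 0.05:                    # "middle": take the value closest to the yearbook
            chosen[d] = min((est_a[d], est_b[d]), key=lambda v: abs(v - yearbook[d]))
        else:                                          # ambiguous: take the mean
            chosen[d] = (est_a[d] + est_b[d]) / 2
    return chosen, sum(chosen.values())

# Example with placeholder figures (thousands of residents) for three districts.
yb = {"A": 900, "B": 700, "C": 500}
a = {"A": 1450, "B": 720, "C": 320}
b = {"A": 1400, "B": 760, "C": 330}
print(cross_inspect(yb, a, b, growth_a=1.04, growth_b=1.18))
```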
According to the principle of "taking the lower for the head, the higher for the tail, and the average for the uncertain" in deciding the sample expansion value, different scenarios of value determination are set up, giving a value range for the permanent population of Wuhan between 11,980,000 and 12,310,000, with a recommended value of 12,090,000.

Case Summary

The "permanent population" of Wuhan in the conventional sense is calculated through the big data under the logic of "the number of people who reside in Wuhan city at night for more than 50% of the days during a period of six consecutive months, where the residence at night is the place where they stay the longest between 9:00 p.m. and 7:00 a.m. the next day." In the absence of accurate sample expansion coefficients, traditional statistical yearbook data are used to check the sample expansion results based on mobile positioning data from different sources. The units with large differences and those with blurred classification are identified through the change ratio, the over-expectation ratio, and the difference ratio, based on which a more credible value is selected. The value range and the recommended value of the permanent population are eventually obtained from the calculations of the different scenarios. During this process, the verification of multi-source data is an indispensable part, which helps improve the accuracy and reliability of mobile positioning data in measuring the permanent population.

Conclusion

The application of mobile positioning data to measure the permanent population is an effective tool for monitoring and evaluating the implementation of territorial spatial planning. In super-cities and mega-cities with large population inflows, mobile positioning data is especially suitable for monitoring the size of the permanent population when monitoring the implementation of territorial spatial planning. Compared with the conventional demographic approach, mobile positioning data has three advantages in monitoring the urban permanent population: one, it is suitable for dynamic monitoring due to the short update cycle; two, it is convenient and efficient, and costs relatively little; three, it can monitor not only the size of the permanent population, but also changes in the spatial distribution of the population. In the application of mobile positioning data to measure the size of the urban permanent population, the technical approach includes three parts: definition, sample expansion and verification. To begin with, the measurement of "permanent population" by mobile positioning data must comply with the definition of "urban population size". The consistency of the definition is the premise of the measurement. Secondly, sample expansion includes four levels, from the number of devices to the size of the permanent population. The sample expansion coefficients at the four levels are the key difficulty in measuring the permanent population. Finally, verifying the measurement obtained via mobile positioning data is an essential step. Verification guarantees the measurement accuracy of the permanent population.
The cross-inspection of measurements obtained from multi-source mobile positioning data proposed in this paper compares the results achieved through two data sources with the population in the statistical yearbook. By cross-inspecting the change ratio, the over-expectation ratio and the difference ratio, the value range and the recommended value of the permanent population are estimated through the head-tail combination method. Since there is no way to accurately determine the sample expansion coefficients at the four levels, cross-inspection of multi-source data can be used to measure the permanent population of mega-cities with large population inflows and to monitor the implementation of territorial spatial planning.

Discussion and Prospect

Firstly, artificial intelligence (AI)-based algorithms can be applied to sample expansion. Nowadays, technical difficulties are seen in the three technical links of definition, sample expansion and inspection, of which sample expansion is the most challenging. Technical breakthroughs are required to address the challenges in sample expansion and determine the sample expansion coefficients at the four levels. The values of the three key coefficients, K0, K1 and K2, depend not only on the type of data source but also vary across cities. Right now, using machine learning and other AI technologies to capture the behavior characteristics of device users seems a possible way to identify the values of K0, K1 and K2, but the relevant technologies are still under exploration.

Secondly, the use of mobile positioning data to measure the permanent population should not be an obsessive pursuit of an accurate total number. Given the difficulties in the relevant technical approaches, only an interval value of the permanent population can be achieved through sample expansion and verification. Therefore, when applying mobile positioning data to measure the urban permanent population, instead of focusing on the accurate total number, more attention should be paid to changes in the size and spatial distribution of the population.

Thirdly, as for the prospects for the application of mobile positioning data to measure the permanent population, it should be stressed that the national census conducted every ten years is the most detailed and accurate method of obtaining population data, and no big data-based measurement can be as detailed and accurate. Mobile positioning data cannot replace the traditional demographic approach, as the latter has its own justification and application scope. It is appropriate to use mobile positioning data to measure the permanent population in the years between two censuses; and for cities with large-scale population inflow and outflow, mobile positioning data can be used to improve the accuracy of population measurement.

Fig. 1 Continuous positioning records (illustrated by the author): a) equal-interval continuous positioning; b) unequal-interval positioning.
Fig. 2 Diagram of sample expansion logic for using mobile positioning data to measure permanent population (illustrated by the author).
Fig. 3 Spatial distribution patterns of different change ratios at the head and tail (illustrated by the author): (a) change ratio of data source A; (b) change ratio of data source B.
Table 1 Cross-inspection calculation results of multi-source data-based measurement of permanent population.
2023-08-30T14:01:15.885Z
2023-08-30T00:00:00.000
{ "year": 2023, "sha1": "6ef1c9d5a0c43956030059a0fd3d7c8ec9de08d0", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s44243-023-00013-y.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "d6a37554d35f862e03b60c9192e1871c99e99037", "s2fieldsofstudy": [ "Geography", "Computer Science", "Engineering" ], "extfieldsofstudy": [] }
9248817
pes2o/s2orc
v3-fos-license
Statins intake and risk of liver cancer

Supplemental Digital Content is available in the text.

Introduction

Liver cancer is the fifth most common cancer worldwide in men and the sixth most common cancer worldwide in women, and it imposes a substantial burden on patients, caregivers, and society. [1] The etiology of liver cancer involves both genetic and environmental factors. According to American Cancer Association statistics, liver cancer mortality has gradually increased, with a relative survival rate of 18%. [2] Based on cancer registry data available in China, the age-standardized 5-year relative survival for liver cancer was 10.1% in 2015. [3] These data reveal the poor prognosis of liver cancer, and thus preventing the occurrence of liver cancer is essential. Previous studies have shown that statins have chemopreventive potential in liver cancer. [4] Statins are inhibitors of 3-hydroxy-3-methylglutaryl coenzyme A reductase, a key enzyme in the rate-limiting step of cholesterol synthesis. [5] Statins are widely prescribed in the primary and secondary prevention of heart attack, stroke, and cardiovascular disease. [6] Recently, statin use has been reported to have a promising anticancer effect, [7] and statin monotherapy could potentially reduce any-organ and colorectal cancer-related mortality. [8,9] Additionally, statin use has been found to be associated with decreased risks of hepatocellular carcinoma, [10] pancreatic cancer, [11] prostate cancer, [12] gastric cancer, [13] colorectal cancer, [14] and breast cancer. [15] Several meta-analyses of randomized controlled trials have examined the relationship between statin use and risk of liver cancer and have found that statin use significantly reduces liver cancer risk. [16][17][18] However, there is a lack of studies quantitatively assessing statin use in relation to liver cancer. Thus, we conducted a dose-response meta-analysis to clarify and quantitatively assess the association between statin use and risk of liver cancer.

Methods

Our meta-analysis was conducted according to the Meta-analysis Of Observational Studies in Epidemiology (MOOSE) checklist. [19] There are no ethical issues involved in our study, as our data were based on published studies.

Search strategy

We included eligible studies investigating the relationship between statin intake and liver cancer. To develop a flexible, nonlinear meta-regression model, we required that an eligible study should have categorized statin use into 3 or more levels. If multiple publications were available for a study, we included the one with the longest follow-up. PubMed and EMBASE were searched for studies published up to February 2017, with keywords including "liver cancer" OR "hepatocellular" OR "hepatic" OR "intrahepatic" AND "statin." We also reviewed relevant original articles and commentaries to identify further relevant research. Eligible studies were also identified through the reference lists of relevant review articles. The search strategy is shown in detail in supplementary list S1, http://links.lww.com/MD/B785.

Study selection

Two independent researchers (CY and ZS) assessed the information regarding the correlation between statin use and liver cancer: the outcome had to be liver cancer, and the relative risks (RR) had to be reported for at least 3 quantitative categories. Moreover, we excluded nonhuman studies, reviews, meta-analyses, editorials, and published letters.
To ensure the correct identification of qualified research, the 2 researchers read the reports independently, and disagreements were resolved through consensus among all of the researchers.

Data extraction

Each eligible article's information was extracted by 2 independent researchers (MW and YC). We extracted the following information: first author; publication year; mean age; country; study name; sex; cases and participants; the categories of statin use; and RR or odds ratio (OR). We collected the multivariable-adjusted risk estimates. [20] Quality assessment was performed according to the Newcastle-Ottawa scale for nonrandomized studies. [21]

Statistical analysis

We pooled RR estimates as the common measure of the association between statin use and liver cancer risk; the hazard ratio was considered equivalent to the RR. [22] Any results stratified by different subgroups of statin use and liver cancer risk in a single article were treated as 2 separate reports. Due to the different cut-off points for categories in the included studies, we estimated the RR with 95% confidence intervals (CI) for an increase of 50 cumulative defined daily doses per year using the method recommended by Greenland, Longnecker, Orsini and colleagues. [23] The dose of statin intake was taken as the median statin intake of each category. If the median statin intake of a category was not available, the midpoint of the upper and lower boundaries was considered as the dose of that category. In addition, restricted cubic splines (RCS) were used to evaluate the nonlinear association between statin intake and liver cancer risk, with 3 knots at the 10th, 50th, and 90th percentiles of the distribution. A flexible meta-regression based on the RCS function was used to fit the potential nonlinear trend, and the generalized least-squares method was used to estimate the parameters. [21] This procedure treats statin use (continuous data) as the independent variable and the logRR of disease as the dependent variable, with both tails of the curve restricted to linear. A P value for nonlinearity is calculated by testing the null hypothesis that the coefficient of the second spline is equal to zero. [23] STATA software 12.0 (STATA Corp, College Station, TX) was used to evaluate the relationships between statin use and liver cancer risk. The Q test and I² statistic were used to assess heterogeneity among studies. The random-effects model was chosen if P_Q < .10 or I² > 50%; otherwise, the fixed-effect model was applied. Begg and Egger tests were done to assess publication bias. P < .05 was considered significant for all tests.

Figure 1 shows the results of the literature search and selection. We identified 2601 articles from PubMed and 3723 articles from EMBASE. After exclusion of duplicates and studies that did not fulfill the inclusion criteria, 6 studies were chosen, [24][25][26][27][28][29] their data were extracted, and a total of 6 report datasets were included in the final meta-analysis. These studies were published up to February 2017.

Study characteristics

The characteristics of the included studies are shown in Tables 1 and 2. Among the selected studies were 4 cohort studies and 2 case-control studies; 2 studies were conducted in Caucasian populations and 4 in Asia. A total of 118,961 participants with 9530 incident cases were included in this meta-analysis.

Overall meta-analysis

The results for statin use and the risk of liver cancer are shown in Table 3.
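Before turning to the pooled results, the pooling step described in the statistical analysis above can be made concrete with a short sketch: inverse-variance pooling of log relative risks, with the DerSimonian-Laird estimate of between-study variance used when I² > 50% or P_Q < .10. The study-level RRs and confidence intervals in the example call are placeholders, not the extracted data, and the code is an illustrative reimplementation rather than the Stata routines actually used.

```python
import numpy as np
from scipy import stats

def pool(rr, lcl, ucl):
    y = np.log(rr)                                   # log relative risks
    se = (np.log(ucl) - np.log(lcl)) / (2 * 1.96)    # SE recovered from the 95% CI
    w = 1 / se**2                                    # fixed-effect (inverse-variance) weights
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100                # I^2 heterogeneity statistic
    p_q = stats.chi2.sf(q, df)                       # P value of the Q test
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # DerSimonian-Laird
    w_re = 1 / (se**2 + tau2) if (i2 > 50 or p_q < 0.10) else w
    est = np.sum(w_re * y) / np.sum(w_re)
    se_est = np.sqrt(1 / np.sum(w_re))
    return np.exp(est), np.exp(est - 1.96 * se_est), np.exp(est + 1.96 * se_est), i2, p_q

# Placeholder study-level estimates (RR, lower CI, upper CI), not the real data.
print(pool(np.array([0.40, 0.55, 0.70, 0.35, 0.60, 0.50]),
           np.array([0.25, 0.40, 0.55, 0.20, 0.45, 0.30]),
           np.array([0.64, 0.76, 0.89, 0.61, 0.80, 0.83])))
```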
The pooled results suggest that statin use is significantly associated with a reduced liver cancer risk when the highest category of use is compared with the lowest (RR = 0.46; 95% CI: 0.24-0.68; P < .001) (Table 3). We found evidence of between-study heterogeneity (I² = 91.8%, P < .001) but observed no evidence of publication bias (Egger asymmetry test, P = .063) (Table S1, http://links.lww.com/MD/B785).

Dose-response meta-analysis between statin intake and liver cancer

Using the RCS function, the test for a nonlinear dose-response relationship was significant (likelihood ratio test, P < .001), suggesting curvature in the relationship; an increase of 50 cumulative defined daily doses per year was associated with an approximately 14% reduction in liver cancer risk.

(Table 2: Outcomes and covariates of included studies of statin intake in relation to risk of liver cancer.)

Subgroup analyses

Subgroup analysis was performed to check the stability of the primary outcome (Table 3). Subgroup analyses based on study location found a similar risk reduction for liver cancer in Asia (OR = 0.44, 95% CI: 0.11-0.77, P < .001) and in Caucasian populations (OR = 0.49, 95% CI: 0.36-0.61, P < .001) (Table 3). The relationship between statin use and liver cancer risk was also similar in subgroup analyses defined by study design, number of cases or participants, and study quality. An increment of 50 cumulative defined daily doses per year significantly decreased the liver cancer risk in all of the categories.

Publication bias

Each study in this meta-analysis was evaluated for publication bias by both the Begg funnel plot and the Egger test. P > .05 was considered to indicate no publication bias. The results show that no obvious evidence of publication bias was found in the associations between statin use and liver cancer risk (supplementary Table S1, http://links.lww.com/MD/B785). A funnel plot for publication bias assessment is illustrated in supplementary Figure S1, http://links.lww.com/MD/B785.

Discussion

Statins are the most commonly used prescription drugs for the treatment of dyslipidemia. Recently, there has been interest in a possible protective effect of statins on cancer risk, [30] and statin use has been reported to have a promising anticancer effect. Statins may also have cytostatic effects that extend the survival of cancer patients. [31] Statins are inhibitors of 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase; they bind to the active site of HMG-CoA reductase and block its activity, thus inhibiting mevalonate synthesis and several downstream products of the mevalonate pathway. [7] The main downstream targets are the Ras and Rho family proteins, plus some GTP-binding proteins such as Rab, Rac, and Ral. The main function of the Rho family proteins is to coordinate the movement of cells and regulate gene transcription. [32] Statins inhibit the proliferation and differentiation of tumor cells by inhibiting the isoprenylation of Ras and Rho proteins, which therefore cannot be activated. Studies have shown that the bone morphogenetic protein (BMP) pathway also has a certain relationship with the incidence of tumors; statins can activate BMP signaling and BMP gene expression to induce cell apoptosis. [33] Furthermore, statins inhibit activation of the proteasome pathway, limiting the degradation of the cyclin-dependent kinase inhibitors p21 and p27, so that these molecules retain their growth-inhibitory role. [34] To our knowledge, several meta-analyses of observational studies and randomized controlled trials have examined the association between statin use and risk of liver cancer.
[16][17][18] However, no study has quantitatively assessed the dose-response relationship between statin use and liver cancer. This is the first study to quantify the potential dose-response association between statin use and the risk of liver cancer in a large population of both men and women. The primary finding of our meta-analysis is that statin use is significantly associated with a reduced risk of liver cancer; an increase of 50 cumulative defined daily doses per year was associated with a 14% decrease in the risk of liver cancer. Subgroup analyses also confirmed the stability of the primary outcome. It was previously hypothesized that only the highest category of statin use may have a meaningful chemoprotective effect against liver cancer, whereas our analysis indicates that each increase of 50 cumulative defined daily doses per year is associated with a 14% decrease in risk. Although we performed this meta-analysis very carefully, some limitations must be considered. First, results stratified by sex were not available, so the impact of sex on the association between statin use and liver cancer could not be explored. Second, we only selected literature written in English, which may have introduced a language or cultural bias; other languages should be included in future studies. Third, there might be insufficient statistical power to detect the association in some subgroups. In conclusion, our meta-analysis suggests that statin use is independently associated with a reduction in liver cancer risk. However, studies with larger sample sizes and with populations of different ethnicities and sexes are warranted to validate this association.
2018-04-03T00:05:39.989Z
2017-07-01T00:00:00.000
{ "year": 2017, "sha1": "245489b94d8b6ce90105d8f7faef9b039ab6b7b3", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1097/md.0000000000007435", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "245489b94d8b6ce90105d8f7faef9b039ab6b7b3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252391829
pes2o/s2orc
v3-fos-license
First Flight-Testing of LoRa Modulation in Satellite Radio Communications in Low-Earth Orbit At present, the use of LoRa modulation in satellite radio communications and the construction of a CubeSat constellation for the satellite Internet of Things based on LoRa technology has already begun. However, the limits of applicability of LoRa modulation in low-Earth orbits have not yet been established. This paper presents the results of the first flight tests of LoRa modulation for robustness against the Doppler effect in the satellite-to-Earth radio channel, carried out using a NORBY CubeSat operating at 560 km. Flight tests confirmed the very high immunity of LoRa modulation to the Doppler effect for modes with spreading factor SF ≤ 11 and spread spectrum modulation bandwidth BW > 31.25 kHz. LoRa modulation in these modes can be used in satellite communication without any limitations caused by the Doppler effect. For BW = 31.25 kHz, the LoRa radio channel is affected by the static Doppler effect. Communication with the satellite is possible in this case only at high elevation angles. For SF = 12, the dynamic Doppler effect becomes significant, and communication is possible only at low satellite elevation angles, which leads to the formation of a “hole” in the center of the coverage area directly below the satellite. In both cases, the duration of the communication session is significantly reduced because of the Doppler effect. In the case of SF = 11 and 12 at BW = 31.25 kHz, both static and dynamic Doppler effect catastrophically affect the LoRa radio channel, so that communication with the satellite becomes impossible. modulation directly in the satellite-to-Earth radio channel 97 have not yet been carried out. 98 In this paper, we present the results of the first flight tests 99 of LoRa modulation for robustness against the Doppler effect 100 in a satellite-to-Earth radio channel. Onboard experiments 101 were carried out using the NORBY CubeSat operating in 102 a low-Earth orbit [23]. The main purpose of the onboard 103 experiments was to verify the results of our laboratory studies 104 of LoRa modulation [13] and to determine the limits of appli-105 cability of LoRa modulation in satellite radio communica-106 tions in low Earth orbits. Particular attention in the conducted 107 experiments was given to LoRa modulation modes with 108 BW < 125 kHz, which have not been tested in the laboratory. 109 An important goal of the experiments was to detect, under 110 real conditions, the effect of radio communication disruption 111 predicted in [13] owing to the dynamic Doppler effect when 112 the satellite passes directly over the ground station. 113 The remainder of this paper is organized as follows. 114 Section 2 describes the methodology of on-orbit experiments. 115 The results of the experiments are presented in Section 3. 116 A summary of the results and conclusions are provided in 117 Section 4. 119 A. NORBY CubeSat 120 Nanosatellite NORBY is a 6U CubeSat designed for flight 121 tests of a new CubeSat-compatible platform developed by 122 Novosibirsk State University [23]. NORBY also carries a 123 payload for the registration of gamma rays and charged parti-124 cles as well as for testing SpaceFibre/SpaceWire technology. 125 The LoRa transmitter, which is part of the onboard radio 126 system, is essentially a payload for on-orbit studies of LoRa 127 modulation in satellite radio communication. 
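As a rough orientation for the experiments described below, the orbital velocity and the maximum duration of a pass over the ground station can be estimated with a simple two-body model. The following sketch is an illustrative estimate (circular 560 km orbit, non-rotating spherical Earth, 0° minimum elevation), not a computation from the paper; it reproduces the order of magnitude of the pass durations quoted later (roughly ±370 s around closest approach for a zenith pass).

```python
import numpy as np

MU = 398600.4418          # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6371.0          # mean Earth radius, km
h = 560.0                 # assumed NORBY altitude, km (near-circular orbit)

r = R_EARTH + h
v = np.sqrt(MU / r)                      # circular orbital velocity, km/s
period = 2 * np.pi * r / v               # orbital period, s

# Earth-central angle from the ground station to the horizon-crossing point.
# For a pass directly through the zenith, the satellite is above the horizon
# over an arc of 2*lam radians of its orbit.
lam = np.arccos(R_EARTH / r)
pass_time = (2 * lam / (2 * np.pi)) * period

print(f"orbital velocity  : {v:.2f} km/s")
print(f"orbital period    : {period/60:.1f} min")
print(f"max pass duration : {pass_time:.0f} s "
      f"(about +/-{pass_time/2:.0f} s around closest approach)")
```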
128 The NORBY CubeSat was successfully launched 129 on 28 September 2020 by a Soyuz-2.1b carrier rocket from 130 the Plesetsk cosmodrome into a near-polar orbit with an 131 inclination of 97.7 • , apogee of approximately 579 km, and 132 perigee of 545 km. 133 VOLUME 10, 2022 For more than a year, telemetry and data from NORBY 154 payloads have been successfully delivered to the ground 155 station via the LoRa radio channel, and control commands 156 have been transmitted from the ground station to NORBY. 157 The LoRa radio channel is also used to upload software to 158 NORBY for in-orbit software updates, which are required 159 when debugging on-board subsystems. 160 In accordance with the adopted concept of complete hard-161 ware redundancy of subsystems, NORBY has two identical 162 onboard radio systems. Only one of them can operate at any 163 given time. If the active BRC fails, it is switched off, and the 164 second BRC is switched on. Active BRC can also be selected 165 by commands from the ground station. 166 The NORBY on-board radio complex operates in the UHF 167 band at a frequency of 436.7 MHz. It should be noted that 168 NORBY CubeSat was not created to demonstrate any tech-169 nologies for satellite IoT, and its on-board radio complex 170 was originally intended to transmit telemetry and data from 171 payloads to the ground control complex. Therefore, the fre-172 quency range most frequently used on CubeSats was chosen 173 for BRC. Already in the process of implementing the project, 174 the idea arose to test LoRa modulation for resistance to the 175 Doppler effect, including with LoRa modulation parameters 176 that are interesting for IoT. And we used a tool that was 177 already ready for this -BRC, although it does not work in 178 the traditional IoT frequency range. We also note that the 179 NORBY-2 project currently being implemented is initially 180 focused on demonstrating and testing the capabilities of LoRa 181 modulation in satellite IoT in the 868 MHz and 2.4 GHz 182 frequency bands. 183 The output power of the BRC transmitter is adjustable in 184 the range of 0.1 to 4 W. At present, the default LoRa modu-185 lation parameters for NORBY radio sessions with the ground 186 station are SF = 10 and BW = 250 kHz, with an emitted BRC 187 transmitter power of 0.2 W. When NORBY is out of range 188 with the ground station, it transmits a beacon signal once per 189 minute containing basic telemetry data regarding the state of 190 NORBY. The beacon is transmitted alternately in the LoRa 191 and GFSK modes at a transmitter output power of 0.2 W. 192 Thus, the LoRa beacon is transmitted only once every two 193 minutes. 194 The antenna of each BRC is a pair of quarter-wave vibra-195 tors located at one of the ends of the satellite body (Fig. 1). 196 The antennas located at different ends of the satellite body are 197 completely identical. During the experiment, only one BRC 198 with its own antenna worked. The second was in reserve. The 199 radiation pattern of the NORBY antenna calculated using the 200 RF Module of COMSOL Multiphysics Simulation Software 201 is shown in Fig. 2. Here, the Z-axis is directed perpendicular 202 to the end face of the satellite body, while the X-and Y-axes 203 are directed perpendicular to the large and small sides of the 204 body, respectively. The radiation pattern was calculated for 205 the antenna with the satellite body including the deployed 206 solar panels. 
Figure 2 shows that the calculated nonuni-207 formity of the radiation pattern of the NORBY antenna is 208 approximately 7 dB. 209 At the time of LoRa modulation testing, the NORBY atti-210 tude determination and control system was in the debugging 211 stage. All orientation sensors were tested and operational. 212 The magnetic control system was able to slow down the 213 rotation of the satellite to approximately 0. the satellite. The ground station's steerable antenna system 220 consists of two crossed Yagi-Uda antennas (Fig. 3). The 225 The antenna is driven by a BIG-RAS/HR azimuth and 226 elevation rotator [26]. The Gpredict program [27] is used to 227 point the antenna at the satellite, which allows the real-time 228 tracking of satellites and prediction of the orbit. culates antenna pointing angles based on a two-line element 230 set (TLE) from the SATCAT catalogue [28]. The NORBY 231 satellite catalog number is 46494. 232 The rotator provides continuous pointing of the antenna 233 to the LEO satellite over the entire range of visibility of 234 the satellite from the ground station. However, when the 235 satellite passes a region close to the zenith, the antenna can-236 not accurately track the satellite due to the known keyhole 237 problem [29], which is inherent in antennas with an azimuth 238 and elevation type tracking mount. To clarify the problem, 239 consider the behavior of the antenna when tracking a satellite 240 flying near the zenith in a circular polar orbit (see Fig. 4). 241 In Figure 4, H is the orbit height, v is the satellite velocity, α 242 is the satellite azimuth, and θ is the satellite elevation angle. 243 If we do not take into account the sphericity of the Earth, then 244 by simple mathematical transformations it is easy to obtain 245 an expression for the azimuth angular velocity of the satellite 246 relative to the antenna ω a : where θ max is the maximum elevation angle of the satellite 249 at the point of the trajectory closest to the antenna at α = 250 90 • . It can be seen from (1) that as θ max approaches 90 • , ω a 251 increases without limit. That is, the antenna, when accurately 252 tracking the satellite near the zenith, must rotate very quickly 253 around the vertical axis. When the satellite approaches the 254 zenith along a trajectory with θ max = 90 • , the antenna in our 255 case is constantly oriented to the south in the horizontal plane, 256 and at the moment the satellite passes the zenith, it should 257 instantly reorient to the north. Naturally, no real antenna can 258 do this, since it takes some time. 259 The bottom panel of Fig. 4 shows ω a in the entire satellite 260 visibility zone calculated using a more accurate numerical 261 model that takes into account the sphericity of the Earth. The 262 calculations were performed for several NORBY trajectories 263 with different θ max . Here and below, it is assumed that the 264 satellite passes the trajectory point closest to the antenna at 265 time t = 0. The red dotted line in Fig. 4 shows the maxi-266 mum angular velocity of the antenna rotation provided by the 267 azimuth rotator. It can be seen from Fig. 4 that near the zenith, 268 when tracking a satellite on trajectories with θ max greater 269 than about 70 • , the azimuth angular velocity of the antenna 270 required for accurate tracking exceeds the angular velocity 271 provided by the antenna rotator. As a result, the antenna lags 272 behind the direction to the satellite. 
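Expression (1) for the azimuth angular velocity did not survive text extraction. The sketch below therefore uses the standard flat-Earth reconstruction, ω_a = (v/H) tan(θ_max) sin²(α), which shows the limiting behaviour described above (ω_a grows without bound as θ_max approaches 90°). The satellite speed, altitude, and in particular the maximum rotator rate are assumed placeholder values, not figures taken from the BIG-RAS/HR datasheet; with the placeholder limit used here, the flat-Earth estimate puts the onset of tracking lag at θ_max of roughly 75-80°, of the same order as the ≈70° obtained from the more accurate numerical model.

```python
import numpy as np

v = 7.6e3              # assumed satellite speed, m/s
H = 560e3              # assumed orbit altitude, m
omega_rotator = 3.0    # assumed maximum azimuth rate of the rotator, deg/s (placeholder)

def azimuth_rate(theta_max_deg, alpha_deg):
    """Flat-Earth azimuth rate of the satellite seen from the station, deg/s.

    theta_max_deg : maximum elevation of the pass (reached at alpha = 90 deg)
    alpha_deg     : azimuth measured along the pass (90 deg at closest approach)
    """
    theta_max = np.radians(theta_max_deg)
    alpha = np.radians(alpha_deg)
    omega = (v / H) * np.tan(theta_max) * np.sin(alpha) ** 2   # rad/s
    return np.degrees(omega)

for theta_max in (60, 70, 75, 80, 85, 89):
    peak = azimuth_rate(theta_max, 90.0)       # rate at closest approach
    flag = "exceeds rotator limit" if peak > omega_rotator else "ok"
    print(f"theta_max = {theta_max:2d} deg -> peak azimuth rate {peak:6.1f} deg/s ({flag})")
```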
The lag of our antenna 273 can be up to about 37 • causing the satellite leaves the main 274 beam of the antenna. The result is attenuation of the received 275 radio signal, which becomes noticeable at satellite elevation 276 angles greater than 80 • and can reach about 20 dB at angles 277 greater than about 85 • . 278 The considered effect manifests itself at large satellite 279 elevation angles near the zenith. However, it is precisely when 280 the satellite moves in this area that the maximum values of the 281 Doppler rate are reached [13]. That is, possible failures of the 282 LoRa radio channel due to the dynamic Doppler effect are 283 expected primarily in this section of the satellite trajectory. 284 VOLUME 10, 2022 312 It should be noted that because NORBY transmits the 313 LoRa beacon at two-minute intervals, packet transmission 314 begins with an unpredictable delay of up to two minutes 315 after the satellite enters the radio coverage area of the ground 316 station. In cases in which the ground station operator did not 317 manage to send a command in time, this delay is even greater. 318 The main purpose of the experiments was to identify the 319 influence of the Doppler effect on the stability of the satel-320 lite LoRa radio channels. Therefore, during the experiments, 321 LoRa packets were transmitted at the maximum possible 322 power of the BRC transmitter of 4 W to avoid possible 323 radio communication disruptions owing to a weak signal or 324 external noise. 325 The SX1278 transceiver includes a received signal strength 326 indicator (RSSI), signal-to-noise ratio (SNR) meter, and an 327 indicator of the frequency difference between the carrier 328 frequency of the signal at the receiver input and the carrier 329 frequency of the receiver (frequency error, FER) [14]. All of 330 these parameters are recorded when a LoRa packet transmit-331 ted from NORBY is received at a ground station. 341 The data obtained allows us to determine the Doppler 342 frequency shift in the satellite radio channel for each data 343 packet. The Doppler shift can be directly calculated from 344 satellite trajectory data. During the experiment, trajectory 345 data were received from the GLONASS receiver onboard 346 NORBY. In addition, they can be determined from TLE data 347 from the SATCAT catalog [28]. If the transmitter emits a radio 348 signal with frequency F 0 , owing to the Doppler effect, the 349 receiver receives a signal with frequency [13] where v is the satellite velocity, n is the light speed, β is the 352 angle between the satellite velocity vector and the direction 353 to the ground station. Then, the Doppler frequency shift F D 354 can be expressed as shift for both the received and lost packets. 358 An additional contribution to the total frequency offset F 359 between the input signal and carrier frequency of the LoRa 360 receiver also comes from the frequency difference between 361 the reference oscillators of the receiver and transmitter F RT : (4) 363 We don't know of any other reasons that could contribute 364 to F. 375 According to (4), we assume that this difference is due to 376 F RT . This assumption is also confirmed by the observed 377 change in F − F D over time. This means that the observed 2.5 kHz change in the carrier 396 frequency of the transmitter is caused by heating by approx-397 imately 30 • C. 398 In all experiments performed, the absolute value of 399 F − F D did not exceed 2 kHz. 
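Expressions (2)-(4) were likewise lost in extraction. The following sketch uses the standard non-relativistic Doppler forms, with the shift proportional to F₀ (v/c) cos β and the Doppler rate obtained by differentiating the shift along the pass. It assumes a circular 560 km orbit passing through the zenith over the station and the 436.7 MHz carrier, and is meant only to reproduce the order of magnitude of the shifts (about ±10 kHz near the horizon, that is, much larger than the ≤2 kHz oscillator offset quoted above) and of the peak Doppler rate near the zenith.

```python
import numpy as np

MU, R_E = 398600.4418e9, 6371e3       # m^3/s^2, m
h, F0, c = 560e3, 436.7e6, 2.998e8    # altitude (m), carrier (Hz), light speed (m/s)

r = R_E + h
v = np.sqrt(MU / r)                   # orbital speed, m/s
omega = v / r                         # orbital angular rate, rad/s

# Pass directly over the station (non-rotating Earth, circular orbit).
t = np.linspace(-400.0, 400.0, 4001)
ang = omega * t                                       # central angle, zenith at t = 0
sat = np.stack([r * np.cos(ang), r * np.sin(ang)], axis=1)
gs = np.array([R_E, 0.0])
rho = np.linalg.norm(sat - gs, axis=1)                # slant range, m
rho_dot = np.gradient(rho, t)                         # range rate, m/s

F_D = -F0 * rho_dot / c                               # Doppler shift, Hz (positive while approaching)
F_D_rate = np.gradient(F_D, t)                        # Doppler rate, Hz/s

print(f"max |Doppler shift| : {np.max(np.abs(F_D))/1e3:.1f} kHz")
print(f"max |Doppler rate|  : {np.max(np.abs(F_D_rate)):.0f} Hz/s (near the zenith)")
```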
This means that the change 400 in the carrier frequency of the received signal from NORBY 401 is mainly due to the Doppler effect. The frequency difference 402 between the reference oscillators of the receiver and transmit-403 ter only makes a small, albeit noticeable, contribution to the 404 total frequency offset. 406 The results of laboratory studies on LoRa modulation have 407 shown that LoRa radio communication with a satellite in 408 low Earth orbit can be disrupted in some cases owing to the 409 dynamic Doppler effect [13]. The maximum absolute value 410 of the Doppler rate is achieved when the satellite passes 411 at the zenith over the ground station (see Fig. 7c). Since 412 possible radio communication disruptions were expected at 413 high Doppler rates, NORBY orbits were chosen for our flight 414 experiments with a maximum satellite elevation above the 415 ground station of more than about 80 • . 416 As noted above, we could not completely stop the rotation 417 of the NORBY CubeSat during the experiments, and orient 418 it in the direction at the ground station. Therefore, during the 419 experiments, NORBY rotated unpredictably, and we ensured 420 that the rotation speed did not exceed a few degrees per 421 second. 422 As an example, Fig. 7 shows the results of experiment 423 No. 1 with LoRa modulation parameters SF = 7 and 424 BW = 500 kHz, at which, according to laboratory experi-425 ments [13], no influence of the Doppler effect was expected. 426 In total, during the radio communication session in this exper-427 iment, 773 packets of size L = 143 bytes were transmit-428 ted from NORBY, of which 748 packets were successfully 429 received by the ground station and 25 packets were lost. 430 The green curves in Figures 7a, 7b, and 7c show the 431 elevation angle, Doppler shift F D , and Doppler rate F D , 432 as NORBY passes over the ground station. The elevation 433 angle and Doppler shift were derived from the TLE data. The 434 Doppler rate is determined by differentiating F D . NORBY 435 was within the radio visibility of the ground station between 436 approximately −370 s and +370 s. 437 The bold blue dots in Figures 7a, 7b, and 7c indicate the 438 elevation angle, Doppler shift, and Doppler rate obtained 439 from the GLONASS receiver data contained in the received 440 packets. Once again, we note that the elevation angle, F D , 441 and F D obtained based on two different initial data prac-442 tically coincide. For lost packets, only the TLE data are 443 available. Therefore, in this case, the elevation angle, F D , 444 and F D were determined from the TLE data and the known 445 time of sending packets from NORBY. The lost packets are 446 marked in Fig. 7 and below with bold red dots. noted that Fig. 6d shows that the actual non-uniformity of the 461 radiation pattern of the NORBY antenna noticeably exceeds 462 the calculated 7 dB. The weakening of the signal near the 25 th 463 second is due to the delay in pointing the ground antenna near 464 the direction to the zenith. It may also be superimposed by the 465 weakening associated with satellite rotation. 466 Packet loss at low elevation angles before the satellite 467 leaves the radio visibility zone (Fig. 7d) cannot be explained 468 by a weak signal, which remains above the LoRa receiver 469 sensitivity almost to the horizon. However, in Fig. 7e, in the 470 area of these losses, there are reduced SNR values in the form 471 of points that fall outside of the main data array. 
Such SNR 472 behavior was observed only in some daytime experiments, 473 during which powerful construction equipment was operating 474 in the immediate vicinity (∼30-50 m) of the ground receiving 475 antenna during the construction of a new university building. 476 We attribute these packet losses to electromagnetic interfer-477 ence generated by this technique. 478 For a single lost packet at −89 s (Fig. 7), no external 479 causes were found to explain its loss. More than five thousand 480 packets were transmitted from the NORBY CubeSat for all 481 communication sessions during which the described experi-482 ments were performed. However, only four such cases were 483 recorded, for which no explanation was found for the loss of 484 the transmitted packet. 485 As expected, the experiment showed no influence of the 486 Doppler effect on the satellite-to-ground LoRa radio channel 487 with modulation parameters SF = 7 and BW = 500 kHz. 488 It should be noted that the RSSI and SNR time variations 489 shown in Fig. 7 also contain variations due to propagation 490 loss. The distance between the satellite and the ground station 491 varies from approximately 560 to 2700 km as NORBY moves 492 in orbit from zenith to horizon. In this case, the signal is atten-493 uated by approximately 13.7 dB. The total variations of RSSI 494 in Fig. 7 are significantly larger than this value. In addition, 495 the transmitter power of 4 W during the experiments ensures 496 the signal value at the LoRa receiver input is significantly 497 higher than the receiver sensitivity, as well as the allowable 498 SNR value right up to the horizon. Therefore, the propagation 499 loss does not affect the results of experiments presented in this 500 section and below. 501 It should also be noted that we did not check during the 502 experiments for the presence of any other radio transmit-503 ters operating in the same frequency range near the ground 504 antenna. However, their presence should show up in the SNR 505 data. Neither in the described experiment No. 1, nor in all 506 the others, no signs of the impact of any third-party radio 507 transmitters were recorded. 508 Here, we specifically considered the results of the exper-509 iment with LoRa modulation parameters SF = 7 and BW = 510 500 kHz. The relatively low sensitivity of the LoRa receiver 511 and the relatively low noise immunity in this mode, which 512 was the lowest in the experiments conducted, were the worst 513 for conducting the experiment under non-ideal environmental 514 conditions. The results obtained under these conditions made 515 it possible to illustrate the operability of the equipment used 516 in the experiment and the possibility of an unambiguous 517 interpretation of the data obtained. 519 The main objective of this research is to verify under real 520 space conditions the robustness parameters of LoRa modu-521 lation to the Doppler effect in the satellite-to-ground radio 522 channel given in the SX1278 transceiver specification [14] 523 and obtained in laboratory experiments [13]. the ground station only if they are received without errors. 573 Otherwise, they were considered lost. 574 It should also be noted that the LowDataRateOpti-575 mize LoRa modulation parameter was activated during the 576 experiments. 582 Experiments for SF = 7 were performed for all selected 583 values of BW. 
This is a reference series of experiments 584 in which it is expected to register the influence of the 585 static Doppler effect in accordance with the LoRa SX1278 586 transceiver specification [14]. However, in these experiments, 587 it is not expected to detect any influence on the LoRa radio 588 channel of the dynamic Doppler effect [13]. The main objec-589 tives of these experiments are to verify the specifications 590 of the LoRa SX1278 transceiver [14] regarding immunity 591 to Doppler shift and to confirm the robustness of LoRa 592 modulation against the dynamic Doppler effect in NORBY 593 orbit in accordance with laboratory studies [13]. Special 594 experiments with BW = 250 kHz were not carried out, 595 since in numerous daily radio sessions of NORBY with the 596 ground station at SF = 10 and BW = 250 kHz, no influ-597 ence of Doppler effects on radio communication was ever 598 recorded. 599 VOLUME 10, 2022 The main objective of experiments with SF = 10, 11 and 600 12 is to detect the influence of the dynamic Doppler effect 601 on LoRa modulation. Laboratory experiments [13] found that 602 LoRa modulation becomes less resistant to Doppler rate as 603 SF increases and BW decreases. Theoretical analysis [18], 604 [19] also shows that as SF increases and BW decreases, Table 2). The 619 immunity of LoRa modulation to the Doppler rate at SF = 620 7 and BW = 125 kHz, according to [13], is also sufficient 621 with a large margin for using LoRa modulation in satellite 622 radio communications. No. 1, which was performed last, only one packet loss due 655 to the unsuccessful orientation of the CubeSat antenna was 656 recorded (experiment No. 17). The relatively large number 657 of packet losses due to the unsuccessful orientation of the 658 CubeSat antenna in experiment No. 1 is also due to the low 659 sensitivity of the LoRa receiver at SF = 7 and BW = 500 kHz, 660 which is significantly less than in other experiments (see 661 Table 2). In experiment No. 4 (Fig. 8), the command to switch to 669 continuous transmission of a sequence of packets was sent 670 to the CubeSat from the ground station only after the arrival 671 of the second NORBY beacon, that is, with an additional 672 two-minute delay. Therefore, the transmission of data packets 673 from NORBY in experiment No. 4 began only at −133 s, 674 approximately four minutes after the satellite entered the 675 radio visibility zone of the ground station. In this experiment, 676 all data packets transmitted from NORBY were successfully 677 received by the ground station. The satellite-to-ground radio 678 channel worked steadily while NORBY was in the radio 679 visibility zone of the ground station, that is, above the hori-680 zon. Communication was interrupted only when the satellite's 681 elevation angle became less than about 1.7 • . 682 In experiment No. 6 ( Fig. 9), the transmission of packets 683 from NORBY started at t = −351 s, but only packet No. 684 210 was received first at t = −79 s when the Doppler shift 685 F D decreased to 7.7 kHz. Communication with the satel-686 lite was again interrupted at +76 s, when the Doppler shift 687 again increased in absolute value to 7.6 kHz. Subsequently, 688 the ground station did not receive any data packet. Fig. 9d 689 and Fig. 9e show that in the time interval between −79 s 690 and +76 s, both the signal level and signal-to-noise ratio at 691 the receiver input were quite large, significantly exceeding 692 the LoRa receiver sensitivity and LoRa demodulator SNR, 693 respectively. 
We do not know the values of RSSI and SNR 694 at times when the data packets from the satellite were not 695 received by the ground station. However, the behavior of 696 RSSI and SNR in other experiments (Fig. 7 and Fig. 8) 697 indicates that their abrupt change, leading to the termina-698 tion of communication with the satellite for a long period, 699 is unlikely. Therefore, we attribute the packet loss observed 700 in experiment No. 6 to the Doppler effect. 701 As noted above, the total frequency offset F between 702 the carrier frequencies of the input signal and LoRa receiver 703 differs slightly from the Doppler shift because of the dif-704 ference in the frequencies of the reference generators of the 705 receiver and transmitter (4). Therefore, it is possible to more 706 accurately determine the maximum allowable value of F max 707 above which the LoRa radio communication is broken using 708 the FER data of the LoRa receiver of the ground station. In our 709 case, we get F max = 7.8 kHz and 7.7 kHz for t = −79 s 710 Table 2). and stopped again at F D = −6.4 kHz. The conclusion is 725 similar to the previous one: the reason for the destruction of 726 the LoRa satellite-to-ground radio channel in experiment No. 727 7 is the Doppler shift. According to the FER data, the value 728 F max = 7.7 kHz was obtained both during a decrease and 729 increase in the absolute value of the Doppler shift. 730 We did not conduct experiments with BW = 250 kHz, 731 since the absence of the influence of the Doppler effect on 732 the LoRa radio channel in experiments with BW = 500, 733 125 and 62.5 kHz gives grounds to assume that it is absent for 734 BW = 250 kHz as well. It can also be noted that for almost 735 two years of NORBY operation, we did not find any influence 736 of the Doppler effect in regular radio sessions in the mode 737 BW = 250 kHz and SF = 10. Four experiments were performed with SF = 10 (Table 2). As expected (see Table 2 Table 2). The results of experiments the ground station in experiment No. 12 for an unidentified 793 reason (see Fig. 12). In these experiments, no influence of the 794 Doppler effect on the satellite-to-ground LoRa radio channel 795 was observed. of the LoRa satellite-to-ground radio channel, caused by a 823 large absolute value of F D , that is, the dynamic Doppler 824 effect. Five experiments were performed with SF = 12 (Table 2). 827 In experiment No. 20, at BW = 31.25 kHz and L = 55 bytes, 828 the ground station did not receive a single packet out of about 829 70 transmitted from the satellite. This result is similar to that 830 obtained in the experiment described above with SF = 11 and 831 BW = 31.25 kHz. The lack of LoRa radio communication 832 with the satellite in this case appears to be due to both static 833 and dynamic Doppler effects. However, the complete absence 834 of any data in the experiment did not allow us to draw any 835 VOLUME 10, 2022 receiver input (see Fig. 14). No impact of the Doppler effect 850 on the LoRa satellite-to-ground radio channel was observed 851 in these experiments. 852 The results of experiments No. 18 and No. 19 with SF = 853 12 and BW = 62.5 kHz are shown in Fig. 15 and Fig. 16 for 854 L = 55 and 143 bytes, respectively. It can be seen that with 855 these LoRa modulation parameters, there was no radio com-856 munication with NORBY at high satellite elevation angles, 857 that is, in the region of maximum absolute values of the 858 Doppler rate. In experiment No. 
18, data packets transmitted 859 from NORBY ceased to be received by the ground station 860 at t = −89 s, when the Doppler rate F D increased in 861 absolute value to 38 Hz/s (see Fig. 15c). The reception of 862 data packets resumed at t = 93 s when the absolute value 863 This is a rapid change in the Doppler frequency shift, that 878 is, the dynamic Doppler effect. Thus, in these experiments, 879 the effect of the disruption of LoRa radio communication 880 during the passage of a satellite directly over a ground station, 881 predicted in [13] based on laboratory studies, was observed 882 for the first time. 884 We have presented here the results of the first flight tests of 885 LoRa modulation in a satellite-to-ground radio channel. The 886 tests were carried out using the NORBY CubeSat, which is 887 located in a low-Earth orbit with an altitude of approximately 888 560 km. The main purpose of the flight tests was to verify in 889 real space conditions the robustness of the LoRa modulation 890 against the Doppler effect, determined in laboratory studies 891 [13]. It was also supposed to check the maximum allowable 892 frequency offset between the transmitter and receiver given in Table 3. to weak signal or low signal-to-noise ratio at the input of the 910 LoRa receiver, six packets were lost for an unknown reason, 911 and 1192 packets were not received by the ground station 912 owing to the Doppler effect. The destructive impact of the 913 Doppler effect on the satellite-to-ground LoRa radio channel 914 was recorded in nine communication sessions (shaded rows 915 in Table 3). 916 The static Doppler effect was clearly manifested in four 917 experiments with SF = 7 and SF = 10 at spread spec-918 trum modulation bandwidth BW = 31.25 kHz (Nos. 6, 7, 919 10, and 11 in Table 3). The maximum frequency offset 920 F max between the carrier frequencies of the LoRa receiver 921 and the received signal, above which the LoRa radio link 922 was disrupted, was determined in each experiment. In total, 923 in four experiments, we obtained eight values of F max , four 924 of which were obtained when the satellite approached the 925 ground station and the rest when moving away. The averages 926 of the two F max values obtained from each experiment are 927 listed in was obtained also for SF = 12 but at BW = 125 kHz. ies of LoRa modulation [13]. Table 4 shows that all [14]. However, the datasheet [14] does 993 not contain any information regarding the criteria for LoRa 994 modulation stability when changing the frequency offset. The 995 results of laboratory experiments [13] concerning F max for 996 SF = 12 and BW = 125 kHz proved impossible to verify in 997 the NORBY orbit. There are no other experimental data on 998 the stability of the LoRa modulation to F ; therefore, there 999 is nothing to compare the obtained values of F max with. 1000 Table 5 shows in a visual form the Doppler effect restric-1001 tions on the use of LoRa modulation in radio communications 1002 with LEO satellites obtained in the NORBY experiments. For 1003 SF ≤ 11 and BW ≥ 62.5 kHz, there are no restrictions on the 1004 use of LoRa modulation in satellite radio communications. 1005 For BW = 31.25 kHz, the LoRa radio channel is affected 1006 by the static Doppler effect. Radio communication with the 1007 satellite is possible in this case only at high elevation angles 1008 of the satellite when flying directly over the ground station. 
1009 For SF = 12, on the contrary, the dynamic Doppler effect 1010 becomes significant and radio communication is possible 1011 only at small satellite elevation angles at large distances from 1012 the ground station. In both latter cases, the duration of the 1013 communication session is significantly reduced due to the 1014 Doppler effect. 1015 In the case of SF = 11 and 12 at BW = 31.25 kHz, both 1016 static and dynamic Doppler effects catastrophically affected 1017 the LoRa radio channel. In this case, LoRa radio communi-1018 cation with a satellite in a low-Earth orbit is not possible. 1019 The restrictions imposed by the Doppler effect on the use 1020 of LoRa modulation relate primarily to LoRa modes that 1021 provide maximum receiver sensitivity and, consequently, the 1022 maximum radio communication range with minimum trans-1023 mitter power. Therefore, they are extremely important for the 1024 satellite Internet of Things. Our results show that the most 1025 super-sensitive LoRa modulation modes with SF = 11 and 1026 12 at BW ≤ 31.25 kHz are unsuitable for use in LEO satellite 1027 IoT networks due to the Doppler effect. This is the case unless 1028 some system is used to correct the carrier frequency of the 1029 LoRa receiver or transmitter on the base of the predicted 1030 Doppler shift and Doppler rate. 1031 The use of LoRa modulation modes with intermediate 1032 sensitivity, at which the influence of the Doppler effect 1033 begins, reduces the coverage area of the radio communi-1034 cation by one satellite. The static Doppler effect reduces 1035 the coverage area near the horizon. The dynamic Doppler 1036 effect results in a ''hole'' in the center of the coverage area 1037 directly below the satellite. The use of these LoRa modes 1038 in IoT satellite networks complicates the task of creating a 1039 globally contiguous coverage area using the LEO satellite 1040 constellation. 1041 LoRa modulation modes with SF ≤ 11 and BW > 1042 31.25 kHz can be used in satellite IoT without any limitations 1043 caused by the Doppler effect. The Doppler limits on the use of 1044 LoRa modulation in satellite radio communications obtained 1045 in the NORBY experiments are applicable to satellites in any 1046 orbit. However, it should be borne in mind that the orbital 1047 velocity of a satellite decreases with increasing orbit altitude. 1048 VOLUME 10, 2022 As a result, Doppler-induced restrictions become less critical when the IoT satellite constellation is placed in a higher orbit. In general, the satellite experiments conducted made it 1051 possible to determine the limits of applicability of LoRa 1052 modulation in radio communications with LEO satellites. munication due to the dynamic Doppler effect was detected 1055 when the satellite passed directly over the ground station. In conclusion, we would like to note that we are planning
2022-09-21T15:20:19.358Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "486ddd41b8150ed4b566e3e28619a569bcd0a0d9", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09895236.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "adf7e26125aa7cfc691e814840138e73c9c37b8c", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
119182684
pes2o/s2orc
v3-fos-license
Supersymmetry with long-lived staus at the LHC We consider SUSY extensions of the standard model where the gravitino is the dark-matter particle and the stau is long lived. If there is a significant mass gap with squarks and gluinos, the staus produced at hadron colliders tend to be fast (beta>0.8), and the searches based on their delay in the time of flight or their anomalous ionization become less effective. Such staus would be identified as regular muons with the same linear momentum and a slightly reduced energy. Compared to the usual SUSY models where a neutralino is the LSP, this scenario implies (i) more leptons (the two staus at the end of the decay chains), (ii) a strong e-mu asymmetry, and (iii) less missing E_T (just from neutrinos, as the lightest neutralino decays into stau). We study the bounds on this SUSY from current LHC analyses (same-sign dileptons and multilepton events) and discuss the best strategy for its observation. Introduction The determination of the mass and the couplings of the Higgs boson at the LHC will not complete our understanding of the mechanism responsible for the breaking of the electroweak (EW) symmetry. It will be also essential to establish whether or not there is a dynamical principle explaining its nature. Supersymmetry (SUSY) is a possibility that has attracted a lot of work during the past decades. Minimal SUSY extensions with a neutralino as the lightest SUSY particle (LSP) provide a good candidate for dark matter, imply a consistent picture for gauge unification, and can in principle accommodate a 125 GeV light Higgs [1]. It is apparent that the non-observation of flavor-changing neutral currents or electron and neutron electric dipole moments requires an effort in these frameworks. However, SUSY models have proven flexible enough to adapt, and they have reached the current phase of direct search at the LHC in a (reasonable) good shape. SUSY searches at hadron colliders have focused on a few generic signals with relatively small backgrounds. The classic one [2] is jets with no hard leptons but large / E T from squarksq going into a quark q plus the lightest neutralinoχ 0 1 . It was then emphasized [3] that chain decays of colored SUSY particles through charginos and heavier neutralinos giving two isolated leptons usually have a much larger branching ratio. In particular, gluino pairs provide same-sign (SS) dileptons together with jets and / E T , a clean signal of high discovery potential [4]. Initial searches at the 7 TeV LHC do not show any hints of such signals and set bounds on squark and gluino masses that rise up to 800 GeV and higher, although a complete exclusion of this mass region in the neutralino LSP model would require a careful consideration of some cases with an anomalous signal [5,6,7]. There are, however, other SUSY scenarios that provide a different generic signal, and one may wonder how constrained they are by current LHC analyses. In particular, a possibility that is well motivated from a model-building point of view is the case with a gravitino LSP. This is natural in all models with a low scale of SUSY breaking, like the ones mediated by gauge interactions [8,9]. Even in gravity-mediated models, the LSP gravitino may be an acceptable dark matter candidate with [10] or without [11] R-parity violation. In all these cases the nextto-LSP could be a long-lived charged particle (e.g., theτ ) that, if produced at the LHC, would decay after crossing the detectors. The search strategy in these scenarios is then different [12]. 
A charged particle of mass mτ and three-momentum p = βγmτ will curve under the magnetic field in the inner detector like a muon of the same momentum. There are, however, two observables that could distinguish such a heavy muon: an anomalous ionization in the silicon tracking detector and a delay in the time Figure 1: Distributions of dE/dx (left) and speed β (right) observed at D0 (from [14]). The scale of dE/dx is adjusted so that the distribution from Z → µµ peaks at 1. of flight from the vertex to the muon chambers. As a stau or a muon propagate in matter, low q 2 processes like ionization are insensitive to the mass, and one expects that the effects on the medium will only depend on the velocity (or βγ) of the particle. The Landau most probable energy deposition through ionization is large at low values of β (it goes like (βγ) −2 ), has a minimum at βγ ≈ 4 and reaches the so called Fermi plateau at βγ > 100 (see Fig. 30.9 at the PDG [13]). In particular, the ionization along the track of a 100 GeV stau of βγ = 2 (i.e., β = 0.89 and p = 200 GeV) would be very similar to that of a muon of the same momentum, and 25% higher at βγ = 1 (or β = 0.7). In Fig. 1-left we reproduce a plot from the Tevatron D0 experiment [14] of dE/dx relative to the average value for muons passing certain p T , rapidity and isolation cuts. For an actual stau, given the width of the expected distribution (around 30% of its average value, see Fig. 30.8 at the PDG [13]) and the uncertainty in the response of the detector, one could expect a clear difference with muons only for β ≤ 0.7. The direct measure of β has just a slightly better resolution. At D0 (see Fig. 1-right) 27% of the muons are measured with β > 1.1, and 3.5% of the subluminal ones have β < 0.8. A 37 pb −1 ATLAS analysis [15] at the 7 TeV LHC shows a more accurate description of the muon velocity, setting the limit mτ > 110 GeV from direct stau production. A recent study [16] by CMS using 5.0 fb −1 of data could imply higher bounds. It is difficult, however, to use their results to constrain a particular model, since (i) they do not provide the complete velocity distribution observed for muons, including the region with β > 1 (necessary to estimate the effect of the reconstruction on the stau velocity), and (ii) they could be overestimating the anomalous ionization of heavy particles. In particular, their method seems to imply a 10% excess for a stau of β = 0.9, when such particle is below the Fermi plateau and should ionize like a muon of the same three-momentum (see [17] for a discrimination based on radiative energy deposition). In addition, in models where the stau is significantly lighter than squarks and gluinos its velocity tends to be high (see [18] for an analysis of the kinematics in these chain decays), and one is left with relatively few events with a β small enough to give a clear deviation in the two observables. Let us take, just for illustration, a 150 GeVτ R together with 750 GeV Higgsinos, 800 GeV light-flavor squarks and 1 TeV gluinos, with the slepton doublets and the rest of squarks and gauginos in the 800-1000 GeV mass region. We will take the other twol R sleptons with a mass similar to mτ R (see next section). The cross section for direct (Drell-Yan) production at the 7 TeV LHC is around 33 fb, which is reduced to 31 fb once we require at least one stau with p T > 40 GeV and |η| < 2.5. 
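The kinematic relations quoted above are easy to verify numerically. The short sketch below, with illustrative values only, reproduces the correspondence between βγ, β, and p = βγ m for a heavy charged particle such as a 100 GeV stau.

```python
import numpy as np

def beta_from_betagamma(bg):
    """Velocity beta for a given beta*gamma."""
    return bg / np.sqrt(1.0 + bg**2)

def momentum(mass_gev, bg):
    """Momentum p = beta*gamma*m in GeV for a particle of the given mass."""
    return bg * mass_gev

m_stau = 100.0  # GeV, value used in the text for illustration
for bg in (1.0, 2.0, 4.0):
    b = beta_from_betagamma(bg)
    print(f"beta*gamma = {bg:.1f} -> beta = {b:.3f}, p = {momentum(m_stau, bg):.0f} GeV")

# Inverse direction: the beta*gamma that corresponds to a given velocity cut.
for b in (0.7, 0.8):
    bg = b / np.sqrt(1.0 - b**2)
    print(f"beta = {b:.1f} -> beta*gamma = {bg:.2f}")
```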
In contrast, indirect production through squarks and gluinos gives σ = 429 fb, or σ = 428 fb once we impose the same p T and rapidity requirements. This accounts for 14 times more stau pairs from indirect than from direct production. We plot their βγ distribution in Fig. 2. While 28% of the staus from direct production have β < 0.8, only 5% of the ones from chain decays are in this β region. If we restrict to the β < 0.7 (where a more significant anomaly can be expected) these percentages are reduced to 14% and 1%, respectively. Therefore, in these models most of the events will contain two staus that look like regular muons of momentum p µ = pτ and energy E µ = E 2 τ − m 2 τ . Although specific analyses have been proposed [19], one may also ask how the usual SUSY searches constrain these scenarios assuming that the staus are identified as muons, and how to modify the cuts in order to optimize the search. In this paper we study the bounds from recent studies on SS-dilepton [20] and inclusive-multilepton [21] production at the LHC. 2 Same-sign leptons, jets and / E T SS dileptons can be an important signature in neutralino LSP models when gluinos are at accessible energies. If the collision producesgg pairs that decay into charginos and neutralinos other than χ 0 1 , SS leptons will be very frequent, as each decay chain can give a lepton or an antilepton with equal probability. In addition, gluinos must decay into (real or virtual) squarks producing jets, and there will also be / E T from the undetected neutralino LSP. The same type of signal (with a smaller number of jets) may also be obtained fromũũ pairs produced through gluino exchange in the t-channel. Gluinos in neutralino LSP models. In a recent (2.05 fb −1 at 7 TeV) study [20] ATLAS selects events in which the two higher-p T leptons (ℓ = e, µ) have the same charge, with at least 4 jets of p T > 50 GeV, and with / E T > 150 GeV (plus certain isolation and rapidity cuts). They estimate a background of about 1 event from ttX, fake leptons (b or c-hadron decays), charge misidentification and dibosons, while they observe no events in the data. Then this result is used to constrain the signal from 650 GeV gluinos that decay into ttχ 0 1 through a virtual stop of 1.2 TeV. They assume a 150 GeV neutralino and search for the channel where two of the four final top quarks give SS leptons. They predict around 7 events satisfying all the requirements, which allows them to exclude the model. We have reproduced their study in order to understand the differences with the long-lived stau (LLST) scenario. In our analysis we have used MadGraph 5 [22] to obtain thegg and thegg + jet cross sections, Prospino 2.1 [23] to estimate next-to-leading order corrections, PYTHIA 6.4 [24] for hadronization/showering effects and PGS 4 [25] (tuned to ATLAS in this study and to CMS in the multilepton analysis) for detector simulation. We find that at the given luminosity a 650 GeV gluino mass implies the production of 1047 gg pairs. A factor of ǫ = 0.55 must be included to take into account the detector reconstruction, identification and trigger efficiency, leaving the number of observable pairs in Lσǫ = 576. The detection of two SS leptons (from t decays) is then a very selective requirement, reducing the signal to just 18 expected events. The successive cuts N jet > 3 and / E T > 150 GeV reduce this number further to 12 and 7 events, respectively. Although these two cuts do not affect significantly the signal, they are essential to reduce the background. 
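The event bookkeeping of this reference analysis can be reproduced with elementary arithmetic. In the sketch below, the luminosity, efficiency, and per-cut event counts are taken from the text, while the production cross section is inferred from the stated number of gluino pairs; the resulting visible cross section agrees with the value quoted next up to rounding.

```python
# Rough bookkeeping for the SS-dilepton reference analysis; numbers quoted in
# the text, with the production cross section inferred from the pair count.
lumi = 2.05          # fb^-1
n_pairs = 1047       # gluino pairs produced at this luminosity
eff = 0.55           # reconstruction / identification / trigger efficiency
n_after_cuts = 7     # events surviving the SS-dilepton, N_jet > 3 and MET cuts

sigma = n_pairs / lumi                 # inferred production cross section, fb
n_observable = lumi * sigma * eff      # pairs after detector efficiency
acceptance = n_after_cuts / n_observable
sigma_vis = sigma * eff * acceptance   # visible cross section, fb

print(f"sigma        = {sigma:.0f} fb")
print(f"L*sigma*eff  = {n_observable:.0f} observable pairs")
print(f"acceptance A = {acceptance*100:.1f} %")
print(f"sigma_vis    = {sigma_vis:.1f} fb  (ATLAS limit: 1.6 fb)")
```

Since the visible cross section obtained this way exceeds the quoted 1.6 fb limit, the benchmark point is excluded, in agreement with the statement that follows.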
The total acceptance after cuts is A = 1.2%, implying a visible cross section σ vis = σǫA = 3.2 fb that is above the σ vis < 1.6 fb limit established by ATLAS. Gluinos in long-lived stau models. Generically, the LLST scenario will imply a signal with some basic differences versus the neutralino LSP case: • Two extra leptons, as SUSY particles are produced in pairs and each one will chain-decay into a stau. • A strong µ-e asymmetry, as these staus taken for leptons look always like muons. • Less / E T , as the lightest neutralino does not escape detection but decays into visible ℓl pairs. Some comments about the second and third points above, however, are here in order. To be definite we will takeτ 1 ≈τ R , andẽ R ,μ R of similar mass (as suggested by flavor and other precision observables). This means that, depending on the degree of degeneracy, when aẽ is produced it may or may not decay into aτ inside the detector (e.g.,ẽ →τ eτ,τ ν e ν τ , . . .). If e escapes without decaying, it will just look like a long-lived stau. Ifẽ decays promptly (we neglect the possibility with displaced vertices) the resultingτ will take a very large fraction of the selectron energy, and none of the extra particles (charged leptons and/or photons) will have enough p T to pass the cuts. Moreover, since theẽ boost is not ultrarelativistic (typically Eẽ/mẽ = 2-5), the extra particles will not be very focused along the stau direction and will not affect substantially its isolation cuts. Therefore, we can consider that the threel R are effectively long-lived staus looking like muons. Regarding the amount of / E T in this scenario, notice that if the last step in the decay is notχ 0 → ℓ ±l∓ 1 butχ ± 1 →l ± 1 ν ℓ , then / E T will not be necessarily small. In particular, if mχ± ≫ mτ then the final neutrino will take close to half of the chargino energy. Let us then perform the ATLAS analysis assuming the LLST scenario. We will start with the case with a 150 GeV stau (together withẽ R ,μ R of similar mass) instead of the neutralinõ χ 0 1 (assumed to be mostly a Bino), which is moved to 200 GeV. In our simulation we will just change these three sleptons (l ± 1 ) to muons of the same three-momentum. The 650 GeV gluinos, like in their study, will be forced to decay through a virtual stop into the neutralino, e.g., We find that the 576 gluino pairs yield 209 SS-dilepton events after the geometric and kinematic cuts, with 185 of them including at least 4 jets of p T > 50 GeV. The large acceptance reflects the presence of the extra lepton produced in our framework. The requirement / E T > 150 GeV, however, reduces the 185 events to just 18, defining a signal that is a bit larger than the one obtained in the neutralino LSP scenario. We obtain an acceptable model if the gluino mass is increased to 890 GeV, with only 3.5 events surviving the cuts on 50 initial gluino pairs. The possibility that most signal events are cut by the / E T requirement is frequent in these LLST models. For example, if the 650 GeV gluinos are forced to decay through a virtual lightflavor squarkq (instead of the stop), we find 207 SS dileptons, 163 of them with at least four very energetic jets, but only 2.3 events with large / E T . This result is mildly dependent on the neutralino mass. If mχ0 1 grows from 200 to 400 GeV the number of SS dileptons does not change, but the N jet > 3 cut is significantly stronger (as the total energy that goes into jets is smaller) and reduces the sample to 127 instead of 163 events. 
A heavier neutralino implies that the charged lepton from its decay tends to carry more energy. If it is a τ decaying leptonically, the energy taken by neutrinos will also be larger. The / E T cut is then weaker in this case: we obtain a total of 13 events, which are enough to exclude the model. Therefore, we find that the analysis would not exclude 650 GeV gluinos for mχ0 1 As explained above, when charginos appear in the gluino chain decay these models include a larger fraction of events passing the / E T cuts, e.g., g →tt →χ + 1 bt →l + 1 ν ℓ bt . Let us consider the case where they go into relatively light Higgsinos, µ = 200, 400 GeV with M 1,2 = 700 GeV. The results are summarized in Table 1, where we have assumed the same detector efficiency as in the previous study. We see that the signal is stronger than the one in analogous neutralino LSP scenarios, specially for values of the chargino mass significantly larger than mτ . If the Higgsino mass is 400 GeV we obtain that the ATLAS analysis implies mg ≥ 980 GeV. Squarks in long-lived stau models. Let us comment on the limits implied by this analysis when gluinos are heavier and the collision only produces squarks. Notice that in the neutralino LSP scenario considered by ATLAS with the squarks decaying into qχ 0 1 the signal would not include charged leptons. In our case, however, each neutralino will go into a muon-like slepton plus a lepton, providing a signal. Actually, these events would look similar to the gluino pairs studied before but with two fewer jets (or top quarks intt * production). Notice also that in the LLST scenario an event with a pair of light-flavor squarks will not pass the N jet > 3 requirement unless the squarks are produced with extra jets (a process that is included in our simulation) and/or the final τ lepton decays hadronically but is untagged. For the analysis of stop-pair production (in Table 2), we taket 1 (mostlyt R ) at 650 GeV with the rest of squarks decoupled. The signal will include SS dileptons and 4 jets if, for example, one of the tops decays hadronically and the other one leptonically, which would provide also / E T . We find, however, that the requirement / E T > 150 GeV is too strong (it reduces the acceptance to just 2.6%) and the model can not be excluded by the current analysis. If the stop can decay both to charginos and neutralinos, e.g.,t → bχ + → bτ + ν andt * →tχ 0 →bqq ′ τ −τ + , the channel with the two tops going through chargino does not contribute and the signal is even weaker (0.07 events pass the cuts on the initial 11tt pairs). We have included also this case in Table 2. To illustrate the case with light-flavor squark production, we take the first two families of squarks (L and R) with mq = 650 GeV together with 1.5 TeV gluinos. We obtain a total of 672qq events (90% from gluino in the t or the u channels), with 274 of them including an additional jet. In Table 3 we summarize our results when the squarks are forced to decay to neutralinos (M 1 = 200 GeV and M 2 , µ = 700 GeV) or can also decay into charginos (µ = 200 GeV, M 1,2 = 700 GeV). We observe that the N jet > 3 cut is now severe and, again, the / E T requirement puts the first case well below the background. The second case, with one squark giving a chargino (χ + → ντ + ) and the other one a neutralino (χ 0 → τ − hτ + ), implies more / E T , and squarks masses below 770 GeV would be excluded by these ATLAS results. Optimized SS-dilepton search. 
The search for LLST SUSY based on SS dileptons could be optimized by slightly adapting the cuts. The same ATLAS cuts used in the neutralino LSP search are optimal only for gluino production with stop and charginos in its chain decay. In the rest of the cases the missing E T cut must be relaxed. The requirement of 4 very energetic jets is optimal in the search for gluino production, but it must be also relaxed to N jet ≥ 2 in squark searches. In that case the background (which tends to be larger) can be reduced requiring for another hard lepton that combined with any of the SS leptons is off the Z mass shell. In all the cases the SS-dilepton excess exhibits a large electron-muon asymmetry, as long-lived sleptons look always like muons. If theτ 's are obtained from Higgsino decays we obtain no ee pairs and just 3-10% of eµ events, with the rest of them defined by two muon-like particles. For staus from parent gauginos there is 1% of ee, 20-30% of eµ, and 70-80% of µµ events. Inclusive multilepton search In a recent work [21] CMS has searched for an anomalous production of multilepton events at the 7 TeV LHC for an integrated luminosity of 4.98 fb −1 . Their analysis is very complete and model-independent, it applies to any scenario with new particles producing leptons and certainly to our LLST model. They use H T , defined as the scalar sum of the p T of all reconstructed jets, and the analogous S T (which includes the leptons and missing E T ) to detect the presence of heavy physics. They classify in a systematic way all the possibilities: 4 or 3 leptons; / E T above or below 50 GeV; lepton pairs around the Z mass or not; and low or high values of H T or S T . Moreover, they separate events with 0, 1 or 2 tau leptons decaying hadronically into a single track (one-prong τ h decays). Being heavier, the third lepton family tends to be more sensitive to the new physics. This is also the case in all SUSY models, where the Higgsinos couple to taus but not significantly to muons or electrons. Events with heavy particles decaying into leptons would appear in one or another of the bins that they consider, and the estimated background (which includes double vector-boson, tt, or ttV production) is particularly small in the 4ℓ channels. In LLST SUSY any event has at least two charged leptons (the two staus) at the end of the decay chains. If the staus are produced through neutralino the proces will also include extra leptons, whereas theχ ±χ0 channel implies a neutrino (i.e., missing E T instead of ℓ ± ) and an excess of three-lepton events. Notice that if the lighter neutralinos are mostly Higgsinos the muon-like slepton will come with a τ , while gauginos will imply the three lepton flavors with the same frequency. Under this multilepton analysis the difference between gluino and squark events is not so strong as in the SS-dilepton search, since the number of jets is not a discriminating observable. Instead, the mass difference between the colored particles (g orq) produced in the collision and the chargino/neutralino mass becomes critical. It is easy to see that if this mass difference is large the event will have energetic jets and a large value of H T , whereas if it is small most of the energy will go to the leptons. In Table 4 we show for illustration the implications of a LLST model with 650 GeV gluinos that are forced to decay through virtual squark into 200 GeV (mostly) Higgsinos, which then go toτ τ orτ ν (mτ = 150 GeV). 
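Because the classification above is built on simple scalar sums, a minimal sketch of how H_T and S_T would be computed from reconstructed objects may be useful; the transverse momenta and the H_T threshold in it are hypothetical illustration values, not events or cuts from the CMS analysis (only the 50 GeV missing-E_T threshold is taken from the text).

```python
import numpy as np

# Hypothetical reconstructed transverse momenta (GeV) for one event.
jet_pt    = np.array([180.0, 95.0, 60.0, 40.0])
lepton_pt = np.array([75.0, 55.0, 30.0, 20.0])
met       = 45.0   # missing transverse energy, GeV

# H_T: scalar sum of jet p_T; S_T additionally includes the leptons and MET.
H_T = jet_pt.sum()
S_T = H_T + lepton_pt.sum() + met

# Example binning in the spirit of the multilepton classification:
# low/high H_T (placeholder threshold) and low/high MET (50 GeV, from the text).
ht_bin  = "high-HT"  if H_T > 200.0 else "low-HT"
met_bin = "high-MET" if met > 50.0  else "low-MET"
print(f"H_T = {H_T:.0f} GeV, S_T = {S_T:.0f} GeV -> "
      f"{ht_bin}, {met_bin}, N(leptons) = {len(lepton_pt)}")
```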
We have imposed the isolation cuts and the trigger efficiencies described in [21], and have not included events where opposite-sign same-flavor (OSSF) lepton pairs are within the Z-mass window (75 GeV < m_ℓℓ < 105 GeV), as they combine larger backgrounds with a smaller signal. We obtain close to 2500 g̃g̃ and g̃g̃+jet events that after cuts translate into 530 3ℓ and 74 4ℓ events. Relative to the background, the 4ℓ channels with 0 or 1 τ_h offer the strongest signal, which is enough to exclude this possibility. We find that these 4ℓ channels are very efficient to explore LLST SUSY. In particular, 950 GeV gluino and squark masses seem excluded by this analysis. In the first case we find 97 gluino pairs that after cuts introduce 12 4ℓ, zero-τ_h events where the SM expectation is 2.8, with similar figures for the squarks. In both LLST cases around 50% of the 4ℓ, zero-τ_h events are defined by 3 muon-like leptons plus one electron, 30% are 4 muons, and 20% are 2 muons and 2 electrons.

Finally, we would like to comment on another result described in the CMS study. They find one 4ℓ event in the zero-τ_h, no-Z, high-/E_T, low-H_T bin when the expectation is 0.20 ± 0.07. This observation comes together with three more 4ℓ events in the N(τ_h) = 1, no-Z, high-/E_T, low-H_T bin for a background of 0.59 ± 0.17 events (see Table 5). Although these events are not statistically significant, we think it is interesting to find out whether LLST SUSY could consistently explain a multilepton anomaly of this type. The low-H_T feature would be obtained if there is a relatively small mass difference between the colored particles (let us say squarks) and the charginos/neutralinos (mostly Higgsinos), which reduces the amount of energy going into jets. The four leptons would result when two neutralinos go into 2 τ̃τ, with both taus decaying leptonically, τ → ℓνν, in the N(τ_h) = 0 event, or one leptonically and the other one hadronically in the 3 events with one τ_h. In Table 5 we have taken 1070 GeV (light-flavor) squarks, 1050 GeV Higgsinos and 200 GeV staus, with the rest of the SUSY particles between 1500 GeV and 2 TeV. For the quoted luminosity we obtain 92 q̃q̃ pairs yielding after cuts a total of 8 4ℓ and 16 3ℓ events in different /E_T, H_T and N(τ_h) bins. The model would also have implications in the analysis based on S_T (the total transverse energy from jets, leptons and /E_T). In particular, the 4ℓ event in the N(τ_h) = 0 bin and the three events with N(τ_h) = 1 tend to have large values of S_T, as the parent particles are very heavy colored particles. Lower values of S_T would require the direct production of the parent neutralino (mostly Higgsino) and masses around µ = 400 GeV.

Summary and discussion

SUSY has been during the past decades the favorite candidate to explain the physics above the EW scale. Unfortunately, no signs of SUSY have been observed yet at the LHC. In this paper we have analyzed how model-dependent this SUSY search has been. In particular, we have focused on a scenario where the squarks and gluinos created in pp collisions always produce a long-lived stau at the end of their decay chain. We have argued that if their mass difference is large, most of the staus will be fast (β > 0.8) and will look indistinguishable from a muon. Instead of the large /E_T typical in neutralino LSP scenarios, these LLST models would be characterized by the presence of extra leptons.
We have studied how constrained they are by the recent SS-dilepton and multilepton searches performed by ATLAS [20] and CMS [21], respectively. We find that LLST SUSY provides signals with relatively low SM background. The optimal search for SS dileptons would be obtained by relaxing the cuts on /E_T. In this sense, another very recent CMS analysis [26] of SS dileptons at the LHC provides the results in each region of E_T and H_T, which would allow a complete exploration of the scenario presented here (we estimate that it could yield bounds very similar to the ones obtained in Section 3). Both in SS-dilepton and multilepton searches the larger frequency of muons relative to electrons could be an interesting observation. Notice that any model with long-lived charged particles resulting from the decay of heavier colored ones would imply an excess of muon-like particles, while the usual backgrounds (from top-quark or vector-boson decays) are µ-e symmetric. The signature in this LLST scenario is somewhat similar to the one from models with broken R-parity and a slepton decaying promptly into a lepton plus a gravitino [27,28]. Our signal, however, tends to include less /E_T, as the whole slepton (and not just half of it) is visible. Given the negative results provided so far by standard SUSY searches at the LHC, it seems necessary to also explore these other SUSY possibilities in detail in order to complete the search.
Applied Catalysis A: General Deactivation study of the hydrodeoxygenation of p -methylguaiacol over silica supported rhodium and platinum catalysts Hydrodeoxygenation of para -methylguaiacol using silica supported Rh or Pt catalysts was investigated using a fixed-bed reactor at 300 ◦ C, under 4 barg hydrogen and a WHSV of 2.5 h − 1 . The activity, selectivity and deactivation of the catalysts were examined in relation to time on stream. Three catalysts were tested: 2.5% Rh/silica supplied by Johnson Matthey (JM), 2.5% Rh/silica and 1.55% Pt/silica both prepared in-house. The Rh/silica (JM) showed the best stability with steady-state reached after 6 h on stream and a constant activity over 3 days of reaction. In contrast the other two catalysts did not reach steady state within the timeframe of the tests, with continuous deactivation over the time on stream. Nevertheless higher coking was observed on the Rh/silica (JM) catalyst, while all three catalysts showed evidence of sintering. The Pt catalyst (A) showed higher selectivity for the production of 4-methylcatechol while the Rh catalysts were more selective toward the cresols. In all cases, complete hydrodeoxygenation of the methylguaiacol to methylcyclohexane was not observed. Introduction Bio-oils upgrading can be performed using a variety of different approaches. In order to blend with crude oil, or to drop-in to existing petroleum processes, the oxygen content (30-50%) of the bio-oil has to be reduced. Deoxygenation of the bio-oils can be achieved using a zeolite cracking approach [1] or catalytic hydrodeoxygenation [2,3]. Reductive media such as hydrogen or a hydrogen donor solvent are typically used for hydrodeoxygenation or the hydrogen transfer reaction. While hydrodeoxygenation of bio-oils has been studied for decades, the catalytic mechanisms and reasons for catalyst deactivation are still not fully understood [4]. The chemical composition of the bio-oils is extremely complex and depends on the amount of cellulose, hemicelluloses and lignin in the biomass feedstock and the pyrolysis conditions. During the pyrolytic process, celluloses and hemicelluloses produce sugars and furans which undergo additional decomposition to generate esters, acids, alcohols, ketones and aldehydes [2]. The phenolic compounds (phenols, guaiacols and syringols) are produced from the lignin component. Amongst all the compounds present in the bio-oil, the phenolics are by far the most studied. The reasons are their multiple functional groups, their high proportion in the bio-oil and their tendency to promote catalyst deactivation. Another reason of the extensive use of phenolics as model compounds for bio-oil upgrading relies on the higher bond dissociation energy required to break aryl-hydroxyl or aryl-methoxy linkages compared to alkyl hydroxyl or alkyl ether linkages [5]. Within the pyrolysis of aromatic compounds, guaiacol has received the most attention [6,7]. During the upgrading process, guaiacol can undergo demethoxylation, demethylation and partial or complete hydrogenation. Various catalysts have been studied for the hydrodeoxygenation of guaiacol. In a previous study, noble metals catalysts such as Pt, Pd or Rh, when compared to conventional sulfided CoMo/Al 2 O 3 , showed better performance and exhibited a lower carbon deposit [8]. 
A comparative study of Pt/Al 2 O 3 , Rh/Al 2 O 3 and presulfided NiMo catalysts for the HDO of microalgae oil reported the better stability of the noble metal catalysts reaching a steady state after 5 h time on stream. The NiMo catalyst which did not reach steady state after 7 h reaction was prone to higher carbon deposition [9,10]. Catalyst supports also play a significant role in the stability of the catalysts. Previous works reported that use of basic magnesia supports reduced the coking of the catalyst when compared to acidic alumina supports [11]. In this paper we report on the HDO reaction of p-methylguaiacol (PMG) over silica-supported rhodium and platinum catalysts. Silica was selected as the catalyst support for this study due to its less acidic properties with the aim of reducing carbon deposition. Instead of guaiacol, HDO was performed using p-methylguaiacol as the model compound, as it is one of the main components of the pyrolytic oil formed from lignocellulosic feedstocks. Also unlike guaiacol, the methylation in the para position allowed discrimination of different reaction pathways via the generation of m-or p-cresol as illustrated in Fig. 1. The complete list of product names was given in Table S.1. Two 2.5% Rh/silica catalysts and a 1.55% Pt/silica catalyst were tested for this study. Materials p-Methylguaiacol (PMG) and reference products were purchased from Sigma-Aldrich. A 2.5% Rh/SiO 2 catalyst was obtained from Johnson Matthey and prepared by incipient-wetness impregnation rhodium chloride salt on a Grace-Davison silica support (catalyst reference M02026). A 1.55% Pt/SiO 2 and a 2.5% Rh/SiO 2 catalysts were prepared by incipient-wetness impregnation of aqueous ammonium tetrachloroplatinate(II) (Alfa Aesar, 99.9%, (NH 4 ) 2 PtCl 4 ) and Rhodium (III) chloride, (Sigma, 99.9%, RhCl 3, xH 2 O) over fumed silica (Sigma-Aldrich, 0.2-0.3 mm avg. part. Size). Detailed protocols for Pt/SiO 2 (A) and Rh/SiO 2 (A) catalysts prepared by Aston University was described in previous work [12]. Each catalyst was ground and sieved to between 350 and 850 m before use. The characteristics of the catalysts are listed in Table 1. All other reagents and solvents were purchased from Sigma-Aldrich and used without further purification. Design of the fixed-bed unit In a previous study, the eluent gas stream of guaiacol HDO was quantified using on-line GC analysis [13]. However in our system the complexity of the HDO products mixture (see Fig. 1) was not compatible with on-line GC analysis of the deoxygenated/hydrogenated oil. For example, para and meta-cresols could only be GC-differentiated after a silylation step. Therefore a collector was required to sample the condensable products at different time on stream without interrupting the reaction. In a previous paper the liquid products were collected by bubbling the vaporised products into a cold liquid such as isopropanol [7]. This technique has the advantage to give an absolute value for each product, however this technique was felt more suitable for a low pressure system. In the present study, liquefaction of the product gas stream was obtained after passing through a condenser at 5 • C. As illustrated in Fig. 2, after passing through the condenser, the gas-liquid were separated and the liquid collected by gravitation into a ¼ inch stainless steel tubing before filling the collector from the bottom. A system of valves permitted isolation of the collector for sampling without disturbing the pressure of the system. 
The light products were also collected into a U-shaped pipe, cooled to −60 °C and connected after the pressure relief valve. The analysis of the light trap showed that only 5% of the toluene (the lightest compound detected) was not condensed after passing through the condenser. No other products were detected to have passed the condenser in the gas phase except a trace of p-methylguaiacol (PMG), due to its large excess in the product stream. However, due to the lack of precision on the sampling volume (liquid hold-up in the condenser), an exact mass balance and an absolute quantification of each product could not be achieved. As a consequence, only a relative molar quantification of products was performed, with the conversion, yield and selectivity as defined in Eqs.

Catalytic hydrodeoxygenation of p-methylguaiacol

The catalytic test was performed in a continuous-flow, fixed-bed reactor over 0.45 g of silica-supported noble metal catalyst. Similar catalyst bed volumes of 0.84-0.88 cm³ were estimated from the bulk densities of the catalysts of 0.51, 0.52 and 0.54 g cm⁻³ for the Rh/SiO₂ (A), Pt/SiO₂ (A) and Rh/SiO₂ (JM), respectively. With a reactor inner diameter of 0.40 cm, the catalyst bed length was around 6.7-7.0 cm. The catalyst was pre-reduced in situ before reaction at 300 °C for 2 h under 100 mL min⁻¹ of 40% H₂/argon. After the catalyst was reduced, p-methylguaiacol (PMG) was pumped into the gas flow and vaporised at 200 °C. The reaction temperature was 300 °C with a hydrogen partial pressure of 4 barg, giving a H₂:PMG molar ratio of 15. The total pressure was made up to 10 barg using argon. The weight hourly space velocity (WHSV) of PMG was 2.5 h⁻¹, while the gas hourly space velocity (GHSV) was 7200 h⁻¹ with a gas flow rate of 100 mL min⁻¹. Gas mass flow controllers were used to feed hydrogen and argon, while a Gilson HPLC pump was used to feed the p-methylguaiacol. In order to avoid condensation, gas lines before and after the reactor were heated to 220 °C. A condenser at 5 °C was used to liquefy the products before sampling. The HDO products (100-200 mg) were diluted in 5 mL of dichloromethane (DCM).

Analytes preparation

In order to fully quantify the products, and due to the significant variation in the products' abundance, three distinct solutions were prepared from the same mixture of products/internal standards (IS). First, an aliquot of the HDO products in DCM (100 µL) was mixed with 50 µL of IS (C10 at 0.86 and C17 at 10.2 g L⁻¹). Then, 20 µL of this mix was silylated while the remaining mixture was diluted with 0.5 mL of dichloromethane. Finally, 5-10 µL of the diluted solution was also silylated to quantify the PMG, methylcatechol and cresol products. The non-silylated solutions were injected to quantify the light products such as methylcyclohexane and toluene, but also the 4-methyl-2-methoxycyclohexanone, which co-eluted with the trimethylsilyl methylcyclohexanol. This technique permitted a full quantification of minor and major products.

GC/FID experimental conditions

Qualitative analyses of the HDO products were performed on a Shimadzu GC-2010 coupled to a MS-QP2010S. Samples were injected on a ZB-5MS capillary column (30 m × 0.25 mm × 0.25 µm). The quantitative analyses were performed on an HP 5890 gas chromatograph fitted with a Supelco DB-5 capillary column (30 m × 0.32 mm, 1 µm film thickness).
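Because only relative molar amounts are available, the conversion, yield and selectivity can be illustrated with the standard molar definitions. The sketch below assumes these conventional forms (the specific equations referenced above are not reproduced here) and uses arbitrary example numbers.

```python
# Minimal sketch of the relative molar bookkeeping described above: conversion of PMG,
# and molar yield and selectivity of each product. The definitions are the standard
# ones and are assumed, not copied, from the paper's (elided) equations.

def conversion(n_pmg_out, n_products):
    """Conversion of p-methylguaiacol from relative molar amounts at the reactor exit."""
    n_converted = sum(n_products.values())
    return n_converted / (n_converted + n_pmg_out)

def yields_and_selectivities(n_pmg_out, n_products):
    n_converted = sum(n_products.values())
    n_fed = n_converted + n_pmg_out                  # relative molar basis
    yields = {p: n / n_fed for p, n in n_products.items()}
    selectivities = {p: n / n_converted for p, n in n_products.items()}
    return yields, selectivities

# Example with arbitrary relative molar amounts:
products = {"p-cresol": 30.0, "m-cresol": 15.0, "4-methylcatechol": 10.0}
print(conversion(45.0, products))                    # ~0.55
print(yields_and_selectivities(45.0, products))
```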
Quantification was obtained using decane C10 and heptadecane C17 as internal standards and the relative response coefficients were based on exact products when possible or on response coefficients of similar product structure for non-commercial products. Temperature programmed oxidation analysis Temperature programmed oxidation (TPO) was carried out using a combined TGA/DSC SDT Q600 thermal analyser coupled to an ESS mass spectrometer for evolved gas analysis. A sample loading of 10-15 mg was used and samples were typically heated from 30 • C to 900 • C using a ramp rate of 10 • C min −1 under 2% O 2 /Ar, with a flow rate of 100 mL min −1 . For mass spectrometric analysis, various mass fragments were followed such 18 (H 2 O), 28 (CO), and 44 (CO 2 ). All TGA work was kindly carried out by Andy Monaghan at the University of Glasgow. Surface area and pore volumes distribution Nitrogen porosimetry was conducted on the Quantachrome Nova 4000e porosimeter and analysed with the software of NovaWin version 11. Samples were degassed at 120 • C for 2 h under vacuum conditions prior to analysis by nitrogen adsorption at −196 • C. Adsorption/desorption isotherms were recorded for all parent and Pt-impregnated and Rh-impregnated silicas. The BET (Brunauer-Emmett-Teller) surface areas were derived over the relative pressure range between 0.01 and 0.2. Pore diameters and volumes were calculated using the BJH method according to desorption isotherms for relative pressures >0.35. Metal dispersion and surface areas Pt and Rh dispersions were measured via CO pulse chemisorption on a Quantachrome ChemBET 3000 system. Samples were outgassed at 150 • C under flowing He (20 mL min −1 ) for 1 h, prior to reduction at 150 • C under flowing hydrogen (10 mL min −1 ) for 1 h before room temperature analysis (this reduction protocol is milder than that employed during Pt or Rh impregnation and does not induce particle sintering). A CO:Pt surface stoichiometry of 0.68 was assumed according to the literature [14,15]; the CO-Rh interaction was much more complicated because the CO bond is very sensitive to the particular electron distribution, such as the spinstates and the initial occupations of the Rh 5 s electronic states [16]; therefore, the CO:Rh ratio was difficult to determine. For estima- tion, a CO:Rh surface stoichiometry of 1 could be assumed on the basis of literature [16,17]. CHNS elemental analysis Carbon loadings (accumulated on the spent catalysts) were obtained using a Thermo Flash 2000 organic elemental analyser, calibrated to a sulphanilimide standard, with the resulting chromatograms analysed using Thermo Scientific's Eager Xperience software. Vanadium pentoxide was added to aid sample combustion. Raman spectroscopy analysis Raman spectra of post reaction catalysts were obtained with a Horiba Jobin Yvon LabRAM High Resolution spectrometer. A 532.17 nm line of a coherent Kimmon IK series He-Cd laser was used as the excitation source for the laser. Laser light was focused for 10 s using a 50× objective lens and grating of 600. The scattered light was collected in a backscattering configuration and was detected using nitrogen cooled charge-coupled detector. A scanning range of 100 and 4100 cm −1 was used. p-Methylguaiacol conversion and selectivity The activity/selectivity of the three catalysts was studied with the time on stream. The main purpose of these experiments was to determine if the catalyst activity reached a steady state or if deactivation was continuous. 
Low activity of the catalyst was not an issue but identification of the variation of catalyst activity and product selectivity were critical for a future kinetic study of the hydrodeoxygenation of p-methylguaiacol. Catalyst testing was performed at 300 • C, a WHSV of 2.5 h −1 , 4 barg hydrogen and a H 2 :PMG molar ratio of 15:1. The rhodium/silica (JM) catalyst and both Rh/silica (A) and Pt/SiO 2 (A) catalysts were studied over several days (see Fig. 3). Rh/SiO 2 (JM) showed fast deactivation initially but this was followed by a period of constant activity, whereas although the Rh/SiO 2 (A) showed the same deactivation profile initially, no steady state was observed. The deactivation profile of the Pt/SiO 2 catalyst was different from that of the rhodium catalysts in that it exhibited a constant loss of activity. This linear deactivation has previously been reported on HDO of guaiacol over Pt/Al 2 O 3 and Pt/MgO [11]. The deactivation of the Pt/silica was plotted (Fig. 4) using the relationship, ln[X t0 /(1 − X t )] = ln(k w ) − k d t, where, X t is the conversion the reactant at time t, k is the rate constant, w represents weight time, k d the deactivation rate constant and t is time [18]. The deactivation plot gave a deactivation rate constant of 0.02 h −1 (R 2 = 0.92). However, the Rh/Silica (A) data fitted a logarithmic curve with regression coefficient (R 2 ) of 0.99 which showed that the deactivation mechanism was not time independent. The catalysts initial selectivity and those after ∼12 h and ∼32 h TOS are shown in Fig. 5. Compared to the Pt/SiO 2 (A) and the Rh/SiO 2 (A), the Rh/SiO2 (JM) was the only catalyst that showed constant selectivity from 10 h to 33 h TOS. After 32 h on stream, 42 mol% of the products were p-methylcatechol with Rh/SiO 2 (A) whereas with Rh/SiO 2 (JM) the selectivity to p-methylcatechol was only 12 mol%. This significant variation may be explained by the different nature of silica support used. As illustrated in Fig. 6, Rh/SiO 2 (A) produced p-methylcatechol with the same rate from 12 to 72 h TOS. While deoxygenation and hydrogenation reaction were deactivated over time, the demethylation of the PMG was not affected. The high demethylation activity for the Rh/silica (A) can be suggested by higher acidity of the silica support. For the Pt/SiO 2 (A), the selectivity toward the 4-methyl catechol increased from 8 to 25 mol% from 1 h to 32 h TOS while the selectivity toward the m-and p-cresol decreased from 22.6 and 40.1 mol% to 13.6 and 34.0 mol%. Comparing the two rhodium catalysts some notable differences are observed. Initially the Rh/silica (A) shows a high selectivity to toluene and a low selectivity to 4-methyl catechol but by 32 h TOS the selectivity has reversed. This behaviour raises the question as to whether it is possible to remove two OH groups before desorption, effectively by-passing the formation of p-cresol as an intermediate. This behaviour is not seen with the Rh/silica (JM) catalyst where the selectivity is relatively unchanged over the TOS. There are two significant differences between the rhodium catalysts. Different silica supports were used and the metal crystallite size is different (Table 1). Their preparations were identical so the significant difference of products selectivity between the two rhodium catalysts could be attributed to either the nature of the silica support or a metal particle size effect. A more in-depth investigation of these effects will be required to fully interpret these changes in selectivity. 
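The deactivation relationship quoted above reduces to a straight line of ln[X/(1 − X)] against time, with slope −k_d. A minimal sketch of such a fit is given below, applied to synthetic conversion data generated with k_d = 0.02 h⁻¹ (the value reported for Pt/silica); it is an illustration, not the authors' processing script, and the intercept value used in the synthetic data is arbitrary.

```python
# Linear fit of the first-order deactivation form ln[X/(1 - X)] = ln(k*w) - k_d * t.
import numpy as np

def fit_deactivation(t_hours, x):
    """Return (k_d, intercept, R^2) from conversion x(t) via a straight-line fit."""
    y = np.log(x / (1.0 - x))
    slope, intercept = np.polyfit(t_hours, y, 1)
    residuals = y - (slope * t_hours + intercept)
    r2 = 1.0 - np.sum(residuals**2) / np.sum((y - y.mean())**2)
    return -slope, intercept, r2

# Synthetic data consistent with k_d = 0.02 h^-1 and an arbitrary ln(k*w) = ln 4:
t = np.linspace(1.0, 72.0, 30)
x = 1.0 / (1.0 + np.exp(-(np.log(4.0) - 0.02 * t)))  # inverts the linear relationship
print(fit_deactivation(t, x))                         # k_d ~ 0.02, R^2 ~ 1
```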
Yields of p-cresol, m-cresol and 4-methylcatechol as function of time

The variations of the molar yield of the principal products (those with yields >2 mol%) are shown in Fig. 6 for the three catalysts. With the Rh/SiO₂ (JM) and Pt/SiO₂ (A) catalysts, p-cresol was the main product. For the Rh/SiO₂ (A), p-cresol was the main product for the first 24 h TOS; subsequently methylcatechol was the main product. As illustrated in Fig. 6, Pt/silica (A) showed a conversion rate of p-methylguaiacol to 4-methylcatechol similar to that of Rh/silica (A) from 24 h to 72 h. However, the Pt/silica catalyst showed a different deactivation profile for the three main products (p-cresol, m-cresol and 4-methylcatechol), with the yield of 4-methylcatechol decreasing more slowly than the yields of both para- and meta-cresol, suggesting that catalyst deactivation affected the demethylation of the p-methylguaiacol less than the demethoxylation and direct deoxygenation. Platinum has been reported to favour the demethylation of guaiacol and this could explain the high demethylation activity in the early stages [19]. In the case of both Rh catalysts, there was low conversion of p-methylguaiacol to 4-methylcatechol initially, which then increased and stabilized to a constant production after 10 h or 5 h on stream for Rh/silica (A) or Rh/silica (JM), respectively. Indeed, for the Rh/silica (A) catalyst, 4-methylcatechol becomes the principal product. This is explained by the concomitant loss of deoxygenation of the methylcatechol to m- or p-cresol. The yields of the p-cresol and m-cresol also stabilized after 10 h on stream for the Rh/silica (JM) catalyst, consistent with it achieving steady state. On the other hand, the Rh/silica (A) and Pt/silica did not reach a steady state within the time of the study. It could be speculated that Rh/silica (A) required a longer reaction time in order to reach a steady state, as illustrated by the deactivation profile in Fig. 3. However, in the case of the Pt/silica catalyst, the continuous deactivation profile suggested that the catalyst may not reach a steady-state condition. Extended testing would be required to determine whether a low-activity steady state was reached or whether the system was subject to continuous deactivation. As illustrated in Fig. 1, the production of m-cresol or p-cresol required the demethylation and direct deoxygenation. The p-cresol can also be produced directly from the demethoxylation of the methylguaiacol. As illustrated in Figure S anced. This can be explained by the higher activity of both catalysts toward the demethylation (Fig. 6). However, while the Rh (JM) showed a constant ratio between the m- and p-cresol, the Rh (A) and Pt (A) showed an increase of the ratio with TOS. As a consequence, the pathways for the production of m-cresol and p-cresol were not affected in the same way by the deactivation of the Pt (A) and Rh (A) catalysts. It could be suggested that the demethoxylation was less affected than the direct deoxygenation. However, a different rate of hydrogenation of the p-cresol and m-cresol could also explain this difference and will be discussed in the next section. According to the reaction pathways illustrated in Fig. 1, the molar ratio of p-cresol (1), m-cresol (5), p-methylcatechol (3) and the hydrogenated products of the p-cresol (6-7), m-cresol (8-9) and p-methylcatechol (10-11) was calculated for the three catalysts. As illustrated in Fig.
7A, the hydrogenation of the p-cresol and m-cresol on the Pt/silica (A) catalyst was in the same range and followed the same loss of activity. From 3 h to 72 h on stream the ratio increased from 3.9 and 3.7 to 8.5 and 6.4 for the p-cresol and mcresol, respectively. In the case of both Rh catalysts ( Fig. 7B and C), the hydrogenation rate of m-cresol was around twice the rate of hydrogenation of the p-cresol after 48 h. For the Rh/silica (A) catalyst, while the loss of hydrogenation activity for the p-cresol was in the same range than for Pt/silica catalyst, the loss of hydrogenation activity related to m-cresol was far more pronounced. Finally, the Rh/silica (JM) catalyst showed a loss of hydrogenation activity for both cresols up to 10 h on stream followed with a constant ratio around 18 and 10 for p-cresol and m-cresol. As illustrated in Fig. 7, the evolution of 4-methylcatechol hydrogenation was less affected for the Pt/silica (A) and Rh/silica (JM) catalysts than the Rh/silica (A) catalyst. In the case of Rh/silica (A), the molar ratio of methylcatechol(5):hydrogenated products of methylcatechol (10)(11) increased from 2 to 9 after 56 h on steam while the ratio only increased from 2 to 2.8 for Rh/silica (JM). In contrast, the ratio slightly decreased from 2.2 to 1.4 for the Pt catalyst. While the hydrogenation of the cresols over the Pt catalyst decreased with time on stream, the hydrogenation of the catechol inversely increased leading to a constant overall selectivity to hydrogenated products. In all cases, the ratio between non hydrogenated:hydrogenated products showed that the catalysts were more active for the hydrogenation of the methylcatechol than the cresols. This suggested that the presence of vicinal alcohol favoured the adsorption of the catechol on the catalysts. Characteristics of the Pt and Rh post-reaction catalysts Previous work had suggested that catalyst deactivation was due to carbon deposition on the surface of the catalyst with the initiation of coke formation suggested to be located at the acid site of the support [20]. As illustrated in Fig. 8, TPO analysis of the spent catalysts clearly showed the presence of carbonaceous deposits. It is interesting to note that the Rh/silica (JM) catalyst showed the highest amount of carbon laydown yet it reached a steady state in contrast to the other catalysts. It is also notable that the two catalysts with the same support show quite similar mass loss, which could be expected if the carbon deposit was principally associated with the support. The extent of overall carbon laydown however is very low when considered as a percentage of the feed. Over the Rh/silica (JM) 0.18% of the feed was deposited on the catalyst, while for Rh/silica (A) only 0.03% was deposited. Over Pt/silica (A) the amount deposited was only 0.04% of the feed. Looking in detail at the TPO the Rh/silica (JM) catalyst showed mass loss at low temperature (∼160 • C) suggesting the released of adsorbed species as there is no concomitant generation of carbon dioxide. There are weight losses resulting in carbon dioxide evolution at ∼250 • C and 300 • C. At these temperatures the surface species are likely to be pseudo-molecular with a significant H:C ratio. There are then two weight loss events at 445 • C and 469 • C, which are accompanied with carbon dioxide evolution. 
These weight losses reveal the presence of two similar carbonaceous deposits; the lower temperature species is unique to the Rh/silica (JM) catalyst, while the higher temperature event is common to all three catalysts. The rapidity of the weight loss at 469 • C indicates fast combustion of the deposit suggesting that this deposit is hydrocarbonaceous in nature and is associated with the metal. There is a further weight loss event at 583-640 • C, on all the catalysts, which is accompanied with carbon dioxide evolution. This high temperature weight loss can be associated with the combustion of graphitic species (Raman spectroscopy revealed a weak G-band at ∼1590 cm −1 on all the catalysts) on the silica supports, which would be consistent with the loss in surface area as measured by BET (Table 1). The carbon content of the used catalysts was also determined by CHN analysis (Table 1) and showed the same trend as that found with the TGA. The surface area of the Rh/silica (JM) catalyst was nearly twice that of the Pt/silica and Rh/silica (A) catalysts which could explain the higher carbon deposition. Reduction of the surface areas of 20%, 30% and 43% for Rh (A), Pt (A) and Rh (JM) respectively, are attributed to the carbon blocking pores. As illustrated in Table 1, after reaction the metal dispersion was reduced in all three catalysts. The Pt catalyst showed the largest drop with metal dispersion reducing from 7.2 to 4.8% and a concomitant increase in metal crystallite size. Sintering of Pt/silica catalysts under HDO conditions has been observed in a previous study and can be explained by a weak interaction between Pt and the silica support [21]. This sintering, in conjunction with the carbon laydown, would explain the continuing loss of activity of the Pt/silica catalyst. In contrast, the metal dispersion of the Rh/silica (JM) and Rh/Silica (A) was only reduced from 2.8 to 2.6 and from 6.8 to 6.1, respectively, indicating a much stronger interaction between support and Rh metal. Finally, by the end of the reaction, the metal surface area of the Rh/silica (A) was three times higher than that of the Rh/silica (JM) catalyst, yet the p-methylguaiacol conversion was lower, indicating that there was not a simple correlation between metal surface area and activity. Conclusion Both Rh/silica catalysts showed both similar deactivation profiles with a fast deactivation at early time on stream followed with slow deactivation for the Rh/silica (A) or constant activity for the Rh/silica (JM). The Pt/silica (A) catalyst showed continuous deactivation correlated with metal sintering and carbon laydown. The carbon deposit, higher in the case of the Rh/silica (JM) compared to the Pt and Rh/silica (A), could be explained by the different nature of the silica support. Detailed analysis of the product distributions with time revealed that the specific activity of the catalysts for demethylation, demethoxylation and hydrogenation were affected differently by the catalyst deactivation. The demethylation activity was the least affected by the catalyst deactivation, whereas hydrogenation activity was severely decreased for the Rh/silica (A) catalyst. This behaviour suggests that different sites are responsible for demethylation and hydrogenation activity. The Pt catalyst showed a shift of hydrogenation selectivity from cresols to 4-methylcatechol and the production of 4-methyl cyclohexan-1,2-diol. 
TPO analysis of the deposited carbon revealed at least three carbonaceous species on the surface of the rhodium catalysts, while only two different carbon species were detected on the platinum catalyst. Only the Rh/silica (JM) reached a prolonged steady state after 10 h on stream and modelling of the kinetics of PMG HDO will be reported in a subsequent paper.
ULF wave activity during the 2003 Halloween superstorm : multipoint observations from CHAMP , Cluster and Geotail missions We examine data from a topside ionosphere and two magnetospheric missions (CHAMP, Cluster and Geotail) for signatures of ultra low frequency (ULF) waves during the exceptional 2003 Halloween geospace magnetic storm, when Dst reached∼ −380 nT. We use a suite of waveletbased algorithms, which are a subset of a tool that is being developed for the analysis of multi-instrument multi-satellite and ground-based observations to identify ULF waves and investigate their properties. Starting from the region of topside ionosphere, we first present three clear and strong signatures of Pc3 ULF wave activity (frequency 15–100 mHz) in CHAMP tracks. We then expand these three time intervals for purposes of comparison between CHAMP, Cluster and Geotail Pc3 observations but also to be able to search for Pc4–5 wave signatures (frequency 1–10 mHz) into Cluster and Geotail measurements in order to have a more complete picture of the ULF wave occurrence during the storm. Due to the fast motion through field lines in a low Earth orbit (LEO) we are able to reliably detect Pc3 (but not Pc4–5) waves from CHAMP. This is the first time, to our knowledge, that ULF wave observations from a topside ionosphere mission are compared to ULF wave observations from magnetospheric missions. Our study provides evidence for the occurrence of a number of prominent ULF wave events in the Pc3 and Pc4–5 bands during the storm and offers a platform to study the wave evolution from high altitudes to LEO. The ULF wave analysis methods presented here can be applied to observations from the upcoming Swarm multi-satellite mission of ESA, which is anticipated to enable joint studies with the Cluster mission. Introduction Magnetospheric ultra low frequency (ULF) waves play an important role in the overall dynamics of geospace plasmas and particularly in radiation belt dynamics (e.g., Baker and Daglis, 2007).ULF waves are large-scale phenomena, and in principle, simultaneous observations at many locations are needed to understand in depth their generation and propagation (Takahashi and Anderson, 1992).Oscillations with quasi-sinusoidal waveform are called pulsations continuous (Pc).Those with waveforms that are more irregular are called pulsations irregular (Pi) and are associated with magnetospheric substorms.In particular (Jacobs et al., 1964), continuous pulsations with frequencies in the range 1 mHz to 5 Hz, denoted as Pc1-2 (100 mHz-5 Hz), Pc3 (20-100 mHz), Pc4 (7-20 mHz), and Pc5 (1-7 mHz), have been extensively studied using measurements from both space-borne and groundbased instruments for many years (for a recent review see Menk, 2011).They are broadly of two types, depending on whether their energy source originates in the solar wind on the dayside or from processes within the magnetosphere (e.g., substorms and other instabilities in the magnetotail) on the nightside. A large number of past studies employing measurements from ground magnetometers, radar and geosynchronous satellites were focused on their polarization properties, Moreover, the three time intervals that Pc3 ULF wave activity was initially identified in CHAMP observations and were selected and further expanded into two-hour intervals for analysis using Cluster and Geotail measurements are marked in red. 
occurrence distribution, dependence on solar wind parameters, relation to geomagnetic storms and substorms and lastly, associated particle flux modulations.These studies revealed that toroidal and poloidal mode field line resonances together with compressional Pc5 waves account for most of the observed coherent pulsations observed in the outer magnetosphere (Anderson et al., 1990). Multipoint observations show that upstream ULF waves in the Pc3-4 bands are generated in the foreshock region and entering and propagating through the magnetosphere as compressional waves (Sakurai et al., 1999;Constantinescu et al., 2007;Heilig et al., 2007;Clausen et al., 2009). Externally excited ULF waves are intimately related to shear instabilities at the dawn and dusk flanks of the magnetopause (Engebretson et al., 1998) or driven by quasiperiodic variations of the solar wind dynamic pressure on the dayside magnetopause (Kepko et al., 2002).Solar wind upstream waves may also directly enter near the equatorial noon subsolar point or the high latitude cusp regions (Kessel et al., 2004, and references therein).ULF wave excitation is also caused by sudden impulses (solar wind pressure pulses) on the magnetosphere (Southwood and Kivelson, 1990;Zong et al., 2009;Sarris et al., 2010). In particular, observations of ULF waves with discrete frequencies of 1.3 mHz, 1.9 mHz, 2.8 mHz and 3.4 mHz provide evidence for the existence of magnetohydrodynamic (MHD) waveguide or cavity modes in the magnetosphere (Samson et al., 1992;Lee et al., 2007).The characteristic frequencies of ULF waves have been, however, found to be widely distributed, suggesting the existence of alternative sources from which they draw their energy. Low-frequency instabilities of the ring current plasma during periods of intense geomagnetic activity are closely related with waves observed during geomagnetic storms (Ukhorskiy et al., 2009).The drift and bounce motions of energetic particles of the ring current may lead to fluctuations of electric and magnetic fields in the magnetosphere and ionosphere in the case of excess available energy (Baddeley et al., 2002). The ULF wave characteristics vary throughout the magnetosphere because the geomagnetic field and magnetospheric plasma are strongly inhomogeneous on the wavelength scale of these waves.Magnetospheric ULF waves are spatially constrained by the magnetopause, which defines the boundaries of the magnetosphere, as well as by the extent of the plasmasphere and ionosphere. Moreover, inhomogeneities limit the accessibility of ULF waves to particular regions of the magnetosphere.For instance, the ratio of plasma density on the two sides of the plasmapause, separating the cold dense plasma in the inner magnetosphere and the hot low-density in the outer magnetosphere, can be larger than a factor of 100 (Dent et al., 2006). Furthermore, the frequency of ULF waves propagating through the magnetosphere is determined by the plasma composition.Enhanced populations of heavy ions (He + and O + ), which have been observed during magnetospheric substorms (e.g., Daglis et al., 1994;Daglis and Axford, 1996) and, especially, during geospace magnetic storms (e.g., Daglis, 1997) have a profound effect on the wave resonant frequency and harmonics (e.g., Thorne and Horne, 1997).Unlike the case of an O + ions torus, a steep plasmapause observed in the He + ions is followed by an increase in the ULF wave resonance frequency (Fraser et al., 2005). 
The Halloween 2003magnetic storm (29 October 2003-31 October 2003) was a rare event that caused an extreme distortion of the outer Van Allen radiation belt (Baker et al., 2004), which was depleted and then re-formed closer to the Earth.This event offered a unique opportunity to study the wave-particle interactions in the radiation belts (Horne et al., 2005;Loto'aniu et al., 2006) and provided an ideal set of conditions to examine magnetospheric/ionospheric responses to solar wind (Harnett et al., 2008).The Halloween 2003 magnetic storm had a double peak (see the Dst index plot from 27 to 31 October 2003 in Fig. 1) and it was associated with two coronal mass ejections (CMEs) that took place on 28 and 29 October 2003, respectively. Herein, we analyze magnetic field measurements recorded on 30 and 31 October 2003 by the low Earth orbit (LEO) CHAMP satellite, and the Cluster and Geotail spacecraft. Starting from CHAMP data and using a wavelet analysis technique, we present three representative intervals with clear ULF wave signatures in the Pc3 frequency band.For these time intervals, we also present corresponding observations from Cluster and Geotail.The simultaneous occurrence of Pc3 waves at various satellites offers a useful platform to study the wave occurrence and evolution from high altitude observations to LEO, and from the outer magnetosphere to the topside ionosphere.We also study the occurrence of Pc4-5 waves in the Cluster and Geotail locations throughout the storm.The results of our approach, combining observations from a LEO satellite with magnetospheric multi-satellite missions, demonstrates the applicability of our methods to data of the upcoming Swarm three-satellite constellation of ESA.Swarm is the first LEO multi-satellite mission to study the near-Earth electromagnetic environment. Data analysis based on wavelet transforms ULF waves have been traditionally identified through visual inspection of series of spectrograms based on the Fast Fourier Transform (FFT).Motivated by the continuously increasing amount of data collected by space missions and groundbased instruments, algorithms have been developed based on FFT spectra to automatically examine spectrograms and identify ULF waves.Therefore, a variety of automated FFT routines exist (Anderson et al., 1992;Loto'aniu et al., 2005;Bortnik et al., 2007). Since the 1990s, the wavelet spectral analysis has become popular, as it allows the quantitative monitoring of localized variations of power within the time series data (for example, Alexandrescu et al., 1996;Balasis et al., 2005Balasis et al., , 2006;;Balasis and Mandea, 2007).Furthermore, Heilig et al. (2007) developed an algorithm for the selection of possible ULF waverelated pulsation events from both ground and space magnetometer data.Other examples of the application of wavelets (continuous and discrete) to space data can be found, for instance, in Nose et al. (1998) and Murphy et al. (2009). In some way, the wavelet transform is a generalized form of the Fourier transform.The main difference of wavelets is that the temporally confined basis functions used in the wavelet transform to decompose a time series can be stretched with a flexible resolution in both frequency and time.They narrow while focusing on high-frequency components and widen while searching for the low-frequency background.Thus, the frequency range of the analyzing wavelets corresponds to the spectral content of time series components (Torrence and Compo, 1998). 
The wavelet transform can be superior to the Fourier spectral analysis when the spectral properties of transient, impulsive, short-lived or non-stationary signals need to be analyzed.While the Fourier transform provides fixed frequency resolution and is well suited for the representation of a continuous, long-lasting signal, the wavelet analysis can provide sufficient frequency resolution to a continuous wave band at the lower frequency range of the wavelet window, and better time resolution at the higher-frequency band of the wavelet window at the expense of frequency resolution.If the nature of the investigated signal is well known in advance, one can judiciously select either the Fourier or wavelet transform for the better representation and analysis of the signal in the frequency domain.However, when it is necessary to search for either continuous or impulsive signals and the nature of the signal is not a priori known, then a wavelet transform is more appropriate, particularly if the frequency band being investigated is carefully placed in the middle range of the wavelet frequency range so that both time and frequency resolution are carefully balanced.Our goal of investigating ULF waves on the ground and in space, which can be continuous, impulsive, stationary or propagating, points to the wavelet as the most appropriate spectral analysis technique for the search and investigation, particularly in the form of an automated tool. It is therefore not surprising that wavelet analysis is becoming a common tool for analyzing localized variations of power within a time series.By decomposing a time series into time-frequency space, one is able to determine both the dominant modes of variability and how those modes vary in time.The advantage of analyzing a signal with wavelets as the analyzing kernel is that it enables one to study features of the signal locally with a detail matched to their scale.Balasis et al. (2005) performed wavelet spectral analysis of magnetic field magnitude data derived from CHAMP 1 Hz vector fluxgate magnetometer (FGM) measurements, covering a period of approximately three years (August 2000-May 2003).The wavelet spectral analysis of CHAMP data proved to be capable of detecting, identifying and classifying artificial noise sources, such as instrument problems and pre-processing errors, as well as high frequency natural signals of external fields, including ionospheric plasma bubbles and magnetospheric ULF waves. Furthermore, Balasis and Mandea (2007) successfully used the same technique to look at CHAMP satellite data from 2004 to 2005 for ULF wave activity a few days before and after the great Sumatran earthquakes on 26 December 2004 with a magnitude of 9.3 and 28 March 2005 with a magnitude of 8.7.The same wavelet tools have been applied by Mandea and Balasis (2006) to satellite magnetic data with the aim to investigate the effects of a giant flare from magnetar SGR 1806-20 on the near-Earth electromagnetic environment, thus showing remarkable applicability to the delineation of fine electromagnetic structures contained within geophysical signals.There are several parameters of the wavelet transform, such as frequency range, power spectral density amplification factor, which need to be correctly adjusted in order to capture different kind of anomalous signals.In our present study, we apply the same values determined by Balasis and Mandea (2007) for tuning the wavelet transform. 
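For a concrete picture of the kind of analysis described here, the sketch below computes a Morlet-based continuous wavelet power spectrum over the Pc3 band using PyWavelets. It is a minimal illustration rather than the authors' tool: the complex-Morlet bandwidth and centre-frequency parameters, the logarithmic frequency grid and the synthetic test signal are all illustrative choices.

```python
# Morlet continuous wavelet power spectrum of a 1 Hz magnetometer series (sketch).
import numpy as np
import pywt

def pc3_wavelet_power(b_series, dt=1.0, f_min=8e-3, f_max=128e-3, n_freqs=64):
    wavelet = "cmor1.5-1.0"                        # complex Morlet (assumed parameters)
    freqs = np.geomspace(f_min, f_max, n_freqs)    # target frequencies in Hz
    scales = pywt.central_frequency(wavelet) / (freqs * dt)
    coeffs, freqs_out = pywt.cwt(b_series, scales, wavelet, sampling_period=dt)
    power = np.abs(coeffs) ** 2                    # |W(f, t)|^2, shape (n_freqs, n_samples)
    return freqs_out, power

# Synthetic test: a 40 mHz wave packet on noise should peak near 40 mHz.
t = np.arange(0, 2700, 1.0)                        # ~45 min half-orbit at 1 Hz
packet = 2.0 * np.sin(2 * np.pi * 0.04 * t) * np.exp(-((t - 1350) / 300) ** 2)
freqs, power = pc3_wavelet_power(packet + 0.5 * np.random.randn(t.size))
print(freqs[np.argmax(power.max(axis=1))])         # ~0.04 Hz
```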
Specifically, we use the continuous wavelet transform with the Morlet wavelet as the basis function on magnetic field measurements from the LEO satellite CHAMP, the Cluster spacecraft and the Geotail satellite.

We use a Mean Field-Aligned (MFA) coordinate system in the analysis of the satellite observations in order to separate ULF field variations perpendicular to, as well as along, the magnetic field direction. The unit vectors of the MFA coordinate system are defined as follows: the parallel component in the coordinate system, p, is obtained from a 20-min running average of the instantaneous magnetic field. The other components are then chosen to be φ = p × R / |p × R|, where R is the radius vector of the satellite, and r = φ × p. Thus, φ is the azimuthal component and is positive eastward, while r, completing the orthogonal system, is meridional and points radially outward at the magnetic equator. ULF waves in the p, φ and r directions are referred to as compressional, toroidal and poloidal, respectively. It should be noted that the average magnetic field has been subtracted from the projection of the magnetic field onto the average unit vector p. The 20-min running average applied to the field during coordinate rotation acts as a high-pass filter.

(Figure caption: From top to bottom are shown the time series of the CHAMP total magnetic field, calculated from the 1 Hz FGM data after applying a 16 mHz high-pass filter, its corresponding wavelet power spectrum, as well as the temporal variation of the CHAMP electron density data along with its magnetic latitudinal dependence, indicating that the satellite was moving from the North to the South Pole. The corresponding MLT values are also given at the bottom of the graph. A prominent Pc3 ULF wave is observed starting at around 07:52 UT and lasting ∼15 min. The strong ionospheric currents' signatures near the poles that cover lower frequencies can also be seen in this plot.)

Nonetheless, the standard transformation of LEO satellite measurements into an Earth-oriented frame adds undesirable attitude noise to the data (Heilig et al., 2007); thus, generating clean vector data requires a lot of manual intervention. However, as the compressional (field-aligned) component dominates over the transverse components, the wave signature can well be derived from the total field variations for a LEO satellite (Jadhav et al., 2001; Heilig et al., 2007). Therefore, the CHAMP total magnetic field can be considered a fairly good approximation of its compressional component for studying ULF waves.
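The MFA decomposition just described translates directly into a few lines of array algebra. The sketch below assumes the field B(t) and the satellite position R(t) are given as N×3 arrays in a common frame; the 20-min boxcar average and the cross-product construction follow the definitions above, while the edge handling of the running mean is a simplification.

```python
# Mean-field-aligned (MFA) decomposition of a magnetic field time series (sketch).
import numpy as np

def mfa_components(B, R, dt=4.0, window_s=1200.0):
    """B, R: (N, 3) arrays; dt in seconds; 20-min boxcar defines the parallel direction."""
    n = max(1, int(round(window_s / dt)))
    kernel = np.ones(n) / n
    B_avg = np.column_stack([np.convolve(B[:, i], kernel, mode="same") for i in range(3)])
    p = B_avg / np.linalg.norm(B_avg, axis=1, keepdims=True)   # parallel unit vector
    phi = np.cross(p, R)
    phi /= np.linalg.norm(phi, axis=1, keepdims=True)          # azimuthal (eastward)
    r = np.cross(phi, p)                                       # meridional (radial)
    dB = B - B_avg                                             # average field removed
    b_par = np.einsum("ij,ij->i", dB, p)                       # compressional
    b_tor = np.einsum("ij,ij->i", dB, phi)                     # toroidal
    b_pol = np.einsum("ij,ij->i", dB, r)                       # poloidal
    return b_pol, b_tor, b_par
```

Since B_avg is parallel to p by construction, projecting the residual dB (rather than B) onto φ and r leaves the transverse components unchanged while the running-average subtraction acts as the high-pass filter noted above.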
Observations

The solar activity at the end of October 2003 initiated a series of intense magnetospheric disturbances during two successive deep reductions of the Dst index (see Fig. 1), as two consecutive CMEs impacted the Earth's magnetosphere. In Fig. 1, the three days that include the storm onset, the first storm peak (−353 nT) with the associated short recovery phase, as well as the second storm peak (−383 nT) along with the regular recovery phase, i.e., 29, 30 and 31 October 2003, are labeled in red. The "Halloween" storm, 29-31 October 2003, has received considerable interest and analysis from both ground and space instrumentation, as it offers a great opportunity of understanding the response of the magnetosphere-ionosphere system to strong and continuous driving. During the Halloween storm, Geotail enters the magnetosphere on the dusk side and orbits around the nightside of the magnetosphere near the equatorial plane, while Cluster's orbit has its apogee far off the equatorial plane on the dusk Northern Hemisphere of the magnetosphere.

(Figure caption: From top to bottom are shown the time series of the CHAMP total magnetic field, calculated from the 1 Hz FGM data after applying a 16 mHz high-pass filter, its corresponding wavelet power spectrum, as well as the temporal variation of the CHAMP electron density data along with its magnetic latitudinal dependence, indicating that the satellite was moving from the north to the south pole. The corresponding MLT values are also given at the bottom of the graph. A prominent Pc3 ULF wave is observed starting at around 14:44 UT and lasting ∼15 min. The strong ionospheric currents' signatures near the poles covering lower frequencies can also be seen in this plot.)

In Fig. 2 the locations of the Cluster-1 and Geotail satellites in the Geocentric Solar Ecliptic (GSE) coordinate system are shown, in the xy- and xz-planes, from 00:00 UT on 30 October 2003 to 23:59 UT on 31 October 2003. Herein, we study the ULF wave activity that accompanied the Halloween storm using observations obtained by a topside ionosphere mission and two magnetospheric missions, whereas previous ULF wave observations made by a LEO mission have only been compared to ground measurements (e.g., Heilig et al., 2007). In this section we start our analysis from the region of the topside ionosphere. We first present three clear and strong signatures of Pc3 ULF wave activity (frequency 15-100 mHz) found by examining the tracks of the CHAMP satellite. The CHAMP track represents the satellite's half-orbit as it moves from one pole to another and lasts approximately 45 min. We then expand these three time intervals to a two-hour duration for purposes of comparison between CHAMP, Cluster and Geotail Pc3 observations, but also to be able to search for Pc4-5 wave signatures (frequency 1-10 mHz) in the Cluster and Geotail measurements associated with the Halloween storm. Due to the fast motion through field lines in a LEO orbit we are able to reliably detect Pc3 (but not Pc4-5) waves from CHAMP.

The CHAMP satellite was launched in July 2000 into an almost circular, near-polar orbit with a period of 94 min and an initial altitude of 454 km (Reigber et al., 2005). The intense solar activity of solar cycle 23 had degraded the orbit altitude to ∼400 km at the time of the Halloween storm, in October 2003. CHAMP re-entry occurred in 2010. The low Earth orbit of CHAMP allows a global view of the topside ionosphere within the relatively short time of a full orbit.
Cluster, which consists of four identical spacecraft flying in a tetrahedral configuration (Escoubet et al., 1997), was launched in 2000 with the aim of investigating the Earth's magnetic environment at multiple scales. The four Cluster spacecraft, therefore, represent a valuable tool for the analysis of magnetospheric ULF pulsations, as shown by a plethora of recent studies (e.g., Eriksson et al., 2005; Schäfer et al., 2007; Clausen et al., 2009). For this purpose, the Cluster spacecraft were originally placed in a 4 × 19.6 R_E elliptical polar orbit with a period of 57 h. For our study, we used magnetic field measurements from the FGM instrument (Balogh et al., 1997) with a time resolution corresponding to one spacecraft spin period, namely 4 s, with a Nyquist frequency of 125 mHz. During the Halloween storm the Cluster probes were flying in close configuration, and no significant differences are seen in the ULF wave occurrence between the different probes. Therefore, we only present here the observations from the Cluster-1 satellite.

(Figure caption: From top to bottom are shown the time series of the CHAMP total magnetic field, calculated from the 1 Hz FGM data after applying a 16 mHz high-pass filter, its corresponding wavelet power spectrum, as well as the temporal variation of the CHAMP electron density data along with its magnetic latitudinal dependence, indicating that the satellite was moving from the north to the south pole. The corresponding MLT values are also given at the bottom of the graph. A prominent Pc3 ULF wave is observed starting at around 22:25 UT and lasting ∼20 min. The strong ionospheric currents' signatures near the poles covering many frequencies can also be seen in this plot.)

The Geotail satellite was launched in July 1992 with the aim of studying the structure and dynamics of the Earth's magnetotail over a wide range of distances, extending from the near-Earth region (8 R_E from the Earth) to the distant tail (∼200 R_E). Since February 1995, when it fulfilled its original objective, Geotail has been placed in an elliptical 9 by 30 R_E orbit from where it is providing data on different aspects of the solar wind interaction with the magnetosphere. For this study, we used magnetic field measurements collected by Geotail when the spacecraft traversed from the upstream region of the quasi-perpendicular shock, through the duskside magnetosheath to the nightside outer magnetosphere and the dawnside magnetosheath. The Geotail spacecraft carries fluxgate magnetometers along with a search coil magnetometer, providing magnetic field data in the frequency range below 50 Hz (Kokubun et al., 1994). The Geotail data used have a time resolution of 3 s, with a Nyquist frequency of 167 mHz.
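All three data sets enter the wavelet analysis after a high-pass filter at 16 mHz (described in the following section). A minimal sketch of such pre-processing is shown here; the text specifies only the 16 mHz cutoff, so the Butterworth design and its order are illustrative assumptions, while the quoted Nyquist frequencies follow directly from the sampling steps.

```python
# Zero-phase high-pass filtering of the magnetometer series before wavelet analysis.
import numpy as np
from scipy.signal import butter, filtfilt

def highpass(series, dt, f_cut=16e-3, order=4):
    """Apply a zero-phase Butterworth high-pass filter (cutoff f_cut in Hz) to a 1-D series."""
    nyq = 0.5 / dt
    b, a = butter(order, f_cut / nyq, btype="highpass")
    return filtfilt(b, a, series)

# Nyquist frequencies implied by the sampling steps quoted in the text:
for name, dt in [("CHAMP", 1.0), ("Cluster FGM", 4.0), ("Geotail", 3.0)]:
    print(name, "Nyquist =", 0.5 / dt, "Hz")       # 0.5, 0.125, ~0.167 Hz
```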
ULF wave activity in a LEO orbit Figure 3 presents CHAMP total magnetic field time series derived from the 1 Hz FGM measurements after applying a 16 mHz high-pass filter along with its corresponding wavelet power spectrum in the Pc3 frequency band (8-128 mHz).It was empirically found that a cutoff of 16 mHz for the highpass filter used at the preprocessing of the time series is able to reduce the amplitude of pulsations with frequencies lower than or equal to 10 mHz by approximately 90 %.Thus, by this choice, we are certain that all low-varying background activity will be eliminated, as well as any possible contribution by Pc5 ULF waves.Using a higher cutoff would eradicate all influence by Pc4 waves as well, but then Pc3 waves would also suffer its effects, so the selection of 16 mHz was made as a reasonable compromise, in order to leave Pc3 and higher frequency waves as unaffected as possible.The selection of the cutoff for the Pc5 case in Sects.3.2-3.4was based on similar criteria. Corresponding to the time interval from 07:35 to 08:21 UT on 30 October 2003, which according to the plot of Fig. 1, refers to the middle part of the short recovery phase of the magnetic superstorm, characterized by the first minimum of Dst index (i.e., −353 nT), it also includes electron density data derived from the 15 s Planar Langmuir Probe (PLP) measurements.The inclusion of the electron density recordings helps to identify time segments of the signal that contain signatures of post-sunset equatorial spread F (ESF) events (Stolle et al., 2006), and therefore, discriminate between Pc3 wave and plasma depletion occurrence.The corresponding values of the CHAMP magnetic latitude (shown in red) and magnetic LT (MLT) are also provided in the graph.It is worth noting that Pc3 waves are observed over the auroral zones and the dayside equator, while wave power decreases significantly at mid-latitudes, a profile that we attribute to strong ionospheric currents (see also Fig. 6).Furthermore, a dramatic north to south asymmetry in the Pc3 waves was observed over the auroral zones.On the other hand, because the equatorial electrojet disappears on the nightside, Pc3 wave power has significantly decreased over the nightside equator.Wave activity that is sporadically observed in the nightside is likely due to phenomena like currents enhanced during substorms or the propagation of Pi2 waves from the magnetotail. In Fig. 1, the three time intervals in which Pc3 ULF wave activity was initially identified in CHAMP observations on the morning of 30 October 2003 and in the afternoon and evening of 31 October 2003, which are selected and further expanded into two-hour intervals for analysis using Cluster and Geotail measurements, are marked in red.Moreover, the locations of Cluster-1 and Geotail satellites during these three time intervals are highlighted in Fig. 2. Event 1: 07:00-09:00 UT on 30 October 2003 Centered around the first Pc3 wave event identified in the CHAMP 1 Hz FGM measurements, Fig. 6 presents the total magnetic field time series along with its corresponding wavelet power spectrum in the Pc3 band (8-128 mHz) from 07:00 to 09:00 UT on 30 October 2003.Electron density recordings collected by the PLP instrument are also shown during this interval in the recovery phase of the first magnetic superstorm studied.LEO satellites such as CHAMP traversing the topside ionosphere are usually considered to be able to adequately observe waves only in higher ULF frequencies (see also Sect. 
3 above). On the other hand, the Cluster-1 poloidal, toroidal and compressional magnetic field time series, derived from the 4 s FGM measurements after applying a 16 mHz and a 2 mHz high-pass filter, are shown in Figs. 7 and 8 along with their corresponding wavelet power spectra in the Pc3 and Pc5 (1-32 mHz) frequency bands. The ULF oscillations, clearly visible on the quiet background, are similar on all four satellites of the Cluster mission and therefore provide an indication of the scale-size of the waves, which is related to the satellites' separation distances. The MFA components of the magnetic field time series are, however, shown only for the Cluster-1 satellite.

In Figs. 9 and 10, the poloidal, toroidal and compressional magnetic field time series, derived from the 3 s FGM measurements of Geotail after applying a 16 mHz and a 2 mHz high-pass filter, are shown along with their corresponding wavelet power spectra in the Pc3 and Pc5 frequency bands. It is worth noting that Sakurai and Tonegawa (2005) have identified large-amplitude Pc3 waves in the magnetic and electric field measurements collected by Geotail on the morning of 30 October 2003; specifically, they found Pc3 waves at 07:20-07:40 UT and 07:40-07:55 UT. During the recovery phase of the first peak of the magnetic superstorm, the Geotail satellite traversed through the duskside magnetosheath towards the outer magnetosphere (Fig. 2). In the heart of the magnetosheath there are no noticeable oscillations in the three components of the magnetic field; oscillations appear only as the satellite approaches the magnetopause. In light of this, the Pc5 observations made both by the Geotail satellite as well as by the Cluster-1 satellite well within the magnetosphere are attributed to shock waves compressing the Earth's magnetosphere (the pressure pulse excitation mechanism of ULF waves discussed in the Introduction). Associated with the interplanetary coronal mass ejection (ICME) that was observed on 29 October 2003, the shock speed estimated from the travel time from the Sun to the Earth exceeded 2000 km s⁻¹.

Visible to the naked eye is the changing distance in time between the peaks of the waves, indicating that the frequency changes with the radial distance from the Earth. We will return to this in the subsequent sections on ULF waves observed while the second, stronger peak (−383 nT) of the magnetic superstorm was in progress.

Event 2: 14:00-16:00 UT on 31 October 2003

With the simultaneous observations from the CHAMP, Cluster-1 and Geotail satellites, we studied long-lasting Pc3 and Pc5 waves in the recovery phase of the second peak of the magnetic superstorm, when the Dst index had a value below −80 nT. From 14:00 to 16:00 UT on 31 October 2003, the Cluster-1 satellite flew in the dawnside Northern Hemisphere of the magnetosphere, while the Geotail satellite was in the duskside of the magnetosheath, providing us with a unique opportunity to also study the ULF waves' global distribution (Fig. 2).

Figures 11 and 12 show the wavelet power spectra of FGM measurements from the CHAMP, Cluster-1 and Geotail satellites covering the Pc3 and Pc5 frequency bands, respectively. Specifically, the upper panel corresponds to the continuous wavelet power spectra of the CHAMP total magnetic field measurements, followed by the wavelet analysis results of each MFA magnetic field component observed by the Cluster-1 and Geotail satellites. From top to bottom, the radial magnetic field Br, the azimuthal magnetic field Bφ and the parallel magnetic field B∥ are presented; the observations of the three satellites are, however, not consistent.
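The poloidal, toroidal and compressional series referred to above come from a mean-field-aligned (MFA) decomposition of the spacecraft magnetic field. The sketch below shows one common implementation; the background-field averaging window and the sign conventions are assumptions for illustration and are not specified in the paper.

```python
# Hedged sketch of a mean-field-aligned (MFA) decomposition; the convention used
# here (running-mean background field, right-handed radial/azimuthal/parallel triad)
# is an assumption, not taken from the paper.
import numpy as np

def mfa_components(b_gse: np.ndarray, r_gse: np.ndarray, window: int = 450):
    """
    b_gse : (N, 3) magnetic field samples in GSE coordinates [nT]
    r_gse : (N, 3) spacecraft position vectors in GSE [km]
    window: running-mean length in samples used as the background field
            (450 samples of 4 s Cluster data ~ 30 min).
    Returns (b_radial, b_azimuthal, b_parallel) perturbation components.
    """
    kernel = np.ones(window) / window
    b0 = np.column_stack([np.convolve(b_gse[:, i], kernel, mode="same") for i in range(3)])

    e_par = b0 / np.linalg.norm(b0, axis=1, keepdims=True)        # compressional direction
    r_hat = r_gse / np.linalg.norm(r_gse, axis=1, keepdims=True)
    e_phi = np.cross(e_par, r_hat)                                 # azimuthal (toroidal)
    e_phi /= np.linalg.norm(e_phi, axis=1, keepdims=True)
    e_rad = np.cross(e_phi, e_par)                                 # radial (poloidal)

    db = b_gse - b0                                                # field perturbation
    return (np.sum(db * e_rad, axis=1),
            np.sum(db * e_phi, axis=1),
            np.sum(db * e_par, axis=1))
```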
In the period from 14:24 UT to 15:24 UT, among the MFA components of the Cluster-1 magnetic field time series, the radial component had the largest amplitude, the azimuthal component was smaller, and the parallel component was the weakest. As we can see in Fig. 11, from the radial to the parallel component, the wavelet power spectrum density in the Pc3 frequency band decreased from 6 nT² Hz⁻¹ to approximately 4 nT² Hz⁻¹. The frequency range covered by the wavelet power enhancement in the three MFA magnetic field components was 16-60 mHz, with the spectrum peak frequency near 34 mHz.

The amplitude of the wavelet power enhancement observed by the Cluster-1 and Geotail satellites in the Pc5 frequency band also varied; the MFA magnetic field component with the largest amplitude observed by both Cluster-1 and Geotail was the parallel one. From the radial to the parallel component, the wavelet power spectrum density increased from approximately 4.2 nT² Hz⁻¹ to 6 nT² Hz⁻¹. The wavelet power enhancement in the three MFA magnetic field components seen in Fig. 12 was between 2 and 4 mHz, with the spectrum peak frequency near 2.8 mHz.

From these observations, we can conclude that Pc3 and Pc5 waves can occur simultaneously and be observed over a large portion of the magnetosphere, from the outer limits to the topside ionosphere and from morning to evening, with a similar spectral frequency. Nonetheless, in the time interval before 14:36 UT neither Pc3 nor Pc5 waves are observed by the Geotail satellite. In the magnetosheath, where the Geotail satellite was located, ULF waves are common and are an important source for Pc3-5 waves observed in the magnetosphere. Although ULF waves have an important role to play in the solar wind-magnetosphere energy coupling, inhomogeneity due to the stress of the increased solar wind dynamic pressure exerted on the magnetopause seems to have a crucial effect on the generation or propagation of ULF waves (e.g., Blanco-Cano et al., 2006, and references therein).

Event 3: 21:00-23:00 UT on 31 October 2003

In Figs. 13 and 14, the wavelet power spectra of the FGM measurements from the CHAMP, Cluster-1 and Geotail satellites are separated into the Pc3 and Pc5 frequency bands. Similarly to Sect. 3.3, the upper panel corresponds to the continuous wavelet power spectra of the CHAMP total magnetic field measurements, followed by the wavelet analysis results of each MFA magnetic field component observed by the Cluster-1 and Geotail satellites. From top to bottom, the radial magnetic field Br, the azimuthal magnetic field Bφ and the parallel magnetic field B∥ are presented; the observations of the three satellites are, however, not consistent. Pc3 waves are observed throughout the trajectory of the CHAMP, Cluster-1 and Geotail satellites, but are more pronounced along the LEO of the CHAMP satellite between 22:00 and 22:30 UT and in the parallel MFA component of the magnetic field as measured by Geotail. The peak wavelet power spectrum density reaches a value of approximately 6 nT² Hz⁻¹. The three satellites' observations are different in terms of amplitude as well as frequency. The Pc3 waves observed by Cluster-1 between 21:00 and 21:30 UT are mainly in the lower-frequency part of the spectra, while the high-frequency waves observed by Geotail are not visible. The Pc3 wave frequency range observed is between 32 and 64 mHz.
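The wavelet power spectral densities quoted in this and the previous sections come from high-pass-filtered series passed through a continuous wavelet transform. The sketch below illustrates one way to reproduce that pipeline; the filter order, the Morlet parameter w0 and the normalisation are assumptions, so the absolute values will not match the nT² Hz⁻¹ figures quoted in the text without the appropriate spectral normalisation.

```python
# Hedged sketch of the preprocessing (16 mHz or 2 mHz Butterworth high-pass) and a
# Morlet continuous wavelet transform; parameters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def highpass(x, fs, fc_mhz=16.0, order=4):
    """High-pass filter x (sampled at fs Hz) with a cutoff given in mHz."""
    b, a = butter(order, (fc_mhz / 1000.0) / (fs / 2.0), btype="highpass")
    return filtfilt(b, a, x)

def morlet_power(x, fs, freqs_mhz, w0=6.0):
    """Wavelet power |W(f,t)|^2 on the requested frequencies (mHz); arbitrary units."""
    n = len(x)
    power = np.empty((len(freqs_mhz), n))
    for i, f_mhz in enumerate(freqs_mhz):
        f = f_mhz / 1000.0                       # Hz
        s = w0 / (2 * np.pi * f)                 # wavelet scale for this frequency
        tw = np.arange(-4 * s, 4 * s, 1 / fs)
        wavelet = (np.pi ** -0.25) * np.exp(1j * w0 * tw / s) * np.exp(-0.5 * (tw / s) ** 2)
        wavelet /= np.sqrt(s * fs)               # simple energy normalisation (assumed)
        power[i] = np.abs(np.convolve(x, wavelet, mode="same")) ** 2
    return power

# Example for Cluster 4 s data and the Pc3 band 8-125 mHz:
# fs = 0.25
# x_hp = highpass(b_total, fs, fc_mhz=16.0)
# p = morlet_power(x_hp, fs, freqs_mhz=np.linspace(8, 125, 40))
```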
The frequency of ULF waves is not affected only by the geometry of the magnetic field and of boundaries such as the magnetopause and the plasmapause, but also by the generation mechanism. It varies with L-shell value and local time. During the time interval between 21:00 and 23:00 UT, the Geotail satellite traversed through the dawnside magnetosheath towards the interplanetary medium, while the Cluster-1 satellite was flying within the magnetosphere, crossing L shells 11.7 to 5.3. Based on observations from the Cluster-1 satellite, along with the GOES-10 and 12 as well as the Polar satellites, Wang et al. (2008) have shown that the ULF oscillation period varied among the spacecraft, with Cluster-1 observing the shortest period and Polar the longest. The period of the toroidal and poloidal modes ranged from 128 to 512 s, with the spectrum peak period near 256 s, indicative of Pc5 wave activity.

As we can see in Fig. 14, from the radial to the parallel component, the wavelet power spectrum density in the Pc5 frequency band decreased remarkably from 6 nT² Hz⁻¹ to approximately 2 nT² Hz⁻¹ in the MFA components of the Geotail magnetic field time series. In the toroidal component of the magnetic field, the Pc5 oscillations had the largest amplitude, while the poloidal component was weaker, and the compressional component could almost not be seen compared to the above two modes.

Discussion and conclusions

We have analyzed multi-point observations from the CHAMP, Cluster and Geotail missions during the Halloween 2003 superstorm in order to investigate ULF wave activity present during the evolution of the storm with newly developed tools based on continuous wavelet transforms. As demonstrated in the previous sections, these wavelet-based tools are capable of examining magnetic field measurements from:
- a topside ionosphere or a magnetospheric mission;
- a single-satellite or a multi-satellite mission;
and, consequently, of identifying ULF waves in the
- Pc3 (topside ionosphere and magnetospheric missions) or
- Pc4-5 (magnetospheric missions)
frequency range.

We have started our analysis by examining CHAMP data for signatures related to ULF waves occurring during different phases of the magnetic superstorm. Due to the CHAMP satellite's fast motion through field lines in a LEO orbit, we have been able to reliably detect Pc3 (but not Pc4-5) waves along its orbit. Subsequently, we have selected three prominent Pc3 wave events as seen in CHAMP magnetic field measurements in the morning of 30 October 2003 and in the afternoon and evening of 31 October 2003, which present clear evidence of Pc3 wave activity observed simultaneously by satellites from the topside ionosphere to the outer magnetosphere and the magnetosheath. Moreover, based on Cluster-1 and Geotail data, we were able to draw inferences on Pc5 wave activity associated with the specific superstorm.

During the Halloween 2003 superstorm, strong compression of the magnetotail, as evidenced by enhanced tail field strengths and increased plasma density, was observed by Geotail (Miyashita et al., 2005). Sakurai and Tonegawa (2005) identified large-amplitude Pc3 waves in the magnetic and electric field measurements collected by Geotail at 01:00-01:30, 07:20-07:40, 07:40-07:55 and 08:10-08:40 UT on 30 October 2003; our study provides evidence for Pc3 wave activity detected by the Geotail satellite between 07:00 and 09:00 UT on 30 October 2003, covering the second, third and fourth of these intervals. Nonetheless, the analysis presented in the previous sections encompasses a total of three consecutive, though distinct, time intervals of enhanced ULF wave activity with simultaneous observations spanning from the magnetotail to the innermost magnetosphere. On the other hand, Engebretson et al.
(2007) have reported unusual wave activity in the Pc1-2 frequency range observed by the Cluster spacecraft in association with the Halloween 2003 storm. At the onset of the superstorm on 29 October 2003, intense broadband activity in the frequency range between ∼0.1 and 0.6 Hz appeared simultaneously at all four spacecraft located on both sides of the magnetic equator at perigee (near 14:00 UT and 08:45 MLT). It should be noted that wave power was especially strong and more structured in frequency in the compressional component, while a minimum was observed at 0.38 Hz, corresponding to the oxygen ion cyclotron frequency.

Apart from Wang et al. (2008), who identified Pc5 waves on 31 October 2003 in the geosynchronous GOES satellites' measurements, Pilipenko et al. (2010) have found that during periods of ground Pc5 activity enhancement on 29 and 31 October 2003 (05:00-24:00 and 00:00-19:00 UT, respectively), the GOES 10 satellite located in the morning sector of the magnetosphere detected Pc5 pulsations, most evident in the toroidal component. Rae et al. (2005) presented an interval of extremely long-lasting narrowband Pc5 pulsations during the recovery phase of a large geomagnetic storm on 25 November 2001. These pulsations occurred continuously for many hours and were observed throughout the magnetosphere and in the dusk-sector ionosphere. The fortuitous spacecraft conjunction of the Cluster, Polar, and geosynchronous satellites in the dusk sector during a 3 h subset of this interval has allowed extensive analysis of the global nature of the pulsations and the tracing of their energy transfer from the solar wind to the ground. Herein, we demonstrate the applicability of our tools to the analysis of similar spacecraft conjunctions.

The consistency between the Pc3 and Pc5 wave observations confirms the applicability and the potential of our wavelet-based algorithms for the analysis of multi-instrument, multi-satellite observations and for the detection, identification and classification of ULF waves. In the past decade, a critical mass of high-quality scientific data on the electric and magnetic fields in the Earth's magnetosphere has been progressively collected. This data pool will be further enriched by the measurements of the upcoming ESA/Swarm mission, a constellation of three satellites in three different polar orbits between 400 and 550 km altitude, which will be launched in 2013. This data pool provides unique opportunities to study ULF pulsations in the magnetosphere (e.g., Constantinescu et al., 2007; Usanova et al., 2008; Sarris et al., 2009; Picket et al., 2010). New analysis tools that can cope with the increased volume of measurements by numerous spacecraft located at different regions of the magnetosphere, similar to the ones employed in the present study, will effectively enhance the scientific exploitation of the continuously accumulated data.

Fig. 1. The time series of the Dst index from 26 October 2003 to 2 November 2003. The three days that include the storm onset, the first storm peak (−353 nT) with the associated short recovery phase, as well as the second storm peak (−383 nT) along with the regular recovery phase, i.e., 29, 30 and 31 October 2003, are labeled in red. Moreover, the three time intervals in which Pc3 ULF wave activity was initially identified in CHAMP observations and which were selected and further expanded into two-hour intervals for analysis using Cluster and Geotail measurements are marked in red.
Fig. 2. The Cluster-1 and Geotail locations in GSE coordinates on the xy-plane (upper part) and on the xz-plane (lower part) for 30 and 31 October 2003. The three events discussed in this paper are also marked. (These plots are modified versions of the graphs derived by the Tool for Interactive Plotting, Sonification, and 3-D Orbit Display - TIPSOD provided by NASA.)

Fig. 3. The CHAMP track from 07:35 to 08:21 UT on 30 October 2003. From top to bottom are shown the time series of the CHAMP total magnetic field, calculated from the 1 Hz FGM data after applying a 16 mHz high-pass filter, its corresponding wavelet power spectrum, as well as the temporal variation of the CHAMP electron density data along with its magnetic latitudinal dependence, indicating that the satellite was moving from the North to the South Pole. The corresponding MLT values are also given at the bottom of the graph. A prominent Pc3 ULF wave is observed starting at around 07:52 UT and lasting ∼15 min. The strong ionospheric currents' signatures near the poles that cover lower frequencies can also be seen in this plot.

Fig. 4. The CHAMP track from 14:26 to 15:15 UT on 31 October 2003. From top to bottom are shown the time series of the CHAMP total magnetic field, calculated from the 1 Hz FGM data after applying a 16 mHz high-pass filter, its corresponding wavelet power spectrum, as well as the temporal variation of the CHAMP electron density data along with its magnetic latitudinal dependence, indicating that the satellite was moving from the north to the south pole. The corresponding MLT values are also given at the bottom of the graph. A prominent Pc3 ULF wave is observed starting at around 14:44 UT and lasting ∼15 min. The strong ionospheric currents' signatures near the poles covering lower frequencies can also be seen in this plot.

Fig. 5. The CHAMP track from 22:08 to 22:53 UT on 31 October 2003. From top to bottom are shown the time series of the CHAMP total magnetic field, calculated from the 1 Hz FGM data after applying a 16 mHz high-pass filter, its corresponding wavelet power spectrum, as well as the temporal variation of the CHAMP electron density data along with its magnetic latitudinal dependence, indicating that the satellite was moving from the north to the south pole. The corresponding MLT values are also given at the bottom of the graph. A prominent Pc3 ULF wave is observed starting at around 22:25 UT and lasting ∼20 min. The strong ionospheric currents' signatures near the poles covering many frequencies can also be seen in this plot.

Fig. 7. Event 1: 07:00-09:00 UT, 30 October 2003. The Cluster-1 Pc3 (8-128 mHz) activity. Left column: from top to bottom are shown the time series of the poloidal, toroidal and compressional components, respectively, of the magnetic field, calculated from the 4 s FGM data after applying a 16 mHz high-pass filter. Right column: from top to bottom are shown the corresponding wavelet power spectra.
Fig. 8. Event 1: 07:00-09:00 UT, 30 October 2003. The Cluster-1 Pc5 (1-32 mHz) activity. Left column: from top to bottom are shown the time series of the poloidal, toroidal and compressional components, respectively, of the magnetic field, calculated from the 4 s FGM data after applying a 2 mHz high-pass filter. Right column: from top to bottom are shown the corresponding wavelet power spectra.

ULF waves were observed throughout the recovery phase of the magnetic superstorm on 31 October 2003. In Figs. 13 and 14, we focus on the interval between 21:00 UT and 23:00 UT (Event 3).

There are past studies on the Halloween 2003 superstorm that are consistent with the results presented in this paper for the third time interval, i.e., from 21:00 to 23:00 UT on 31 October 2003. Zong et al. (2007) examined Cluster mission magnetic field data collected between 21:30-22:30 UT on 31 October 2003 and found evidence for Pc5 wave occurrence. The observed magnetic ULF pulsations were dominated by the toroidal mode, accompanied by a relatively weak poloidal mode. The ULF modulation terminated where higher-frequency fluctuations appeared, as the Cluster spacecraft entered the plasmasphere boundary layer (PBL), where the plasma ion density was abruptly elevated. In addition, Wang et al. (2008) identified Pc5 wave activity between 21:00-23:00 UT on 31 October 2003 in Cluster-1, GOES 10, GOES 12 and Polar magnetic field measurements. In comparison to the observations of Wang et al. (2008), we have found that Pc5 waves can be seen in all the MFA components of the Cluster-1 spacecraft during the same time interval.
DEVELOPMENT OF A PARALLEL GRIPPER WITH AN EXTENSION NAIL MECHANISM USING A METAL BELT Aiming to expand the range of applications for parallel grippers, we propose an extension nail mechanism that can be mounted on a parallel gripper. We also propose an extension nail mechanism comprising a stainless steel belt, two transport belts, a triangular nail, and a drive unit. The triangular nail is connected to one end of the stainless steel belt, and the drive unit is connected near the other end. We achieve smooth sliding of the nails underneath objects by arranging the transport belts on either side of the stainless steel belt. By elastically winding one end of the stainless steel belt and each of the transport belts, the nail mechanism can be miniaturized while achieving large expansion and contraction. We achieve stable grasping operations by using the extension nail mechanism of the parallel gripper in accordance with the flexibility of the object. INTRODUCTION In recent years, Japan's declining birthrate and aging population have made it difficult to secure sufficient numbers of workers at logistics sites. When automating logistics sites to mitigate this issue, robotic hands are indispensable for picking operations such as removing articles from belt conveyors. Various types of robotic hands have been developed , including two-finger [Osaki 2013, Levine 2018, Tanaka 2020, three-finger [Chen 2020, Townsend 2000, Yuan 2020], four-finger [Aukes 2014], and fivefinger hands [Li 2020, Pfanne 2020. Another robotic hand [Catalano 2014] was devised such that the joint mechanisms work together to reduce the number of motors. Other robotic hands [Fujita 2020, Hasegawa 2019, Morrison 2018 designed for use at distribution sites have also been proposed. In particular, parallel grippers, in which a finger mechanism moves linearly to grasp articles, have simple control mechanisms and high reliability, and thus are widely used at logistics sites. Articles commonly handled at logistics sites can be roughly classified as rigid objects such as cardboard boxes or flexible objects packed in a cushioning material. When a parallel gripper grasps an article, rigid bodies generally do not deform, whereas flexible objects do and thus there is a risk of damage. Therefore, in this study we investigated grippers equipped with a mechanical element equivalent to a nail for supporting the bottom surfaces of flexible objects. Various robotic hands with fingertips equipped with a mechanical nail-like element have been proposed. Specifically, there are configurations in which a force sensor between fingertips and nail members sense objects [Kõiva 2018, Murakami 2003] as well as configurations in which a nail member is mounted on silicon rubber for passive compliance adjustment [Morita 2000]. In other methods, a thin object on a flat surface is picked up with fingertips [Babin 2018, Yoshimi 2012]. However, the structure by which the short nail member is fixed to the fingertip cannot support the entire bottom surface of a flexible object. Therefore, grasping methods for supporting the entire bottom surface of an object have been proposed. In particular, one method slides a long plate underneath objects lifted by a robotic hand [Nakamoto 2010], and another slides a long plate underneath objects through extrusion by an annular belt [Tadakuma 2013]. In methods involving the sliding of long plates, the support range of the bottom surface of the object depends on the length of the plate. 
Accordingly, when a long plate supporting the object's entire bottom surface is mounted on a parallel gripper, the parallel gripper becomes larger. Furthermore, if the mounting surface in contact with the annular belt has a large friction coefficient, the operation of the single annular belt may be limited. This is because the frictional force from the mounting surface makes it difficult to smoothly push out a singular annular belt. Therefore, when the friction coefficient of the mounting surface is large, it is considered effective to use two annular belts, with one stacked above the other. We therefore investigated a method for mounting a small retractable nail mechanism on a parallel gripper and supporting an object's entire bottom surface with the extended nail. To expand the range of applications for parallel grippers, we propose an extension nail mechanism that can be mounted on the fingertip of one side of the parallel gripper ( Fig. 1). In the proposed mechanism, the nail part smoothly slides underneath the object, and both miniaturization and a large expansion/contraction span of the nail part are achieved. The extended nail portions are magnetically connected to opposing fingertips in order to improve load resistance. Mechanism verifications showed that using the extension nail mechanism of the parallel gripper in accordance with the flexibility of the object achieves stable grasping operations. This paper reports the design policy, specific mechanisms and system configurations, and the results of basic experiments using the proposed parallel gripper with an extension nail mechanism. DEVELOPMENT CONCEPT The development concept of the parallel gripper in this paper is to achieve stable grasping operations by using an extension nail mechanism mounted on one fingertip of the parallel gripper in accordance with the flexibility of the object. The development concept of the extension nail mechanism is to develop a mechanism configuration that achieves both miniaturization and Finger mechanism Object Base Nail mechanism large expansion/contraction spans with the nail part smoothly sliding underneath objects. The performance and design requirements were examined in consideration of these development concepts and the installation environment, namely, a logistics site. Handled objects We examined and categorized the articles to be handled, assuming a distribution warehouse as the robot application environment. Based on this classification, we developed a parallel gripper that can handle two types of commonly distributed objects: Items with these shape features were placed individually on a flat surface. Design requirements To investigate the performance required for a parallel gripper to grasp the assumed objects, we established the following design requirements. -As a grasping strategy, the parallel gripper should approach the object from above to grip and lift the object. -Based on the size of the target cardboard box (D 140 mm), the maximum opening width of the parallel gripper was set to at least 140 mm. The dimensions of the parallel gripper were approximately H 300 × W 300 × D 300 mm. -The maximum payload of the robot arm (TV800; Shibaura Machine Co., Ltd.), on which the parallel gripper was attached, is 5.0 kg. Therefore, the total mass of the parallel gripper should not exceed 4.9 kg. -We used current-controllable DC motors as actuators. One DC motor opens and closes the parallel gripper, and another extends and retracts the nail mechanism. 
The finger part with the extension nail mechanism is modularized so that the entire finger part can be quickly replaced in the event of failure. Furthermore, multiple modularized fingers are arranged on the parallel gripper in accordance with the size of the object to be handled. -A permanent magnet is attached to the opposing fingertip so that the extension nail mechanism can be magnetically connected to the opposing fingertip without electric power. An iron member is used as part of the nail portion. -The assumed mass W of the object to be handled is 0.1 kg. For the nail to slide underneath the object, it is necessary to lift the object with a force of at least 1 N, a value obtained by multiplying the assumed mass W by gravitational acceleration. The pressing force of the nail portion must therefore be at least 1 N. Examination of extension nail mechanism Figures 2 and 3 show schematic diagrams of the proposed extension nail mechanism and parallel gripper, respectively. We investigated configurations of the extension nail mechanism allowing the nail portion to smoothly slide underneath the object to transfer it onto the nail. The proposed extension nail mechanism comprises a stainless steel belt, two transport belts, a triangular nail, a belt drive, and winding units for each belt. Stainless steel belts were adopted for their thinness and strength. A triangular nail is connected to one end of the stainless steel belt, with the other end wound around the beltwinding unit. The first transport belt is situated just above the upper surface of the stainless steel belt, one end of which is folded back by the first nail roller for separation from the upper surface of the stainless steel belt and fixed to the support member. The other end is wound around the first transportbelt-winding unit. The second transport belt is placed just below the lower surface of the stainless steel belt and is folded back by the second nail roller so that one end is separated from the lower surface of the stainless steel belt. The other end is wound around the second transport-belt-winding unit. In each belt-winding unit, torque always acts in the direction the belt is wound by an elastic member such as a spring. Winding and arranging each belt improves its storability. The first direction-change roller changes the path of the first transport belt in an arbitrary direction, whereas the second changes the path of the second transport belt. The gap between the direction-change part changes the route of the stainless steel belt in an arbitrary direction. The belt-driving unit sandwiches the stainless steel belt between the drive roller and the passive roller and sends the belt out by rotating the drive roller. The triangular nail is moved forward or backward by operation of the belt-driving unit. At this time, the first transport belt winds around the first nail roller and the second transport belt winds around the second nail roller, moving in conjunction with the stainless steel belt. As the nail extends, it slides underneath the object's bottom surface and the mounting surface, and the object is transferred onto the first transport belt as the nail advances. When the target object is transferred to the first transport belt, the surface of the first transport belt, which is in contact with the bottom surface of the target object, comes out bit by bit so as to sink into the bottom surface of the target object. 
The first transport belt thus smoothly slides underneath the object, reducing the likelihood of damaging the object, even in the case of flexible objects. Similarly, the surface of the second transport belt, which is in contact with the mounting surface, also comes out bit by bit so as to gradually make contact with the mounting surfaces. The second transport belt thus moves smoothly on the mounting surface. Through the operation of the first and second transport belts, the nail portion executes a smooth reciprocating motion. OVERVIEW OF DEVELOPED PARALLEL GRIPPER This section describes the structure of the parallel gripper developed based on the concepts presented in Section 2. open/close drive unit, a finger unit with an extension nail mechanism, and a finger unit with a magnetic fingertip. To grasp a long flexible object, two fingers with a modularized extension nail mechanism were attached to the movable base. To make opposable fingers, two fingers with magnetic fingertips were attached to the base and connected with an acrylic plate. Overall configuration Because the side face of the object is supported by the acrylic plate, the nails easily slide underneath the object. The open/close drive unit fixed to the base unit is connected to a small DC motor (4.5 W; reduction ratio 29:1) and a trapezoidal screw (lead 1 mm) by coupling. The movable base moves linearly with the rotation of the trapezoidal screw, thereby adjusting the distance between the opposing fingers. Three DC motors are used, one for the open/close drive unit and one for each extension nail mechanism. As Fig. 5 shows, the extended nail mechanism magnetically connects to the opposing fingertip. A neodymium magnet (force 49 N; size φ20 × 4 mm) is attached to the opposing fingertip. Figure 6 shows a schematic of the drive control system used in the experiment. In this system, voltage corresponding to the target speed is output to the motor driver, and the DC motor is driven by passing a current through it. Control is performed by detecting the speed, inputting it to the counter board, and feeding it back. A displacement sensor is used to detect when the elongated nail reaches the tip of the opposing finger. Structure of the extension nail mechanism Figures 7-9 show images of the developed extension nail mechanism. The finger with the extension nail mechanism measures H 220 × D 115× W 110 mm with the nail contracted and has a total mass of about 1.1 kg. The stroke of the nail mechanism is 180 mm. For expansion and contraction, the winding unit of each belt combines a wire pull-out constant-load spring (1.96 N) and a passive rotating part. Three pull-out constant-load springs wind up the stainless steel belt, the first transport belt, and the second transport belt. The stainless steel belt, which is sandwiched between the drive roller and the passive roller, is sent out by rotation of the drive roller, thereby extending the nail portion. A small DC motor (4.5 W; reduction ratio 370:1) is connected to the drive roller via a timing belt. The first transport belt has a path that covers the nail surface such that the nail can smoothly slide underneath the object. The transport-belt material is high-strength silicone rubber (tear strength 32 N/mm). The inclination angle of the nail part is 30°. Inclination angle of the nail part We investigated the inclination angle of the nail that would allow it to smoothly slide underneath objects. 
Specifically, we considered the relation between the pressing force of the nail mechanism and the inclination angle of the nail. Figure 10 is a schematic diagram showing the nail sliding underneath the object. Here, F [N] is the pressing force of the nail mechanism, θ is the nail inclination angle, R [N] is the surface pressure from the object on the nail surface, f1 [N] is the frictional force between the nail's lower surface and the mounting surface, and f2 [N] is the frictional force between the nail's upper surface and the object. Assuming that F is balanced with R, f1, and f2, the following equation is established:

F = f1 + f2 cos θ + R sin θ. (1)

Furthermore, assuming that the friction coefficient between the nail's lower surface and the mounting surface is μ1 and that the normal force is N1 [N], the following expression holds for the friction force f1 [N]:

f1 = μ1 N1. (2)

Similarly, assuming that the friction coefficient between the nail's upper surface and the object is μ2 and that the normal force is N2 [N], the following equation holds for the friction force f2 [N]:

f2 = μ2 N2. (3)

From Eqs. (2) and (3), Eq. (1) becomes

F = μ1 N1 + μ2 N2 cos θ + R sin θ. (4)

Assuming that the friction coefficients μ1 and μ2 are extremely small due to the transport belts covering the nail mechanism, the following equation is established:

F ≈ R sin θ. (5)

Assuming the surface pressure R is a constant value independent of the nail inclination angle θ, the pressing force F of the nail mechanism decreases with smaller θ. In consideration of nail mechanism durability and Eq. (5), we set θ to 30°.

MECHANISM VERIFICATION

This section details the results of experiments using the developed parallel gripper, including the pressing force of the extension nail mechanism and the approaching action toward the bottom of a flexible object, the load resistance of the extended nail when magnetically connected to the opposing finger, and the grasping motion of the object. Previously, the position and orientation of the object were detected using an external sensor, but this time, to confirm the mechanism operation, information on the position and orientation of the object as well as the operation target values for each arm were given in advance, and the handling operations of the parallel gripper were performed based on this information.

Extension experiment of nail mechanism

As shown in Fig. 11, a weight was placed on a force gauge (DS2-500N; IMADA, Inc.) to immobilize it. Then, the force gauge was placed in contact with the tip of the nail mechanism, the nail mechanism was extended, and the pressing force was measured. The gauge showed that a maximum pressing force of 13 N was applied, confirming that the developed nail mechanism can generate the target force of 1 N or more. As shown in Fig. 12, we verified the approach of the nail mechanism toward the bottom of the flexible object. A bag filled with about 0.6 kg of rice was used as the flexible object. In the experiments, the nail mechanism performed extension operations from the contracted state, and we visually confirmed whether the nail mechanism could slide underneath the flexible object. These experiments confirmed that the developed nail mechanism smoothly slides underneath the flexible object due to the arrangement of the transport belts on both sides of the stainless steel belt. The nail mechanism took about 5 s to reach the maximum extended state from the contracted state.
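As a quick numerical reading of Eq. (4), the sketch below evaluates the required pressing force for a few inclination angles; the friction coefficients, normal forces and surface pressure used here are illustrative assumptions, not values measured in the experiments.

```python
# Illustrative evaluation of Eqs. (1)-(5) above; all numerical values are assumed.
import math

def pressing_force(theta_deg, R, mu1=0.05, mu2=0.05, N1=1.0, N2=1.0):
    """Horizontal pressing force F [N] needed to drive the nail under the object, Eq. (4)."""
    theta = math.radians(theta_deg)
    return mu1 * N1 + mu2 * N2 * math.cos(theta) + R * math.sin(theta)

R = 2.0  # assumed surface pressure from the object [N]
for theta in (10, 20, 30, 45):
    print(theta, round(pressing_force(theta, R), 3))
# With mu1, mu2 -> 0 the force tends to R*sin(theta) (Eq. 5), so F decreases as the
# inclination angle becomes smaller; theta = 30 deg is then a compromise between a
# low pressing force and nail durability.
```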
We also confirmed that a winding structure combining a constant-load spring and passive rotating part during pawl retraction smoothly wound each transport belt. However, for the nail mechanism to smoothly slide underneath the bottom surface of the object, the bottom surface of the object needs to be somewhat round. Figure 11. Measurement of the nail mechanism pressing force. Figure 12. Evaluation of the nail mechanism sliding underneath a flexible object. Evaluation of the load resistance of the extended nail mechanism As shown in Fig. 13, we evaluated load resistance of the transport belt when the elongated nail mechanism was magnetically coupled to the opposing fingertip. The nail mechanism was in the maximum extended state. In experiments with the parallel gripper lifted by the robotic arm, a weight (about 1.8 kg) was placed on the center of the transport belt of the elongated nail mechanism, and we verified whether magnetic coupling of the nail mechanism could be maintained. Because the maximum payload of the robotic arm supporting the parallel gripper is 5 kg, we set the weight to 1.8 kg in consideration of the weight of the parallel gripper (4 kg). These experiments confirmed that even when a 1.8 kg weight was placed at the center of the transport belt, the magnetic fastening of the nail mechanism was not released, indicating that the load capacity was sufficient. Object Nail part Ground Object grasping experiment We performed a basic grasping motion experiment using the developed parallel gripper combined with a robotic arm. As described in Section 2.1, the rigid object was a cardboard box (H 90 × W 200 × D 140 mm; about 0.1 kg), and the flexible object was a long object (about W 70 × D 230 mm; about 0.1 kg) packed in cushioning material. In the experimental procedure, we first moved the parallel gripper to the position for grasping the target object by the robotic arm. Next, we adjusted the gap between the opposing fingers by operation of the open/close drive unit. Finally, the robotic arm picked up the target object. Figure 14 shows how the rigid body was picked up. We confirmed that the nail mechanism can be more stably lifted by positioning it underneath the cardboard box during grasping. In Figure 15 shows how the flexible object was picked up. During grasping, we confirmed that the extended nail mechanism slide underneath the flexible object, and that the extended nail mechanism magnetically connected to the opposing finger, allowing the flexible object to be lifted and transported. In Fig. 15, the transition time from the open state of the parallel gripper to lifting the object was about 48 s. It took this long because the extension operations for each nail mechanism were performed separately for verification of the mechanism. Limitations of the system We identified two main limitations of the system that need to be addressed in future research. One limitation is that the expansion and contraction operation times of the nail mechanism are long because it takes time to straighten the wound stainless steel belt. In addition, if the nail mechanism slides underneath the flexible object too quickly, the object might be damaged. Therefore, it is necessary to determine the appropriate movement speed of the nail mechanism in consideration of whether or not the object will be damaged. The other limitation is that the nail mechanism may not be able to slide underneath the flexible object, as shown in Fig. 16. 
This occurs when the tip of the nail mechanism cannot enter the gap between the bottom of the object and the surface upon which the object lies. If the approach of the nail mechanism fails, the nail mechanism must be retracted again and the extension motion must be restarted. In the case shown in Fig. 16, it took about 13 s for the nail mechanism to resume the extension movement. Therefore, it is necessary to make the nail mechanism thinner for cases in which the gap between the surface and the object is small. CONCLUSION Aiming to expand the range of applications for parallel grippers, we proposed an extension nail mechanism that can be mounted on one finger of a parallel gripper and described the verification of its mechanisms. In the proposed extension nail mechanism, the transport belts were arranged on either side of the extending nail part, allowing it to smoothly slide underneath the target object. To satisfy the design specifications for the extension nail mechanism (i.e., miniaturization and large expansion/contraction spans), each belt was elastically wound and arranged. With this configuration, we achieved a 180 mm expansion/contraction of the nail part. Because the elongated nail magnetically connected to the opposing fingertip, we confirmed that the extended nail could function even when supporting a weight of 1.8 kg. The developed parallel gripper grasps rigid cardboard boxes by adjusting the distance between the opposing finger mechanisms and grasps flexible objects by sliding an extension nail underneath them. A series of basic performance tests confirmed the utility of the developed parallel gripper. In future research, we will investigate autonomous grasping operations by combining the developed parallel gripper with external sensors and a robotic arm, thereby promoting application of automated systems for logistics sites.
Flower morphology , nectar features , and hummingbird visitation to Palicourea crocea ( Rubiaceae ) in the Upper Paraná River floodplain , Brazil We investigated flower morphology, nectar features, and hummingbird visitation to Palicourea crocea (Rubiaceae), a common ornithophilous shrub found in the riparian forest understory in the Upper Paraná River floodplain, Brazil. Flowers are distylous and the style-stamen dimorphism is accompanied by other intermorph dimorphisms in corolla length, anther length, and stigma lobe length and form. We did not observe strict reciprocity in the positioning of stigma and anthers between floral morphs. Flowering occurred during the rainy season, October to December. Nectar standing crop per flower was relatively constant throughout the day, which apparently resulted in hummingbirds visiting the plant throughout the day. Energetic content of the nectar in each flower (66.5J) and that required daily by hummingbird visitors (up to 30kJ) would oblige visits to hundreds of flowers each day, and thus movements between plants that should result in pollen flow. Three hummingbird species visited the flowers: the Gilded Sapphire ( Hylocharis chrysura), the Black-throated Mango (Anthracothorax nigricollis), and the Glittering-bellied Emerald ( Chlorostilbon aureoventris). The frequency of hummingbird visitation, nectar features, and the scarcity of other hummingbird-visited flowers in the study area, indicate that P. crocea is an important nectar resource for short-billed hummingbirds in the study site. INTRODUCTION Rubiaceae include species with floral features (morphological and energetic) related to a variety of pollinating agents including bees, butterflies, moths and hummingbirds (Passos and Sazima 1995, Stone 1996, Machado and Loiola 2000, Wesseling et al. 2000).Hummingbird pollination is frequent among Palicourea Aublet is a Neotropical genus (closely related to Psychotria, Taylor 1997) comprising about 200 species of shrubs or small trees that typically occur in the understory and subcanopy of moist to wet forest; most species exhibit floral traits consistent with hummingbird-pollination (Sobrevila et al. 1983, Murcia and Feinsinger 1996, Ree 1997, Taylor 1997, Contreras and Ornelas 1999).According to Taylor (1997) nearly all Palicourea species are distylous. Heterostyly is a genetic polymorphism in which plant populations are composed of two (distyly) or three (tristyly) floral morphs that differ reciprocally in the heights at which stigmas and anthers are positioned in the flowers (Barrett 1990).Other traits commonly associated with heterostyly are self and intra-morph incompatibility and an array of ancillary floral polymorphisms (Barrett 1990).Heterostyly has been reported in at least 28 angiosperm families (Barrett et al. 2000); Rubiaceae is one particularly important family in this respect, containing hundreds of heterostylous species (Barrett et al. 2000). In the Upper Paraná River floodplain of Brazil, Rubiaceae is among the most diverse families, including at least 22 species or about 5% of the local phanerogamic flora (Souza et al. 1997).Palicourea crocea (Sw.)Roem.et Schult. is a common heterostylous shrub in the understory of riparian forest of that region (Souza et al. 1997, Souza andSouza 1998).Flowers are visited by hummingbirds (Souza and Souza 1998) and P. 
crocea appears to be one of the few local species displaying floral features related to hummingbird pollination.Hummingbirds are the most specialized nectarivorous birds and represent both the ecologically and numerically dominant group in bird-plant interactions in the Neotropics (Stiles 1981).Considered important components of the Neotropical fauna, hummingbirds visit and pollinate many plant species in Brazil (Mendonça and Anjos 2003). In this paper, we report floral morphology, nectar features, and hummingbird visitation to P. crocea in the Upper Paraná River floodplain (Brazil). The main goals of the current study were to evaluate: (1) morphological components of heterostyly in P. crocea; (2) nectar production and standing crop patterns throughout the day; (3) response of flowers to nectar removal; and (4) behavior and visitation patterns of hummingbirds to flowers. MATERIALS AND METHODS Palicourea crocea bears terminal inflorescences that emerge from the foliage on flexible peduncles and are easily accessible for animals in hovering flight.Flowers are scentless, with yellow to reddish tubular corollas that contrast with the green foliage.The inflorescence branches are also brightly colored, varying from orange to red.Nectar accumulates in the enlarged basal part of the corolla tube and an internal ring of trichomes encloses the nectar chamber, separating this from the anthers and stigma (see also Souza and Souza 1998).Vouchers of Palicourea crocea have been deposited at the Nupélia herbarium -Universidade Estadual de Maringá (HNUP 2453(HNUP -2456)). The study was carried out on Porto Rico island (103 ha; 22 • 45'S and 53 • 15'W), between the States of Paraná and Mato Grosso do Sul, Brazil.The island lies in the Upper Paraná River, a conservation unit [Área de Preservação Ambiental (APA) das Ilhas e Várzeas do Rio Paraná (Environmental Preservation Area)], at an elevation of 230m a.s.l.According to the Köeppen system, the region's climate is classified as Cfa (tropical-subtropical) with an average annual temperature of 22 • C (summer average 26 • C, and winter average 17 • C), and an average annual rainfall of 1500 mm (Eletrosul 1986).The area lies within the phytoecological region of Seasonal Semideciduous Forest (Souza et al. 1997), in the extreme west portion of the Atlantic Forest in Brazil (Simões and Lino 2002).Porto Rico island has been heavily deforested, leaving only 3 small forest fragments which occupy about 6.17 ha.The study was conducted in a remnant of riparian forest.At the study site, P. crocea is especially abundant in areas subject to flooding, where individuals are usu-ally clumped in distribution and sometimes occur in dense patches.A population of P. crocea with more than 100 individuals in a single patch was chosen for observations.Floral traits were observed in the field.Floral measurements (Fig. 
1) were made on fresh or fixed material (70% ethanol). A digital caliper (accuracy to 0.01 mm) was used to measure: (1) corolla length, (2) stigma height (with stigma lobes closed and held vertically), (3) anther height (to tip of anther), (4) stigma lobe length, and (5) anther length. The difference between stigma and anther heights (6) was calculated for each flower as the absolute value of anther height less stigma height (Faivre and McDade 2001). Time and length of anthesis were observed in 15 flowers from four individuals tagged at the bud stage. Cumulative nectar production during the day was assessed on 19 October 2002. Flowers were bagged in mosquito netting at bud stage to prevent visits from animals, and nectar was sampled at two-hour intervals beginning at 0800h and continuing until 1800h. Flowers were sampled destructively, thus different sets of flowers (N = 9-13 flowers) were used in each removal period. We measured nectar volume per flower (in µl) and sugar concentration (% sucrose, wt/total wt) in all samples. The former was obtained by using graduated microliter syringes (Hamilton) and sugar concentration was measured with a hand refractometer (Atago, 0-32%). The amount of sugar produced was denoted in mg per flower after Bolten et al. (1979) and converted to joules assuming that 1 mg of sugar yields 16.8 joules (Dafni 1992). The results of cumulative nectar production yielded data to indicate the maximum amount of nectar that unvisited flowers could produce throughout the day. We did not observe any mites in flowers. Thus, nectar values obtained in the study are likely to represent the actual values of nectar produced. Nectar standing crop, the amount of nectar available to visitors, was evaluated three times a day (0800, 1300, and 1800h) in flowers exposed to foragers (N = 12-16 different flowers per sample). Samples were taken on 18 October and repeated on 24 October.

The response of flowers of P. crocea to nectar removal was evaluated on 29 October. Flowers (N = 10-13) were subjected to one of the following three treatments, simulating legitimate visits by pollinators (see McDade and Weeks 2004b): (1) removal of nectar at 2-h intervals between 0800h and 1800h; (2) removal at 5-h intervals (0800, 1300, and 1800h); and (3) removal of nectar only once, at 1800h (control). For each removal schedule, total nectar production was the sum of volumes removed over the course of the day, whether six times, three times, or once. In all treatments flowers were tagged at bud stage for identification and bagged to prevent visits from animals. Nectar was extracted without removing the flowers from the plant, thus extreme care was taken to avoid damaging the nectaries or other floral structures. The repeated nectar samples also allowed us to observe the pattern of nectar secretion for comparison to the cumulative nectar data.
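The volume and concentration measurements are converted to an energetic value per flower as described above. A minimal sketch of this conversion is given below; the sucrose-solution density relation used in it is an approximation introduced here for illustration, whereas the paper itself follows Bolten et al. (1979) and the 16.8 J per mg figure of Dafni (1992).

```python
# Hedged sketch of the nectar-to-energy conversion; the density formula is an
# assumed linear approximation, not taken from the paper.
def sugar_mg(volume_ul: float, conc_pct: float) -> float:
    """Approximate mg of sugar in a nectar sample of given volume (µl) and
    concentration (% sucrose, wt/total wt)."""
    density = 0.9982 + 0.0045 * conc_pct           # g ml^-1, rough linear fit (assumed)
    return volume_ul * density * conc_pct / 100.0  # µl * g/ml gives mg of solution

def energy_joules(volume_ul: float, conc_pct: float) -> float:
    return 16.8 * sugar_mg(volume_ul, conc_pct)    # 1 mg sugar ~ 16.8 J (Dafni 1992)

# Mean bagged-flower values reported in the Results (14.6 µl at 24.4%):
print(round(energy_joules(14.6, 24.4), 1))         # ~66 J per flower, close to the quoted 66.5 J
```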
Observations were carried out in November 2001 and from October to November 2002 (from 0700-1800h), for a total of 87 hours.Hummingbirds were observed directly or with binoculars and photographed for analyses of their visiting behavior.Identification was based on Grantsau (1988).We recorded hummingbird species, the time birds entered and left the floral patch, the duration of each foraging bout, the number of flowers probed per bout, the way hummingbirds removed the nectar and the height of inflorescences visited.All agonistic interactions observed were also recorded.Visitation rates were defined as the number of visits recorded in relation to the total time of observation, and expressed in bouts per hour. All data were tested a priori for normality (Shapiro-Wilk's test) and homogeneity of variances (Levene's test).Parametric statistics were used whenever possible.Differences in morphological attributes, nectar volume, and nectar concentration between floral morphs of Palicourea crocea were evaluated by t-test.The Chi-square test (χ 2 ) was used to evaluate the proportion of individuals in the studied population with flowers of each morph.Nectar production and standing crops at different times of the day were compared by analyses of variance (one-way ANOVA or Kruskal-Wallis nonparametric ANOVA).The effects of nectar removal on total volume of nectar produced by the sets of flowers submitted to different removal schedules were compared by one-way ANOVA.Differences in rates of hummingbird visitation to P. crocea among time intervals were evaluated using the Chi-square test.The Mann-Whitney U -test was used to compare the duration of each feeding bout and the number of flowers probed per bout by different hum-mingbird species.Hummingbird body mass data were obtained in Grantsau (1988). RESULTS Palicourea crocea flowers are distylous; stigmaanther position divided the plants into two distinct morphs: short-styled (SS) flowers, with a short style and long stamens, and long-styled (LS) flowers, with the complementary arrangement.The studied population had an approximately 1:1 ratio of the morphs (χ 2 = 0.10, df = 1, P = 0.75; N = 40).The stylestamen dimorphism on P. crocea flowers was accompanied by other inter-morph variations in corolla length, anther length, and stigma lobe length (Table I).Short-styled flowers had significantly longer corollas and anthers than LS flowers.Stigma lobes were notably distinct in the two morphs regarding both length and form; SS had straight, longer stigma lobes, whereas LS flowers had curved, shorter stigma lobes.We did not observe strict reciprocity in the position of stigma and anthers between floral morphs; the difference between heights of stigma and anthers within individual flowers was greater for SS than LS flowers (Table I). Anthesis was diurnal and seemed to be synchronous.P. crocea flowers were opened at dawn at which time pollen and nectar were already available.Each individual flower lasted for approximately one day.After flower opening, corollas become progressively more reddish.Next morning, the corollas, now slightly wilted, had fallen from the plant or could be readily dislodged by touch. The main blooming period of P. 
crocea was during the rainy season, from October to December. The peak was in November, when up to 90 percent of the individuals bore buds or developing inflorescences and about 68 percent of them had open flowers (N = 50). However, a few plants flowered at different times, and throughout the year a few individuals could be found in flower (Fig. 2). Each day, one to ten flowers opened per inflorescence and, during the blooming peak, a mean of 51.8 (± 55.2 SD) flowers per plant opened each day. The fruits are green when immature, but turn purplish-black when ripe. The fruiting period started in November and extended until March. In December, about 76 percent of the marked individuals had green fruits and, in February, more than 95 percent of them had ripe fruits.

Cumulative nectar production in bagged flowers of P. crocea during the day is shown in Table II. Most of the daily nectar volume was secreted before 1000h. Mean sugar concentration remained relatively constant throughout the day. By the end of the day, bagged flowers accumulated a mean (± SD) of 14.6 ± 4.2 µl of nectar with a mean sugar concentration of 24.4 ± 1.5%, corresponding to an average daily production of 66.5 joules per flower. Long-styled and short-styled flowers produced similar nectar volumes (t = 0.28, P = 0.78; N = 8 SS and 5 LS) and concentrations (t = 1.40, P = 0.19). Thus, results of all nectar samples for SS and LS flowers are presented together. Average nectar volume per flower did not differ statistically among sets of flowers submitted to different removal schedules (Table III).

In flowers exposed to foraging animals, nectar standing crop was almost 50% less than in bagged flowers, presumably due to consumption by visitors. Three hummingbird species visited the flowers: the Gilded Sapphire (Hylocharis chrysura), the Black-throated Mango (Anthracothorax nigricollis), and the Glittering-bellied Emerald (Chlorostilbon aureoventris). All three species visited flowers legitimately. Hummingbirds made a total of 169 visits in 87 hours of observation. Hylocharis chrysura and A. nigricollis were the most frequent (62.7% and 32.5% of the total observed visits, respectively), whereas C. aureoventris was sporadic, accounting for only 3 percent of visits. In about 1.8 percent of the visits it was not possible to identify the bird to species. Besides hummingbirds, some unidentified robbing bees, as well as diurnal moths and butterflies whose visits to flowers may result in some pollen transfer, were observed feeding at flowers. Hummingbirds visited the observed clump of P. crocea at about two visits per hour. H. chrysura visited the flowers more frequently than A. nigricollis (Table V). Time of day was not related to the number of visits per hour (χ² = 0.93, df = 10, P = 0.999), given that visitation rates were relatively constant throughout the day. While probing for nectar, hummingbirds consistently touched anthers and stigmas with their bills and, due to the existence of LS and SS morphs, we observed that pollen loads were placed on two different portions of the beaks. Nevertheless, we occasionally observed H. chrysura individuals rubbing their bills against branches, which likely removed pollen (10% of its visits; N = 106). This behavior was observed only once in a female A. nigricollis.

Hylocharis chrysura foraged haphazardly at flowers situated at different heights, whereas A. nigricollis most often visited the upper inflorescences; in only seven percent of the observed visits (N = 43) did individuals of A. nigricollis forage on low flowers (< 1 m high). Body mass was related to the number of flowers probed and the time spent per foraging bout. The larger A. nigricollis explored a significantly higher number of flowers per bout than H.
The larger A. nigricollis explored a significantly higher number of flowers per bout than H. chrysura and, likewise, stayed longer on the floral patch (Table V).

After visiting the flowers, hummingbirds either (a) flew away from the floral patch (H. chrysura: 41% of 73 visits; A. nigricollis: 59.5% of 37 visits) or (b) perched in shrubs or trees in the vicinity. In the second case, hummingbirds flew away soon afterwards or visited the flowers again. Only ten agonistic interactions were recorded (0.06 displacements per visit, N = 169), the majority between conspecifics (7 of 10). Anthracothorax nigricollis was the dominant species in interspecific encounters (N = 2). Hylocharis chrysura chased a butterfly once.

No hummingbird species was recorded at the study site other than those visiting P. crocea. In addition to P. crocea, individuals of all three species were observed taking nectar from flowers of Inga vera (Mimosaceae) on the island and in adjacent areas. Hummingbird presence on the island was apparently related to the blooming periods of P. crocea and I. vera. Between May and July, when neither species was in flower, no hummingbirds were recorded at the study site.

We found two distinct classes of anther and stigma height for SS and LS flowers of P. crocea, accompanied by between-morph variation in ancillary features of heterostyly (corolla length, stigma lobe length, anther length). The dimorphism in style and stamen heights recorded for P. crocea, as well as the other morphological differences between SS and LS flowers, has been reported for other members of Palicourea (Sobrevila et al. 1983, Feinsinger and Busby 1987, Ree 1997, Taylor 1997, Contreras and Ornelas 1999) and probably promotes outcrossing (Barrett 1990). Besides the physical separation between anthers and stigma, most distylous species have an intramorph incompatibility system (Feinsinger and Busby 1987, Stone 1996).

Regarding reciprocal positioning of anthers and stigma, P. crocea deviated significantly from the expectation for distylous species; separation between anthers and stigmas was greater for SS flowers than for LS flowers, perhaps due in part to the longer corolla length of SS flowers. Differences in anther/stigma separation between SS and LS flowers have been reported for other Rubiaceae such as Gaertnera vaginata (Pailler and Thompson 1997), Psychotria poeppigiana and P. chiapensis (Faivre and McDade 2001), and Sabicea cinerea (Teixeira and Machado 2004) but, in these cases, separation between anthers and stigma was greater in LS flowers than in SS flowers.

The flowering phenology of P. crocea resembles that of other hummingbird-pollinated species, such as Hamelia patens (Feinsinger 1978) and Barbacenia flava (Sazima 1977), displaying a definite blooming peak but with some flower production throughout the year. Although the main blooming period of the studied population was not long, P. crocea appears to represent an important nectar source for short-billed hummingbirds and other animals in the Upper Paraná River floodplain due to its abundance, numerous flowers and nectar features.

The values of nectar volume and sugar concentration in flowers of P. crocea are within the range of those reported previously for hummingbird-pollinated plants (Opler 1983, McDade and Weeks 2004a). Nectar characteristics did not differ significantly between SS and LS flowers; thus, they are likely to reward pollinators equally.
Plant species studied thus far are variable in their response to nectar removal by foragers. Nectar removal has been reported to stimulate, have a neutral effect on, or reduce nectar secretion (Feinsinger 1978, Gill 1988, Bernardello et al. 1994, 2004, Piovano et al. 1995, Torres and Galetto 1998, Navarro 1999, Freitas and Sazima 2001, Castellanos et al. 2002, Langenberger and Davis 2002, McDade and Weeks 2004b). For P. crocea flowers, removals had no effect on reward volume; flowers submitted to different removal schedules produced similar amounts of nectar. Thus, it is likely that the rate of nectar production by flowers of P. crocea is unaffected by hummingbird visits. For P. crocea (and other species whose flowers do not respond to nectar removal or do not have their floral nectar significantly depleted by flower mites), measurements of nectar accumulation in unvisited bagged flowers provide accurate estimates of the potential energetic value of a flower to hummingbirds.

The patterns of nectar availability (standing crop) are determined by both nectar secretion and animal visitation rates (Torres and Galetto 1998). Perhaps on account of the high number of P. crocea shrubs in the clump and the fact that nectarivores usually visit only a small proportion of the flowers available in large patches (e.g. Goulson 2000), nectar standing crop (although almost 50% less than in bagged flowers) did not vary significantly throughout the day. It is also possible that the observed high variation among flowers sampled at any one hour made it difficult to detect differences.

Based on hummingbird visiting behavior and bill length in relation to flower morphology, all three species are potential pollinators of P. crocea. Taking into account the frequency of visits, H. chrysura and A. nigricollis are the most effective hummingbird pollinators. Hylocharis chrysura individuals were occasionally observed cleaning their bills by rubbing them against a branch, a behavior that could reduce pollen transfer (Ree 1997) and thus the efficiency of pollination by birds of this species.

Nectarivores are sensitive to nectar availability in flowers and can respond to variation in nectar supplies by changing their foraging behavior (e.g. Quirino and Machado 2001). In the present study, the constant nectar standing crop probably allowed hummingbirds to maintain their activity (visitation rates) at P. crocea flowers at the same level throughout the day.

Anthracothorax nigricollis probed considerably more flowers per bout than H. chrysura, perhaps due to its larger mass and energetic requirements. The 24-hour energy cost for an H. chrysura weighing 4 g is estimated to be 34.4 kJ, whereas for an A. nigricollis weighing 7 g it is calculated to be 43.3 kJ (see McMillen and Carpenter 1977). Such values correspond to the energy supplied by 518 and 651 P. crocea flowers, respectively, each producing 66.5 J. However, H. chrysura visited the flowers twice as frequently as A. nigricollis, which resulted in a similar number of flowers probed per day (χ2 = 0.24; df = 1; P = 0.63). Thus, H. chrysura seems to be a pollinator as suitable as A. nigricollis for P. crocea in the study site. Considering the average number of flowers that open per individual each day, hummingbirds would need to visit many shrubs in order to satisfy their energetic demands; movement of birds between shrubs should result in inter-plant pollen flow. This, associated with the occurrence of distyly, may favor outcrossing.
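To make the energy arithmetic above explicit, the sketch below converts the reported nectar measurements into joules per flower and divides each bird's estimated daily cost by that value. The sucrose density and energy coefficients are textbook approximations assumed here, not values taken from the paper.

```python
# Back-of-the-envelope check of the energy figures quoted above.
# Assumed constants (standard approximations, not from the paper):
#   density of a ~24% (w/w) sucrose solution ~ 1.10 g/ml
#   energy content of sucrose                ~ 16.5 J/mg
volume_ul = 14.6          # mean daily nectar volume per flower (from the text)
concentration = 0.244     # mean sugar concentration, w/w (from the text)
solution_density = 1.10   # g/ml, assumed
sucrose_energy = 16.5     # J/mg, assumed

sugar_mg = volume_ul * solution_density * concentration   # mg of sugar per flower
joules_per_flower = sugar_mg * sucrose_energy
print(f"~{joules_per_flower:.1f} J per flower (the paper reports 66.5 J)")

# Daily flower requirement for each hummingbird, using the reported 24-h costs.
for species, daily_cost_kj in [("H. chrysura", 34.4), ("A. nigricollis", 43.3)]:
    flowers = daily_cost_kj * 1000 / 66.5
    print(f"{species}: ~{flowers:.0f} flowers per day")
# -> about 517 and 651 flowers, matching the 518 and 651 quoted in the text
```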
Based on the spatial arrangement, number of flowers per plant, floral morphology and reward, P. crocea could be classified as a clumped moderate flower (sensu Feinsinger and Colwell 1978) that would chiefly attract hummingbirds that are territorialists or territory-parasites (Feinsinger and Colwell 1978). Both H. chrysura and A. nigricollis exhibited territorial behavior, such as perching near the flowers and signaling their presence by vocalizations, visual displays and, on some occasions, aggressive attacks. Agonistic displacements were, however, uncommon, perhaps due to the high abundance of flowers.

On Porto Rico Island, P. crocea was apparently the only plant species with floral traits related to bird pollination, and flower availability was in general low. This could explain the small number of hummingbird species recorded, as well as their absence at certain times of the year. Compared to other Atlantic Forest sites in Brazil (Sazima et al. 1996, Buzato et al. 2000), the richness of ornithophilous species in the Upper Paraná River floodplain appears to be low (pers. obs.), as do hummingbird species richness and abundance (Anjos and Seger 1988, Straube et al. 1996, Gimenes and Anjos 2004).

Besides P. crocea, hummingbirds were observed in the study site visiting only Inga vera, a species that does not display floral traits related to ornithophily but appears to be another important nectar source for hummingbirds. The hummingbird visitation to P. crocea flowers, combined with its nectar features and the low availability of other ornithophilous plants in the study area, suggests that the species is an important resource for short-billed hummingbirds in the study area. Similarly, the activity of these birds at flowers, together with their foraging behavior and morphology, indicates that H. chrysura and A. nigricollis are likely important pollinators of P. crocea.

Key words: bird-plant interactions, heterostyly, pollination, Atlantic Forest, riparian vegetation, conservation.

Flowering and fruiting of P. crocea were recorded for 50 individually marked shrubs every month, from February 2002 to March 2003. Each month, we counted the number of individuals with buds or developing inflorescences, open flowers, immature fruits, and ripe fruits. Flowering and fruiting peaks were defined based on the number of individuals bearing open flowers and fruits, respectively. The number of open flowers per plant was estimated by counts on 23 individuals during the flowering peaks of 2001 and 2002. The ratio of floral morphs in the studied population was evaluated based on 40 of the 50 marked individuals.

TABLE I. Floral dimensions (mm; measurements taken as indicated in Fig. 1) and results of t-test for long-styled (LS) and short-styled (SS) morphs of Palicourea crocea in the Upper Paraná River floodplain. SD: standard deviation.

TABLE II. Cumulative nectar production (volume and concentration) in Palicourea crocea flowers throughout the day in the Upper Paraná River floodplain. SD: standard deviation; CV: coefficient of variation (%). K-W/ANOVA: results of Kruskal-Wallis (χ2) or one-way ANOVA (F). a Sample sizes are the same for joules per flower, volume, and concentration.

TABLE IV. Nectar standing crop in Palicourea crocea flowers in the Upper Paraná River floodplain at different times of day. SD: standard deviation; CV: coefficient of variation (%). K-W: results of Kruskal-Wallis (χ2) comparisons.
a Sample sizes are the same for joules per flower and volume. b Flowers with no nectar were not included in the analysis.
2017-07-07T12:40:36.927Z
2006-03-01T00:00:00.000
{ "year": 2006, "sha1": "bf4193adee60a86a34dddf3bceba6e2484dd963a", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/aabc/a/bPm59sYrCGqfzxgMMtcGHGz/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "aef81a1e4503468c2e5fd389e6e363a086439a66", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
204703197
pes2o/s2orc
v3-fos-license
Genetic factors define CPO and CLO subtypes of nonsyndromicorofacial cleft Nonsyndromic orofacial cleft (NSOFC) is a severe birth defect that occurs early in embryonic development and includes the subtypes cleft palate only (CPO), cleft lip only (CLO) and cleft lip with cleft palate (CLP). Given a lack of specific genetic factor analysis for CPO and CLO, the present study aimed to dissect the landscape of genetic factors underlying the pathogenesis of these two subtypes using 6,986 cases and 10,165 controls. By combining a genome-wide association study (GWAS) for specific subtypes of CPO and CLO, as well as functional gene network and ontology pathway analysis, we identified 18 genes/loci that surpassed genome-wide significance (P < 5 × 10−8) responsible for NSOFC, including nine for CPO, seven for CLO, two for both conditions and four that contribute to the CLP subtype. Among these 18 genes/loci, 14 are novel and identified in this study and 12 contain developmental transcription factors (TFs), suggesting that TFs are the key factors for the pathogenesis of NSOFC subtypes. Interestingly, we observed an opposite effect of the genetic variants in the IRF6 gene for CPO and CLO. Moreover, the gene expression dosage effect of IRF6 with two different alleles at the same single-nucleotide polymorphism (SNP) plays important roles in driving CPO or CLO. In addition, PAX9 is a key TF for CPO. Our findings define subtypes of NSOFC using genetic factors and their functional ontologies and provide a clue to improve their diagnosis and treatment in the future. Introduction Cleft lip and cleft palate are orofacial disruptions of the normal facial structure that can cause problems with feeding, speaking, hearing and social integration among affected individuals [1,2]. Estimates have suggested that orofacial clefts occur in approximately 1 in 700 live births worldwide [2][3][4]. The majority of orofacial clefts lack additional defects in other tissues and are categorized as nonsyndromic cleft lip with or without cleft palate (CL/P) [5], which accounts for 70% of all orofacial clefts cases. CL/P cases include cleft palate only (CPO), cleft lip only (CLO) and cleft lip with cleft palate (CLP) [6]. Asian and Native American ancestry populations generally exhibit the highest birth prevalence rates for nonsyndromic orofacial cleft (NSOFC), whereas European ancestry populations have intermediate prevalence rates, and African ancestry populations have the lowest prevalence rates [7]. The overall prevalence of NSOFC in China is 1.67 per 1,000 newborns, with rates for CPO (2.7), CLO (5.6) and CLP (8.2 per 10,000 newborns) [8]. Both genetic factors and environmental risk factors contribute to the pathogenesis of NSOFC [9]. However, it has been difficult to identify specific etiologic factors for this disorder because the defects arise during early embryological development and because recurrence is both fairly common and unpredictable [1]. Moreover, because cleft lip and cleft palate are highly genetically heterogeneous [10], it is crucial to understand the genetic contributions of facial development in order to improve the clinical care of affected individuals. Genome-wide association studies (GWAS) have led to the discovery of at least 43 genes/loci associated with NSOFC [11][12][13][14][15][16][17][18][19][20], with genetic variants in the region of the IRF6 gene showing the strongest association with nonsyndromic CL/P among different populations [11,16,[21][22][23]. 
Given that the lip and primary palate have distinct developmental origins from the secondary palate, and that CPO, CLO and CLP have different phenotypes, it seems reasonable to hypothesize that these disorders might also harbor different genetic etiologies. However, most previous genetic studies of NSOFC used mixed samples of different subtypes (CL/P) rather than analyzing CPO, CLO or CLP separately. Only recently, a CLP GWAS identified 14 novel loci and suggested that the CPO, CLO and CLP subtypes harbor different genetic etiologies [24]. Therefore, the genetic association signals for specific CPO or CLO subtypes may have been missed in previous studies because the true signals for the specific subtypes could have been diluted by other subtypes. On the other hand, the typical GWAS method has limited power to identify the associated genes with disease. For example, some genes that could be genuinely associated with disease status might not reach a stringent genome-wide significance threshold via typical GWAS [25]. The low signal-to-noise ratio, inherent in the majority of large datasets, presents a major difficulty in the analysis of complex biological systems [26]. Therefore, we combined the typical GWAS method with the gene network and ontology analysis methods to explore the genetic contributions to each NSOFC subtype. Typical GWAS and replications identified nine novel loci responsible for NSOFC To identify the NSOFC susceptibility genes/loci that are specific to CPO or CLO, we genotyped 935 unrelated CPO patients, 948 unrelated CLO patients and 5,050 unrelated control individuals of Southern Han Chinese ancestry using the Illumina HumanOmniZhongHua-8 BeadChip [27], which has 900,015 single-nucleotide polymorphisms (SNPs) in our discovery stage. Sample collection for the cohorts is shown in Table 1 After standard quality-control filtering for the participants and the SNPs (see Methods) and after excluding samples with poor quality and genetic heterogeneity using population stratification analysis, we obtained genotype data for 930 isolated CPO patients, 945 isolated CLO patients and 5,048 control individuals ( Table 2, the discovery cohort). The principal component analysis (PCA) results indicated that the remaining cases and controls were genetically well matched, without evidence of gross population stratification (S3 Fig). The genomic inflation factor (λ GC ) in the discovery cohort was 1.016 for CPO and 1.031 for CLO, suggesting that the association test statistics were not substantially confounded by the population substructure. We performed GWAS analysis using the logistic test with adjustment for sex and PCs (C1, C2, C3 and C4) using PLINK version 1.9 software. To further increase the genome coverage, we performed an imputation analysis to infer the genotypes of additional common SNPs (see Methods). The quantile-quantile (QQ) plots (using R package "qqman") of the association results are shown in S4 Fig. The Manhattan plot (using R package "qqman") of the P values is shown in Fig 1. CPO did not show a strong association signal in the discovery stage. If we used the normal GWAS significance cutoff, such as P < 5 × 10 −8 , we may omit some true association genes at the first replication. The association of CPO would be weak because the previous CPO GWAS analysis based on trios did not discover many CPO main effect genes/ loci. Previous studies suggested that the genetic model of CPO should be composed of many minor effect genes/loci. 
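As an aside on the genomic inflation factors quoted above (λGC = 1.016 for CPO and 1.031 for CLO), the sketch below shows one common way to compute λGC from per-SNP association statistics: the median of the 1-df chi-square statistics divided by its theoretical median (about 0.4549). This is a generic illustration under that assumption, not the exact code used in the study.

```python
# Genomic inflation factor (lambda_GC) from per-SNP association P values.
# lambda_GC close to 1 indicates little confounding by population substructure;
# values such as 1.016 (CPO) and 1.031 (CLO) are generally considered acceptable.
import numpy as np
from scipy.stats import chi2

def lambda_gc(p_values):
    """Median 1-df chi-square statistic divided by its expected median (~0.4549)."""
    stats = chi2.isf(np.asarray(p_values), df=1)   # convert P values to chi-square statistics
    return np.median(stats) / chi2.ppf(0.5, df=1)

# Toy example with uniformly distributed (null) P values.
rng = np.random.default_rng(0)
print(round(lambda_gc(rng.uniform(size=100_000)), 3))   # close to 1.0 under the null
```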
Therefore, we used the P < 9 × 10−7 cutoff in the discovery stage for SNP selection for the first-round replication.

Table 2. Genes/loci identified for NSOFC by typical GWAS. Results of typical GWAS for SNPs significant at a multiple-testing correction level (P < 5 × 10−8).

A total of 22 loci showed evidence of a significant association with CPO or CLO (i.e., surpassing P < 9 × 10−7) at the GWAS discovery stage. All the association results for CPO and CLO adjusted for sex and PCs (PC1, PC2, PC3 and PC4) with P values less than e-5 in the discovery stage (935 CPO patients, 948 CLO patients and 5,050 control individuals) are listed in S1 Data. Additionally, 22 previously reported loci, including 32 SNPs responsible for CL/P, CPO or CLP, also showed marginal association (P < 0.05) with CPO or CLO (S1 Table). Among these loci, two reported CPO SNPs, rs61776460 in 1p36.11 (GRHL3, P = 9.31 × 10−3) and rs604328 in 5p13.2 (UGT3A2, P = 3.15 × 10−3), are weakly associated with CPO in this study [28]. For the four CPO SNPs reported by Butali A et al. (rs80004662 and rs113691307 in CTNNA2, rs62529857 in SULT2A1 and rs2325377 in DACH1) [29], we could not find these SNPs on our chips. Two reported CLP SNPs, rs908822 in 4q28.1 (LOC285419, P = 2.34 × 10−3) and rs13317 in 8p11.23 (BAG4/FGFR1, P = 4.21 × 10−3) [24], are weakly associated with CPO in this study. Two reported CL/P SNPs, rs987525 in 8q24.21 (AC068570.1, P = 5.53 × 10−3) [11,13] [13,16] and rs60417080 in 13q31.1 (RP11-501G7.1, P = 8.83 × 10−3) [15,16,18], are also weakly associated with CPO in this study. Reported CLP SNPs rs560426 and rs66515264 in 1p22.1-21.3 (ARHGAP29, P = 1.79 × 10−3 and P = 4.83 × 10−3, respectively) [28], as well as rs1034832 in 8q21.3 (DCAF4L2/CTB-118P15.2, P = 2.60 × 10−5) [24], are weakly associated with CLO. In the same locus, rs12543318, which was reported to be associated with CL/P, is also weakly associated with CLO (P = 3.79 × 10−5) [15,16]. Three more reported CL/P SNPs, rs8049367 in 16p13.3 (RP11-462G12.2, P = 2.95 × 10−4) [19], rs4791774 in 17p13.1 (NTN1, P = 5.87 × 10−5) [19] and rs227731 in 17q22 (NOG, P = 4.35 × 10−4) [15,16,18], showed weak association with CLO in this study.

In order to replicate the associations that arose from our discovery cohort, we first selected 48 SNPs in the 22 discovered loci mentioned above for replication assays (see Methods, S2 Table) among 724 CPO patients, 781 CLO patients and 3,265 control individuals (the Southern Chinese replication cohort). Thirty-two SNPs in 15 loci maintained statistically significant associations with CPO or CLO (P < 0.05). We then genotyped these 32 SNPs in a secondary replication cohort of Northern Chinese ethnicity, comprising 417 unrelated CPO patients, 492 unrelated CLO patients and 1,832 unrelated normal control individuals. Furthermore, to assess whether these 13 CPO- or CLO-associated genes/loci were also associated with CLP, we genotyped the 24 SNPs in the 13 genes/loci in an independent cohort consisting of 2,270 unrelated sporadic cases of CLP and 3,265 control individuals of the Southern Han Chinese population, along with 427 CLP patients and 1,832 control individuals of the Northern Han Chinese population (Table 2). However, we found that only the four genes (IRF6, MYCN, VAX1 and MAFB) that were previously reported to be associated with CL/P were significantly associated with CLP (P < 5 × 10−8).
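The combined P values reported here come from meta-analysis of the discovery and replication cohorts with PLINK; the sketch below illustrates the underlying idea with a standard fixed-effect, inverse-variance meta-analysis of per-cohort log odds ratios. The ORs and standard errors are placeholders, not values from the study.

```python
# Fixed-effect (inverse-variance) meta-analysis of per-cohort log odds ratios,
# the standard model behind GWAS meta-analysis tools. Values are placeholders.
import math
from scipy.stats import norm

cohorts = [  # (odds ratio, standard error of log OR) -- made-up numbers
    (1.30, 0.05),   # discovery
    (1.25, 0.08),   # replication 1
    (1.35, 0.10),   # replication 2
]

weights = [1 / se**2 for _, se in cohorts]
pooled_beta = sum(w * math.log(or_) for (or_, _), w in zip(cohorts, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
z = pooled_beta / pooled_se
p = 2 * norm.sf(abs(z))

print(f"pooled OR = {math.exp(pooled_beta):.3f}, P = {p:.2e}")
```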
These results demonstrate that the novel genes can only be identified by association studies of the specific NSOFC subtypes, and that the previously identified genetic factors for CL/P are closer to the genetic factors for CLO and CLP than to those for CPO.

Fig 2. Regional association for the nine newly discovered loci by typical GWAS. Regional association plots indicate the −log10 P values of the genotyped SNPs of each locus. The sequence data were aligned to human hg19. The Y axis represents the negative logarithm (base 10) of the SNP P value and the X axis represents the position on the chromosome, with the name and location of genes in the UCSC Genome Browser shown in the bottom panel. The SNP with the lowest meta-analyzed P value in the region is marked by a purple star. The colors of the other SNPs indicate the r2 of these SNPs with the lead SNP. Plots were generated with LocusZoom using the hg19/1000 Genomes build LD for the ASN population (2014). https://doi.org/10.1371/journal.pgen.1008357.g002

Expression dosage effect of IRF6 with different alleles associated with CPO or CLO

Consistent with previous findings [11,16,[21][22][23]], we also confirmed that the IRF6 gene had the strongest association with both CPO and CLO, supported by the 537 statistically significant SNPs in this region (P < 9 × 10−7) that were identified in the discovery stage (Fig 3A, S3 Table). After the meta-analysis of the results from the discovery and replication cohorts, the IRF6 gene region showed the most significantly associated SNPs with either CPO or CLO (Table 2B, Fig 1). However, the contributions of IRF6 to CPO and CLO were distinctly different based on the following findings: 1) the two alleles of the same associated SNP in IRF6 showed an opposite direction of association between CPO and CLO, and 2) the SNPs in IRF6 showed a stronger signal of association with CLO in comparison to CPO. For example, the T allele of rs72741048 had P = 3.07 × 10−15 and odds ratio (OR) = 1.314 for CPO, while it had P = 8.22 × 10−40 and OR = 0.575 for CLO (Fig 1, Table 2).

Convergent evidence from CPO and CLO points to the IRF6 gene as a critical factor for the pathogenesis of NSOFC. Most of the SNPs associated with CPO and CLO are located in the 5'UTR and intronic regions of IRF6, which contain enrichment signals of active transcription start sites, transcription, enhancers and ChIP-seq chromatin profiling signals (S4 Table, S5 Fig). This suggests that these regulatory elements might control IRF6 gene expression. We next sought to determine whether the associated variants for CPO and CLO can affect the levels of IRF6 gene expression in human CPO or CLO disease tissues. IRF6 gene expression in palatine uvula mucosa from 64 patients with CPO and in the edge of the upper lip cleft from 49 patients with CLO was assessed, separately, using real-time polymerase chain reaction (PCR). For example, at the SNP rs72741048, T was a risk allele for CPO, but it was a protective allele for CLO. We also found that IRF6 was down-regulated 1.6 times in the TT genotype in comparison to the AA genotype in CPO, but it was down-regulated 4.1 times in the TT genotype in comparison to the AA genotype in CLO (Fig 3B).
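The fold-change figures quoted above and below come from real-time PCR. The paper does not spell out the calculation, but qPCR fold changes are conventionally derived with the 2^(−ΔΔCt) method, sketched below with made-up Ct values and a hypothetical reference gene; this is an assumption, not necessarily the authors' exact pipeline.

```python
# Relative expression by the 2^(-ddCt) method (Livak and Schmittgen 2001).
# Ct values and the reference gene are made up for illustration only.
mean_ct = {
    # genotype: (IRF6 Ct, reference gene Ct)
    "AA": (24.1, 18.0),
    "TT": (25.0, 18.2),
}

def fold_change(sample, reference="AA"):
    d_ct_sample = mean_ct[sample][0] - mean_ct[sample][1]   # normalize to the reference gene
    d_ct_ref = mean_ct[reference][0] - mean_ct[reference][1]
    return 2 ** -(d_ct_sample - d_ct_ref)                    # 2^(-ddCt)

print(f"IRF6 in TT relative to AA: {fold_change('TT'):.2f}x")
# A value below 1 (here ~0.62x, i.e. ~1.6-fold down) corresponds to the kind of
# down-regulation in TT versus AA genotypes reported for CPO tissue.
```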
In addition, IRF6 was down-regulated about 2.0 times in the TT genotype in comparison to the AA genotype in most normal tissues (Genotype-Tissue Expression (GTEx) data; see the list of URLs), suggesting that a relatively low expression level of IRF6 is a risk for CPO but is protective for CLO in the affected tissues. The genotype-specific expression patterns were also confirmed at the protein level by immunohistochemistry in the 31 patient-derived tissue samples (Fig 3C). It should be mentioned that the expression of IRF6 was higher in the edge of the upper lip cleft tissue (the main area affected in the CLO condition) than in the uvula tissue (the main area affected in the CPO condition) under normal conditions (Fig 3C). Although the sample size was limited, these test results suggest that the different CPO and CLO phenotypes are partially associated with dosage imbalances in the gene expression of IRF6 in the disease-related tissues.

Five genes/loci were identified by gene network and ontology analysis, and further replications

Our typical GWAS results revealed the importance of developmental transcription factors (TFs) in the regulation of the disease direction (CPO versus CLO), as evidenced by the association signals found near the following TFs: WHSC1, PAX9, FOXC2, IRF6, MYCN, VAX1 and MAFB. Given the limited sample size in the genetic study, some true disease genes could be missed by the typical GWAS analysis due to insufficient power. To explore more potential associated genes for CPO or CLO in our data, we conducted the following five steps for further analysis:

1. We compiled a first-round set of NSOFC candidate genes from published references [1,2,5], the GWAS Catalog, the Human Gene Mutation Database, Phenolyzer [30] and the Human Phenotype Ontology (see URLs; see Methods).

2. We obtained a second-round set of candidate NSOFC genes using GeneMANIA network analysis [31], together with ontology and pathway analysis in the Database for Annotation, Visualization and Integrated Discovery (DAVID) [32], with the first-round candidate NSOFC genes as queries to obtain additional functionally related genes. A total of 243 genes were enriched either by interaction with the genetic factors or in the same ontology as the candidate genes for NSOFC. The gene interaction and ontology analysis of the enriched genes are shown in S5 Table, S6 Fig and S7 Fig.

3. We next explored whether these candidate genes were associated with CPO or CLO, using our GWAS datasets. In addition to the association genes/loci identified in our typical GWAS analysis, as described in the first part of our results, we also found an additional 44

5. We conducted the final meta-analysis of the discovery and replication cohorts.

Five novel genes/loci surpassed genome-wide statistical significance (P < 5 × 10−8) in the final meta-analysis of the discovery and replication cohorts (Table 3), with three for CPO and two for CLO. For CPO: rs72688980 in 4q32.1, between CTSO and PDGFC (P = 1.89 × 10−8).

Table 3. Genes/loci for NSOFC identified by gene network and ontology analysis and further replications. CPO, CLO or CLP associated SNPs selected by gene network and ontology and replication analysis that reached significance at a multiple-testing correction level (P < 5 × 10−8) in CPO or CLO by meta-analysis of discovery and replication results.
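The ontology and pathway enrichment in step 2 above is an over-representation test. DAVID itself reports a modified Fisher exact statistic (the EASE score); the sketch below illustrates the underlying idea with a plain hypergeometric tail probability and made-up gene counts, so it approximates rather than reproduces the tool's output.

```python
# Over-representation (enrichment) test for a candidate gene list in one ontology term,
# using a hypergeometric tail probability. Counts other than the 243-gene candidate
# set size are made up for illustration.
from scipy.stats import hypergeom

background_genes = 20000      # genes in the annotation background (assumed)
term_genes = 150              # background genes annotated to the term (made up)
candidate_genes = 243         # size of the NSOFC-related candidate set (from the text)
overlap = 8                   # candidate genes annotated to the term (made up)

# P(X >= overlap) when drawing `candidate_genes` genes without replacement.
p_enrich = hypergeom.sf(overlap - 1, background_genes, term_genes, candidate_genes)
print(f"enrichment P = {p_enrich:.3g}")
```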
Discussion

In this study, we showed the advantage of using GWAS combined with gene network and ontology analysis to identify genetic factors for NSOFC subtypes. We identified 13 genes/loci for NSOFC by using the typical GWAS method, which followed discovery (P < 9 × 10−7) and replication steps to obtain the significant genes/loci (P < 5 × 10−8 for the combined results of discovery and replications). We also identified five additional genes/loci for NSOFC (P < 5 × 10−8 for the combined discovery and replication cohort results), which might be missed by typical GWAS analysis, by the combined method. In the latter method, we applied a combination strategy to mine the possible true disease genes by using network and ontology analysis plus further genotype validation. In all, we identified 11 genes/loci for CPO, 10 of which were novel. We also identified nine genes/loci for CLO, five of which were novel. Among these 18 genes (14 of which were novel), IRF6 and DLK1 (novel) were associated with both CPO and CLO; IRF6 was associated with CLP; and MYCN, VAX1 and MAFB were associated with CLO and CLP. By comparing the P values and ORs of the CPO, CLO and CLP subtypes, we found that the genetic pattern of CLO is more similar to that of CLP than to that of CPO (Fig 4A and 4B). This phenomenon is consistent with a recent study by Carlson JC et al., which showed more significant association signals in the CLP vs. CP group than in the CL vs. CLP group [28]. Besides the IRF6 locus, CLO also shared the VAX1, MAFB and MYCN loci. This finding is consistent with a previous CLP GWAS showing that CLP and CLO share more genetic factors [24].

Our results also revealed the importance of developmental TFs in the pathogenesis of these three NSOFC subtypes (Fig 4). We identified 12 genes/loci containing TFs that contribute to CL/P. These TFs include seven families: We suggested that these groups of TFs and their target genes function in a coordinated manner to direct palate and lip tissue specialization during embryonic development and intermittently in response to external signals. Besides these TFs, Carlson JC et al. also reported that the validated TFs PAX7 (1p36.13) and GRHL3 (1p36.11), as well as the transcriptional corepressor TLE1 (9q21.32), were associated with the CLP vs. CP group [28].

Using gene network analysis, we found that a total of 243 genes were either enriched by interaction with the genetic factors or in the same ontology as the candidate genes for NSOFC, with different biological functions. Although we confirmed only five genes that surpassed genome-wide statistical significance in the final meta-analysis of the discovery and replication cohorts, we could not exclude the rest of the genes as candidates for NSOFC for the following reasons: (1) the power might be insufficient to achieve the genome-wide statistical significance level due to the limited number of samples in this study, (2) genetic contributions responsible for NSOFC may have been missed in the GWAS because of the limited regions of the genome covered by the chip design and (3) their pathogenic roles in NSOFC might lie at the level of functional modulation, through interaction with the genetic factors, rather than at the level of genetic variation effects.

Strikingly, the directions of the associated SNPs in the IRF6 gene with CPO and CLO were opposite, suggesting that these two subtypes have different pathogenesis with IRF6, probably by regulating its expression via the associated SNPs.
For example, for one of the leading SNP rs72741048 in this region, OR _T for CPO was 1.314 suggesting its risk effect, while the OR _T for CLO was 0.575 suggesting its protective effect. IRF6 belongs to a family of transcription activators that share a highly conserved, helix-turn-helix, DNA-binding domain. The AP-2α enhancer was previously reported to be associated with CLO through binding with the rs642961 site in the intron of IRF6 to regulate its expression [33]. It is likely that the expression of IRF6 is precisely controlled by the coordination of multiple regulatory elements located in the associated SNPs in the gene region [34,35]. The phenotypes of IRF6 mouse models further suggest that the gene dosage balance might be critical for palate and lip development. IRF6null mouse embryos showed oral cavity adhesion [36], implying that the normal palate development was impaired without IRF6. However, approximately 22.0% of IRF6 transgenic mice exhibit an absence of calvaria, but they retain normal palatal shelf fusion. In contrast, 2.7% of these mice had a cleft lip [37], which supports the high expression level of IRF6 as a risk for CLO. Further investigations are needed to dissect the pathogenesis of the dosage effect of IRF6 for the CPO and CLO. We searched the GWAS Catalog and found five studies based on CPO-related GWAS [12,15,[38][39][40]. These studies did not discover IRF6 locus in CPO perhaps because of 1) a small sample size in the discovery stage and 2) a large degree of population mixture in the discovery stage. From our results, we can see that the power of IRF6 locus in CPO is much lower than that in CLO, so it may be beyond the cutoff threshold of power for GWAS discovery in the small and mixed discovery samples used in the preceding CPO GWAS. However, at least three studies suggested some clues to the association of IRF6 locus with CPO or suggested the opposite OR directions in the SNPs of IRF6 locus in CLO versus CPO. In 2008, Rahimov F. et al. reported that the rs2235371 and rs642961 haplotypes located in the IRF6 region are associated with CL/P. They further validated them in CLO and CPO. These results suggest opposite OR directions in CLO versus CPO in most of the study populations (Norway, Denmark, EURO-CRAN, Europe and Philippines) [33]. In a 2016 Chinese GWAS of nonsyndromic CLP in the discovery stage, the authors identified that IRF6 locus was associated with CLP. Further, they replicated rs861020 in the intron of IRF6 in the replication cohorts of CLP, CLO and CPO. They found that the A allele of this SNP is significantly associated with CLP versus CLO (OR = 0.72, P = 2.05 × 10 −9 ) or CLP versus CPO (OR = 1.51, P = 8.69 × 10 −11 ), suggesting opposite OR directions in these two diseases [24]. Another study reported that rs2235375 in the intron of IRF6 was associated with CPO but not with CLO and CLP in a South Indian population [41]. After the IRF6 gene locus, the PAX9 locus in 14q13.3 was the second strongest associated locus for CPO. In the PAX9 locus, the associated SNPs are located in the intron of the SLA25A gene,~100kb downstream of PAX9. A previous study indicated that mice with a deleted SLA25A gene presented obvious phenotypes of CPO via reduction of PAX9 expression [42]. PAX9 encodes a key TF that was reported to play a role in organs derived from neural crest mesenchyme [43]. PAX9 was required for secondary palate development in mice [44][45][46]. 
The absence of teeth and the formation of a cleft secondary palate in PAX9-deficient mice have been reported [45]. Furthermore, mutations in PAX9 can cause tooth agenesis in humans [47]. Therefore, it is likely that the CPO associated genetic variants decreased the SLA25A expression and further down-regulated PAX9 expression to associate with the disease. This needs to be addressed in future work. In a recent GWAS, Yu et al. found in a Chinese population 14 loci based on the nonsyndromic CLP in the GWAS discovery stage, but not CPO or CLO [24]. In the current study, we used CPO and CLO in the discovery stage to find new loci for each group. This is very different at the beginning of the study design from Yu et al.'s study. Because the phenotype CLP is different from CPO or CLO, the genetic factors of these phenotypes may be different; that is the question we raised in this paper. We found that the main effect loci, such as IRF6, MSX1, VAX1 and MAFB, are shared by CLP and CLO. We think the difference between CPO, CLO and CLP are caused by minor effect multi-genetic factors rather than heterogeneity among populations. In summary, our study advanced current understanding of the genetic architectures of CPO, CLO and CLP. These findings defined the NSOFC subtypes using genetic factors and their functional ontologies. They also provide a clue to improving a diagnosis and treatment of these conditions in the future. However, the current understanding of the biology of these processes in humans remains largely unknown, and it is expected to be complicated. Further functional studies of the genes for NSOFC identified in this study should be conducted to promote drug development and novel therapeutic approaches to treat the disorder. Subjects All of the CPO, CLO and CLP cases were nonsyndromic. The CPO cases included the complete cleft palate (the hard and soft cleft palate) and the soft cleft palate. The diagnoses, which were made by professional maxillofacial doctors before surgery, were based on a series of tests, including electrocardiogram, radiography, biochemical test, physical examination, speech evaluation, ultrasonic test and genetic counseling as necessary. Only those controls who had no family history of congenital disease were included in this study. All of these evaluations were done by at least three doctors, including a surgeon, a speech clinician and a geneticist. A three-stage GWAS for CPO and CLO was conducted, with further replications for CLP. The discovery stage involved 935 CPO patients, 948 CLO patients and 5,050 control individuals (cohort 1). The first replication study was performed among an additional 724 unrelated CPO cases, 781 CLO cases and 3,265 controls (cohort 2). The second replication study was performed among an additional 417 unrelated CPO cases, 492 CLO cases and 1,832 controls (replication cohort: Northern Han Chinese, cohort 3). The first replication for CLP involved 2,270 CLP cases and 3,265 controls (replication cohort: Southern Han Chinese, cohort 4). The second replication study for CLP was performed among an additional 427 unrelated CLP cases and 1,832 controls (replication cohort: Northern Han Chinese, cohort 5). The same controls were used for CPO, CLO and CLP. No obvious geographic areas or genetic differences occurred in this study. In the discovery stage, the CPO and CLO samples were collected by the same team in the same hospital of Southern Han Chinese people (West China Hospital of Stomatology, Chengdu). 
The control samples were also collected in the same city of Southern Han Chinese people (Sichuan Provincial People's Hospital, Chengdu). The same controls were used for CPO and CLO. For the CPO, CLO and control samples, we used the same chip for genotyping (HumanOmniZhongHua-8 BeadChip, Illumina). In the replication studies, we enrolled the CPO, CLO, CLP and control samples in the same hospitals (CPO, CLO, CLP and controls were enrolled together). Ethics statement The study was approved by the institutional ethics committee of West China Hospital of Stomatology of Sichuan University and Sichuan Provincial People's Hospital and was conducted according to the Declaration of Helsinki principles [48]. All controls were healthy individuals without NSOFC or family history of NSOFC (including first-, second-and third-degree relatives). Written informed consent was obtained from all the participants or their guardians. Approximately 4 ml of venous blood was collected from each participant and placed in a tube containing ethylenediaminetetraacetic acid (EDTA) as the anticoagulant. Genomic DNA was extracted from peripheral blood lymphocytes using the standard sodium dodecylsulfate (SDS)-proteinase K-phenol/chloroform method. Genotyping and quality control in the GWAS The discovery cohort DNA samples were genotyped by Jinneng Biotech (Shanghai, China) using HumanOmniZhongHua-8 BeadChip (Illumina), according to the manufacturer's protocol, with a starting number of 900,015 SNPs. Any SNPs with call rates of less than 90% were removed from further analysis. SNPs located on the X and Y chromosomes, mitochondrial SNPs, and copy number variant probes were removed from further analysis, in keeping with current GWAS practices. After quality filtering and cleaning, 870,261 SNPs remained in the association analysis for CPO and CLO. Full details of the experimental workflow are provided in S2 Fig. Sex and PCs (PC1, PC2, PC3 and PC4) were adjusted as covariates in the logistic model in PLINK. Close relatives among the participants were calculated using genome-wide IBS/BD among all the samples using PLINK (-genome). We found that z0 was very close to 1 and z1 was very close to 0 in all the samples. Also, there are no related people (IBD pihat > 0.2) within these samples. Association analysis After chip genotyping, PCA was performed for both CPO and CLO separately to remove samples with outlying samples from further analysis using the R statistical software package (see URLs). A total of 930 CPO cases, 945 CLO cases and 5,048 control individuals in the discovery cohort passed quality control for the GWAS discovery stage. Next, we examined potential genetic relatedness on the basis of pairwise identity by state for all of the successfully genotyped samples using PLINK version 1.9 software. The genomic inflation estimate (λGC) was calculated for variants with MAF > 1% using only directly genotyped SNPs using PLINK 1.9 (see URLs). Single-marker association analyses were performed using PLINK 1.9 adjusted for sex and MDS (PC1, PC2, PC3 and PC4) as covariates with SNPs showed missing values < 10%, MAF > 1% and HWE P > 10 −6 . Genotype imputation Genotypes were converted to PLINK binary format, and SNPs with missing values > 10%, MAF < 1% and HWE P < 10 −6 for phasing were excluded (see URLs). The clean data were then phased using SHAPEIT2 [49] (see URLs). 
After that, the dataset was imputed with the 1000 Genomes phase 1 (version 3) CHB (Han Chinese) and CHS (Southern Han Chinese) reference panel (hg19) using Minimac3 [50] (see URLs), keeping variants with imputation r2 > 0.6. The association analysis of the imputed dosage data was calculated using PLINK version 2.0, with sex and MDS components (PC1, PC2, PC3 and PC4) as covariates.

SNP selection for replication studies

SNPs showing an association with CPO or CLO exceeding P ≤ 9 × 10−7 in the GWAS discovery stage were included in the replication stage and analyzed in a similar manner to the discovery stage. In total, 48 SNPs were selected for replication analysis.

Genotyping and quality control in the replication studies

Genotyping of the SNPs selected for the replication studies was conducted using the Sequenom MassARRAY system, as previously described [27]. The association analysis of the replication genotype data was conducted using PLINK 1.9, adjusted for sex.

Meta-analysis

PLINK 1.9 was used to perform combined meta-analyses of the GWAS discovery and replication data sets for CPO and CLO. The two CLP replication datasets were also combined using the PLINK 1.9 meta-analysis method.

Manhattan plots, QQ plots and LocusZoom plots

Manhattan plots were generated with the "Q-Q and Manhattan Plots for GWAS Data" R package (qqman) using SNPs with imputed P values less than 0.05 (see URLs). QQ plots were generated with the same package using all the directly genotyped SNPs (see URLs). LocusZoom plots were generated online from LocusZoom (hg19, November 2014, ASN population) using SNPs with P values less than 0.05 (see URLs).

Epigenetic annotation

Epigenomic annotation of genetic variants for 31 tissues was performed using the Roadmap Epigenome Browser, which is based on the WashU (Washington University) Epigenome Browser and integrates data from both the NIH (National Institutes of Health) Roadmap Epigenomics Consortium and ENCODE (Encyclopedia of DNA Elements) in a visualization [51] (see URLs).

HI-C data browser

Hi-C contact matrices were visualized as heatmaps using the 3D Genome Browser [52] (see URLs). The TAD dataset of normal human epidermal keratinocytes and the juvenile foreskin primary cell Hi-C data were used. The hg19 SNP region of the regional association maps was used as the input region.

Gene expression

Human tissue samples were obtained from CPO and CLO patients during surgical cleft repair. Tissues were collected from the rim of the uvula of CPO patients and from the edge of the upper lip cleft of CLO patients. According to the principles of interdisciplinary team care for cleft lip and palate, the patient's age at operation is usually between three and six months for CLO patients and between one and two years for CPO patients. We collected 211 tissue samples (105 palate tissues, 106 lip tissues) for gene expression analysis. Because the patients were young at the time of surgery and the lesion area is relatively limited, the size of each sample we collected was very small (about 3 mm × 4 mm × 2 mm). In almost all cases, only one tissue sample was collected from each patient. Written informed consent was obtained from all guardians on behalf of the patients. All tissues were stored in liquid nitrogen immediately after incision and then transferred to -80˚C for storage.
2019-10-16T13:01:38.647Z
2019-10-01T00:00:00.000
{ "year": 2019, "sha1": "512f358b0673bc8a0f8ca815b2abe165ca59df35", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1008357&type=printable", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "30ca6f78eff19530102b784b87fef51d9f87e92b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }